"
labels: ["bug", "needs triage"]
body:
- type: checkboxes
attributes:
label: Is this a support request?
description: This issue tracker is for bugs and feature requests only. If you need
help, please ask in our Discord community.
options:
- label: This is not a support request
required: true
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the bug you
encountered.
options:
- label: I have searched the existing issues
required: true
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: A concise description of what you expected to happen.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
1. With this config...
1. Run '...'
1. See error...
validations:
required: true
- type: textarea
attributes:
label: Environment
description: |
Please provide information about your environment.
If you are using a container, always provide the headscale version and not only the Docker image version.
Please do not put "latest".
Describe your "headscale network". Is there a lot of nodes, are the nodes all interconnected, are some subnet routers?
If you are experiencing a problem during an upgrade, please provide the versions of the old and new versions of Headscale and Tailscale.
examples:
- **OS**: Ubuntu 24.04
- **Headscale version**: 0.24.3
- **Tailscale version**: 1.80.0
- **Number of nodes**: 20
value: |
- OS:
- Headscale version:
- Tailscale version:
render: markdown
validations:
required: true
- type: checkboxes
attributes:
label: Runtime environment
options:
- label: Headscale is behind a (reverse) proxy
required: false
- label: Headscale runs in a container
required: false
- type: textarea
attributes:
label: Debug information
description: |
Please have a look at our [Debugging and troubleshooting
guide](https://headscale.net/development/ref/debug/) to learn about
common debugging techniques.
Links? References? Anything that will give us more context about the issue you are encountering.
If **any** of these are omitted, we will likely close your issue; do **not** ignore them.
- Client netmap dump (see below)
- Policy configuration
- Headscale configuration
- Headscale log (with `trace` enabled)
Dump the netmap of tailscale clients:
`tailscale debug netmap > DESCRIPTIVE_NAME.json`
Dump the status of tailscale clients:
`tailscale status --json > DESCRIPTIVE_NAME.json`
Get the logs of a Tailscale client that is not working as expected:
`tailscale debug daemon-logs`
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
**Ensure** you use proper Markdown formatting for any files you attach.
Do **not** paste in long files.
validations:
required: true
================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
# Issues must have some content
blank_issues_enabled: false
# Contact links
contact_links:
- name: "headscale Discord community"
url: "https://discord.gg/c84AZQhmpx"
about: "Please ask and answer questions about usage of headscale here."
- name: "headscale usage documentation"
url: "https://headscale.net/"
about: "Find documentation about how to configure and run headscale."
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.yaml
================================================
name: 🚀 Feature Request
description: Suggest an idea for Headscale
title: "[Feature] "
labels: [enhancement]
body:
- type: textarea
attributes:
label: Use case
description: Please describe the use case for this feature.
placeholder: |
validations:
required: true
- type: textarea
attributes:
label: Description
description: A clear and precise description of what new or changed feature you want.
validations:
required: true
- type: checkboxes
attributes:
label: Contribution
description: Are you willing to contribute to the implementation of this feature?
options:
- label: I can write the design doc for this feature
required: false
- label: I can contribute this feature
required: false
- type: textarea
attributes:
label: How can it be implemented?
description: Free text for your ideas on how this feature could be implemented.
validations:
required: false
================================================
FILE: .github/label-response/needs-more-info.md
================================================
Thank you for taking the time to report this issue.
To help us investigate and resolve this, we need more information. Please provide the following:
> [!TIP]
> Most issues turn out to be configuration errors rather than bugs. We encourage you to discuss your problem in our [Discord community](https://discord.gg/c84AZQhmpx) **before** opening an issue. The community can often help identify misconfigurations quickly, saving everyone time.
## Required Information
### Environment Details
- **Headscale version**: (run `headscale version`)
- **Tailscale client version**: (run `tailscale version`)
- **Operating System**: (e.g., Ubuntu 24.04, macOS 14, Windows 11)
- **Deployment method**: (binary, Docker, Kubernetes, etc.)
- **Reverse proxy**: (if applicable: nginx, Traefik, Caddy, etc. - include configuration)
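
Most of these details can be collected directly on the affected machine, for example:

```bash
# Gather version and OS details for the report
headscale version        # Headscale server version
tailscale version        # Tailscale client version
cat /etc/os-release      # OS name and version (most Linux distributions)
```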
### Debug Information
Please follow our [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/) and provide:
1. **Client netmap dump** (from affected Tailscale client):
```bash
tailscale debug netmap > netmap.json
```
2. **Client status dump** (from affected Tailscale client):
```bash
tailscale status --json > status.json
```
3. **Tailscale client logs** (if experiencing client issues):
```bash
tailscale debug daemon-logs
```
> [!IMPORTANT]
> We need logs from **multiple nodes** to understand the full picture:
>
> - The node(s) initiating connections
> - The node(s) being connected to
>
> Without logs from both sides, we cannot diagnose connectivity issues.
4. **Headscale server logs** with `log.level: trace` enabled
5. **Headscale configuration** (with sensitive values redacted - see rules below)
6. **ACL/Policy configuration** (if using ACLs)
7. **Proxy/Docker configuration** (if applicable - nginx.conf, docker-compose.yml, Traefik config, etc.)
## Formatting Requirements
- **Attach long files** - Do not paste large logs or configurations inline. Use GitHub file attachments or GitHub Gists.
- **Use proper Markdown** - Format code blocks, logs, and configurations with appropriate syntax highlighting.
- **Structure your response** - Use the headings above to organize your information clearly.
## Redaction Rules
> [!CAUTION]
> **Replace, do not remove.** Removing information makes debugging impossible.
When redacting sensitive information:
- ✅ **Replace consistently** - If you change `alice@company.com` to `user1@example.com`, use `user1@example.com` everywhere (logs, config, policy, etc.)
- ✅ **Use meaningful placeholders** - `user1@example.com`, `bob@example.com`, `my-secret-key` are acceptable
- ❌ **Never remove information** - Gaps in data prevent us from correlating events across logs
- ❌ **Never redact IP addresses** - We need the actual IPs to trace network paths and identify issues
**If redaction rules are not followed, we will be unable to debug the issue and will have to close it.**
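
For example, a hypothetical log line redacted correctly keeps the structure and the IPs intact:

```
before: node alice@company.com (100.64.0.3) cannot reach bob@company.com (100.64.0.7)
after:  node user1@example.com (100.64.0.3) cannot reach user2@example.com (100.64.0.7)
```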
---
**Note:** This issue will be automatically closed in 3 days if no additional information is provided. Once you reply with the requested information, the `needs-more-info` label will be removed automatically.
If you need help gathering this information, please visit our [Discord community](https://discord.gg/c84AZQhmpx).
================================================
FILE: .github/label-response/support-request.md
================================================
Thank you for reaching out.
This issue tracker is used for **bug reports and feature requests** only. Your question appears to be a support or configuration question rather than a bug report.
For help with setup, configuration, or general questions, please visit our [Discord community](https://discord.gg/c84AZQhmpx) where the community and maintainers can assist you in real-time.
**Before posting in Discord, please check:**
- [Documentation](https://headscale.net/)
- [FAQ](https://headscale.net/stable/faq/)
- [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/)
If after troubleshooting you determine this is actually a bug, please open a new issue with the required debug information from the troubleshooting guide.
This issue has been automatically closed.
================================================
FILE: .github/pull_request_template.md
================================================
- [ ] have read the [CONTRIBUTING.md](./CONTRIBUTING.md) file
- [ ] raised a GitHub issue or discussed it on the project's chat beforehand
- [ ] added unit tests
- [ ] added integration tests
- [ ] updated documentation if needed
- [ ] updated CHANGELOG.md
================================================
FILE: .github/renovate.json
================================================
{
"baseBranches": ["main"],
"username": "renovate-release",
"gitAuthor": "Renovate Bot ",
"branchPrefix": "renovateaction/",
"onboarding": false,
"extends": ["config:base", ":rebaseStalePrs"],
"ignorePresets": [":prHourlyLimit2"],
"enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
"includeForks": true,
"repositories": ["juanfont/headscale"],
"platform": "github",
"packageRules": [
{
"matchDatasources": ["go"],
"groupName": "Go modules",
"groupSlug": "gomod",
"separateMajorMinor": false
},
{
"matchDatasources": ["docker"],
"groupName": "Dockerfiles",
"groupSlug": "dockerfiles"
}
],
"regexManagers": [
{
"fileMatch": [".github/workflows/.*.yml$"],
"matchStrings": ["\\s*go-version:\\s*\"?(?.*?)\"?\\n"],
"datasourceTemplate": "golang-version",
"depNameTemplate": "actions/go-version"
}
]
}
================================================
FILE: .github/workflows/build.yml
================================================
name: Build
on:
push:
branches:
- main
pull_request:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build-nix:
runs-on: ubuntu-latest
permissions: write-all
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run nix build
id: build
if: steps.changed-files.outputs.files == 'true'
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
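# On a hash mismatch, nix prints lines like the following (example):
#   specified: sha256-AAAAAAAA...
#   got:       sha256-BBBBBBBB...
# which the greps below turn into OLD_HASH and NEW_HASH.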
OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
exit $BUILD_STATUS
- name: Nix gosum diverging
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
github.rest.pulls.createReviewComment({
pull_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: steps.changed-files.outputs.files == 'true'
with:
name: headscale-linux
path: result/bin/headscale
build-cross:
runs-on: ubuntu-latest
strategy:
matrix:
env:
- "GOARCH=arm64 GOOS=linux"
- "GOARCH=amd64 GOOS=linux"
- "GOARCH=arm64 GOOS=darwin"
- "GOARCH=amd64 GOOS=darwin"
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run go cross compile
env:
CGO_ENABLED: 0
run: env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
./cmd/headscale
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: "headscale-${{ matrix.env }}"
path: "headscale"
================================================
FILE: .github/workflows/check-generated.yml
================================================
name: Check Generated Files
on:
push:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-generated:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- '**/*.proto'
- 'buf.gen.yaml'
- 'tools/**'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run make generate
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- make generate
- name: Check for uncommitted changes
if: steps.changed-files.outputs.files == 'true'
run: |
if ! git diff --exit-code; then
echo "❌ Generated files are not up to date!"
echo "Please run 'make generate' and commit the changes."
exit 1
else
echo "✅ All generated files are up to date."
fi
================================================
FILE: .github/workflows/check-tests.yaml
================================================
name: Check integration tests workflow
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Generate and check integration tests
if: steps.changed-files.outputs.files == 'true'
run: |
nix develop --command bash -c "cd .github/workflows && go generate"
git diff --exit-code .github/workflows/test-integration.yaml
- name: Show missing tests
if: failure()
run: |
git diff .github/workflows/test-integration.yaml
================================================
FILE: .github/workflows/docs-deploy.yml
================================================
name: Deploy docs
on:
push:
branches:
# Main branch for development docs
- main
# Doc maintenance branches
- doc/[0-9]+.[0-9]+.[0-9]+
tags:
# Stable release tags
- v[0-9]+.[0-9]+.[0-9]+
paths:
- "docs/**"
- "mkdocs.yml"
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Install python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Configure git
run: |
git config user.name github-actions
git config user.email github-actions@github.com
- name: Deploy development docs
if: github.ref == 'refs/heads/main'
run: mike deploy --push development unstable
- name: Deploy stable docs from doc branches
if: startsWith(github.ref, 'refs/heads/doc/')
run: mike deploy --push ${GITHUB_REF_NAME##*/}
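# ${GITHUB_REF_NAME##*/} strips everything up to the last "/", e.g. doc/0.24.3 -> 0.24.3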
- name: Deploy stable docs from tag
if: startsWith(github.ref, 'refs/tags/v')
# This assumes that only newer tags are pushed
run: mike deploy --push --update-aliases ${GITHUB_REF_NAME#v} stable latest
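# ${GITHUB_REF_NAME#v} strips the leading "v", e.g. v0.24.3 -> 0.24.3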
================================================
FILE: .github/workflows/docs-test.yml
================================================
name: Test documentation build
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Install python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict
================================================
FILE: .github/workflows/gh-action-integration-generator.go
================================================
package main
//go:generate go run ./gh-action-integration-generator.go
import (
"bytes"
"fmt"
"log"
"os/exec"
"strings"
)
// testsToSplit defines tests that should be split into multiple CI jobs.
// Key is the test function name, value is a list of subtest prefixes.
// Each prefix becomes a separate CI job as "TestName/prefix".
//
// Example: TestAutoApproveMultiNetwork has subtests like:
// - TestAutoApproveMultiNetwork/authkey-tag-advertiseduringup-false-pol-database
// - TestAutoApproveMultiNetwork/webauth-user-advertiseduringup-true-pol-file
//
// Splitting by approver type (tag, user, group) creates 6 CI jobs with 4 tests each:
// - TestAutoApproveMultiNetwork/authkey-tag.* (4 tests)
// - TestAutoApproveMultiNetwork/authkey-user.* (4 tests)
// - TestAutoApproveMultiNetwork/authkey-group.* (4 tests)
// - TestAutoApproveMultiNetwork/webauth-tag.* (4 tests)
// - TestAutoApproveMultiNetwork/webauth-user.* (4 tests)
// - TestAutoApproveMultiNetwork/webauth-group.* (4 tests)
//
// This reduces load per CI job (4 tests instead of 12) to avoid infrastructure
// flakiness when running many sequential Docker-based integration tests.
var testsToSplit = map[string][]string{
"TestAutoApproveMultiNetwork": {
"authkey-tag",
"authkey-user",
"authkey-group",
"webauth-tag",
"webauth-user",
"webauth-group",
},
}
// expandTests takes a list of test names and expands any that need splitting
// into multiple subtest patterns.
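// For example (hypothetical input), expandTests([]string{"TestPingAllByIP", "TestAutoApproveMultiNetwork"})
// returns "TestPingAllByIP" unchanged plus the six
// "TestAutoApproveMultiNetwork/<prefix>.*" patterns from testsToSplit.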
func expandTests(tests []string) []string {
var expanded []string
for _, test := range tests {
if prefixes, ok := testsToSplit[test]; ok {
// This test should be split into multiple jobs.
// We append ".*" to each prefix because the CI runner wraps patterns
// with ^...$ anchors. Without ".*", a pattern like "authkey$" wouldn't
// match "authkey-tag-advertiseduringup-false-pol-database".
for _, prefix := range prefixes {
expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix))
}
} else {
expanded = append(expanded, test)
}
}
return expanded
}
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
cmd := exec.Command(rgBin, args...)
var out bytes.Buffer
cmd.Stdout = &out
err = cmd.Run()
if err != nil {
log.Fatalf("failed to run command: %s", err)
}
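// out now contains one test function name per line, e.g.:
//   TestPingAllByIP
//   TestACLAllowUserDst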
tests := strings.Split(strings.TrimSpace(out.String()), "\n")
return tests
}
func updateYAML(tests []string, jobName string, testPath string) {
testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", "))
yqCommand := fmt.Sprintf(
"yq eval '.jobs.%s.strategy.matrix.test = %s' %s -i",
jobName,
testsForYq,
testPath,
)
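// The resulting command looks like, e.g.:
//   yq eval '.jobs.sqlite.strategy.matrix.test = ["TestPingAllByIP", ...]' ./test-integration.yaml -i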
cmd := exec.Command("bash", "-c", yqCommand)
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
err := cmd.Run()
if err != nil {
log.Printf("stdout: %s", stdout.String())
log.Printf("stderr: %s", stderr.String())
log.Fatalf("failed to run yq command: %s", err)
}
fmt.Printf("YAML file (%s) job %s updated successfully\n", testPath, jobName)
}
func main() {
tests := findTests()
// Expand tests that should be split into multiple jobs
expandedTests := expandTests(tests)
quotedTests := make([]string, len(expandedTests))
for i, test := range expandedTests {
quotedTests[i] = fmt.Sprintf("\"%s\"", test)
}
// Define selected tests for PostgreSQL
postgresTestNames := []string{
"TestACLAllowUserDst",
"TestPingAllByIP",
"TestEphemeral2006DeletedTooQuickly",
"TestPingAllByIPManyUpDown",
"TestSubnetRouterMultiNetwork",
}
quotedPostgresTests := make([]string, len(postgresTestNames))
for i, test := range postgresTestNames {
quotedPostgresTests[i] = fmt.Sprintf("\"%s\"", test)
}
// Update both SQLite and PostgreSQL job matrices
updateYAML(quotedTests, "sqlite", "./test-integration.yaml")
updateYAML(quotedPostgresTests, "postgres", "./test-integration.yaml")
}
================================================
FILE: .github/workflows/gh-actions-updater.yaml
================================================
name: GitHub Actions Version Updater
on:
schedule:
# Automatically run on every Sunday
- cron: "0 0 * * 0"
jobs:
build:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
- name: Run GitHub Actions Version Updater
uses: saadmk11/github-actions-version-updater@d8781caf11d11168579c8e5e94f62b068038f442 # v0.9.0
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
================================================
FILE: .github/workflows/integration-test-template.yml
================================================
name: Integration Test Template
on:
workflow_call:
inputs:
test:
required: true
type: string
postgres_flag:
required: false
type: string
default: ""
database_name:
required: true
type: string
jobs:
test:
runs-on: ubuntu-latest
env:
# Github does not allow us to access secrets in pull requests,
# so this env var is used to check if we have the secret or not.
# If we have the secrets, meaning we are running on a push in the
# main repository rather than in a fork's pull request, there
# might be secrets available for more debugging.
# If TS_OAUTH_CLIENT_ID and TS_OAUTH_SECRET are set, then the job
# will join a debug tailscale network, set up SSH and a tmux session.
# The SSH will be configured to use the SSH key of the Github user
# that triggered the build.
HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Tailscale
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: tailscale/github-action@a392da0a182bba0e9613b6243ebd69529b1878aa # v4.1.0
with:
oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
tags: tag:gh
- name: Setup SSH server for Actor
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: alexellis/setup-sshd-actor@master
- name: Download headscale image
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: headscale-image
path: /tmp/artifacts
- name: Download tailscale HEAD image
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: tailscale-head-image
path: /tmp/artifacts
- name: Download hi binary
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: hi-binary
path: /tmp/artifacts
- name: Download Go cache
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: go-cache
path: /tmp/artifacts
- name: Download postgres image
if: ${{ inputs.postgres_flag == '--postgres=1' }}
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: postgres-image
path: /tmp/artifacts
- name: Pin Docker to v28 (avoid v29 breaking changes)
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Load Docker images, Go cache, and prepare binary
run: |
gunzip -c /tmp/artifacts/headscale-image.tar.gz | docker load
gunzip -c /tmp/artifacts/tailscale-head-image.tar.gz | docker load
if [ -f /tmp/artifacts/postgres-image.tar.gz ]; then
gunzip -c /tmp/artifacts/postgres-image.tar.gz | docker load
fi
chmod +x /tmp/artifacts/hi
docker images
# Extract Go cache to host directories for bind mounting
mkdir -p /tmp/go-cache
tar -xzf /tmp/artifacts/go-cache.tar.gz -C /tmp/go-cache
ls -la /tmp/go-cache/ /tmp/go-cache/.cache/
- name: Run Integration Test
env:
HEADSCALE_INTEGRATION_HEADSCALE_IMAGE: headscale:${{ github.sha }}
HEADSCALE_INTEGRATION_TAILSCALE_IMAGE: tailscale-head:${{ github.sha }}
HEADSCALE_INTEGRATION_POSTGRES_IMAGE: ${{ inputs.postgres_flag == '--postgres=1' && format('postgres:{0}', github.sha) || '' }}
HEADSCALE_INTEGRATION_GO_CACHE: /tmp/go-cache/go
HEADSCALE_INTEGRATION_GO_BUILD_CACHE: /tmp/go-cache/.cache/go-build
run: /tmp/artifacts/hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \
--timeout=120m \
${{ inputs.postgres_flag }}
# Sanitize test name for artifact upload (replace invalid characters: " : < > | * ? \ / with -)
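# e.g. "TestAutoApproveMultiNetwork/authkey-tag.*" becomes "TestAutoApproveMultiNetwork-authkey-tag.-"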
- name: Sanitize test name for artifacts
if: always()
id: sanitize
run: echo "name=${TEST_NAME//[\":<>|*?\\\/]/-}" >> $GITHUB_OUTPUT
env:
TEST_NAME: ${{ inputs.test }}
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: always()
with:
name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-logs
path: "control_logs/*/*.log"
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: always()
with:
name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-artifacts
path: control_logs/
- name: Setup a blocking tmux session
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: alexellis/block-with-tmux-action@master
================================================
FILE: .github/workflows/lint.yml
================================================
name: Lint
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
golangci-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: golangci-lint
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- golangci-lint run
--new-from-rev=${{github.event.pull_request.base.sha}}
--output.text.path=stdout
--output.text.print-linter-name
--output.text.print-issued-lines
--output.text.colors
prettier-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- '**/*.md'
- '**/*.yml'
- '**/*.yaml'
- '**/*.ts'
- '**/*.js'
- '**/*.sass'
- '**/*.css'
- '**/*.scss'
- '**/*.html'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Prettify code
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- prettier --no-error-on-unmatched-pattern
--ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
proto-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Buf lint
run: nix develop --command -- buf lint proto
================================================
FILE: .github/workflows/needs-more-info-comment.yml
================================================
name: Needs More Info - Post Comment
on:
issues:
types: [labeled]
jobs:
post-comment:
if: >-
github.event.label.name == 'needs-more-info' &&
github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
sparse-checkout: .github/label-response/needs-more-info.md
sparse-checkout-cone-mode: false
- name: Post instruction comment
run: gh issue comment "$NUMBER" --body-file .github/label-response/needs-more-info.md
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}
================================================
FILE: .github/workflows/needs-more-info-timer.yml
================================================
name: Needs More Info - Timer
on:
schedule:
- cron: "0 0 * * *" # Daily at midnight UTC
issue_comment:
types: [created]
workflow_dispatch:
jobs:
# When a non-bot user comments on a needs-more-info issue, remove the label.
remove-label-on-response:
if: >-
github.repository == 'juanfont/headscale' &&
github.event_name == 'issue_comment' &&
github.event.comment.user.type != 'Bot' &&
contains(github.event.issue.labels.*.name, 'needs-more-info')
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- name: Remove needs-more-info label
run: gh issue edit "$NUMBER" --remove-label needs-more-info
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}
# On schedule, close issues that have had no human response for 3 days.
close-stale:
if: >-
github.repository == 'juanfont/headscale' &&
github.event_name != 'issue_comment'
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: hustcer/setup-nu@920172d92eb04671776f3ba69d605d3b09351c30 # v3.22
with:
version: "*"
- name: Close stale needs-more-info issues
shell: nu {0}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
run: |
let issues = (gh issue list
--repo $env.GH_REPO
--label "needs-more-info"
--state open
--json number
| from json)
for issue in $issues {
let number = $issue.number
print $"Checking issue #($number)"
# Find when needs-more-info was last added
let events = (gh api $"repos/($env.GH_REPO)/issues/($number)/events"
--paginate | from json | flatten)
let label_event = ($events
| where event == "labeled" and label.name == "needs-more-info"
| last)
let label_added_at = ($label_event.created_at | into datetime)
# Check for non-bot comments after the label was added
let comments = (gh api $"repos/($env.GH_REPO)/issues/($number)/comments"
--paginate | from json | flatten)
let human_responses = ($comments
| where user.type != "Bot"
| where { ($in.created_at | into datetime) > $label_added_at })
if ($human_responses | length) > 0 {
print $" Human responded, removing label"
gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
continue
}
# Check if 3 days have passed
let elapsed = (date now) - $label_added_at
if $elapsed < 3day {
print $" Only ($elapsed | format duration day) elapsed, skipping"
continue
}
print $" No response for ($elapsed | format duration day), closing"
let message = [
"This issue has been automatically closed because no additional information was provided within 3 days."
""
"If you have the requested information, please open a new issue and include the debug information requested above."
""
"Thank you for your understanding."
] | str join "\n"
gh issue comment $number --repo $env.GH_REPO --body $message
gh issue close $number --repo $env.GH_REPO --reason "not planned"
gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
}
================================================
FILE: .github/workflows/nix-module-test.yml
================================================
name: NixOS Module Tests
on:
push:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
nix-module-check:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
nix:
- 'nix/**'
- 'flake.nix'
- 'flake.lock'
go:
- 'go.*'
- '**/*.go'
- 'cmd/**'
- 'hscontrol/**'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run NixOS module tests
if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
run: |
echo "Running NixOS module integration test..."
nix build .#checks.x86_64-linux.headscale -L
================================================
FILE: .github/workflows/release.yml
================================================
---
name: Release
on:
push:
tags:
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
goreleaser:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Pin Docker to v28 (avoid v29 breaking changes)
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run goreleaser
run: nix develop --command -- goreleaser release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
================================================
FILE: .github/workflows/stale.yml
================================================
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
with:
days-before-issue-stale: 90
days-before-issue-close: 7
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 90 days with no
activity."
close-issue-message: "This issue was closed because it has been inactive for 7 days
since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
exempt-issue-labels: "no-stale-bot,needs-more-info"
repo-token: ${{ secrets.GITHUB_TOKEN }}
================================================
FILE: .github/workflows/support-request.yml
================================================
name: Support Request - Close Issue
on:
issues:
types: [labeled]
jobs:
close-support-request:
if: >-
github.event.label.name == 'support-request' &&
github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
sparse-checkout: .github/label-response/support-request.md
sparse-checkout-cone-mode: false
- name: Post comment and close issue
run: |
gh issue comment "$NUMBER" --body-file .github/label-response/support-request.md
gh issue close "$NUMBER" --reason "not planned"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}
================================================
FILE: .github/workflows/test-integration.yaml
================================================
name: integration
# To debug locally on a branch, and when secrets are needed,
# change this to include `push` so the build is run on
# the main repository.
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
# build: Builds binaries and Docker images once, uploads as artifacts for reuse.
# build-postgres: Pulls postgres image separately to avoid Docker Hub rate limits.
# sqlite: Runs all integration tests with SQLite backend.
# postgres: Runs a subset of tests with PostgreSQL to verify database compatibility.
build:
runs-on: ubuntu-latest
outputs:
files-changed: ${{ steps.changed-files.outputs.files }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration/**'
- 'config-example.yaml'
- '.github/workflows/test-integration.yaml'
- '.github/workflows/integration-test-template.yml'
- 'Dockerfile.*'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Build binaries and warm Go cache
if: steps.changed-files.outputs.files == 'true'
run: |
# Build all Go binaries in one nix shell to maximize cache reuse
nix develop --command -- bash -c '
go build -o hi ./cmd/hi
CGO_ENABLED=0 GOOS=linux go build -o headscale ./cmd/headscale
# Build integration test binary to warm the cache with all dependencies
go test -c ./integration -o /dev/null 2>/dev/null || true
'
- name: Upload hi binary
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: hi-binary
path: hi
retention-days: 10
- name: Package Go cache
if: steps.changed-files.outputs.files == 'true'
run: |
# Package Go module cache and build cache
tar -czf go-cache.tar.gz -C ~ go .cache/go-build
- name: Upload Go cache
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: go-cache
path: go-cache.tar.gz
retention-days: 10
- name: Pin Docker to v28 (avoid v29 breaking changes)
if: steps.changed-files.outputs.files == 'true'
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Build headscale image
if: steps.changed-files.outputs.files == 'true'
run: |
docker build \
--file Dockerfile.integration-ci \
--tag headscale:${{ github.sha }} \
.
docker save headscale:${{ github.sha }} | gzip > headscale-image.tar.gz
- name: Build tailscale HEAD image
if: steps.changed-files.outputs.files == 'true'
run: |
docker build \
--file Dockerfile.tailscale-HEAD \
--tag tailscale-head:${{ github.sha }} \
.
docker save tailscale-head:${{ github.sha }} | gzip > tailscale-head-image.tar.gz
- name: Upload headscale image
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: headscale-image
path: headscale-image.tar.gz
retention-days: 10
- name: Upload tailscale HEAD image
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: tailscale-head-image
path: tailscale-head-image.tar.gz
retention-days: 10
build-postgres:
runs-on: ubuntu-latest
needs: build
if: needs.build.outputs.files-changed == 'true'
steps:
- name: Pin Docker to v28 (avoid v29 breaking changes)
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Pull and save postgres image
run: |
docker pull postgres:latest
docker tag postgres:latest postgres:${{ github.sha }}
docker save postgres:${{ github.sha }} | gzip > postgres-image.tar.gz
- name: Upload postgres image
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: postgres-image
path: postgres-image.tar.gz
retention-days: 10
sqlite:
needs: build
if: needs.build.outputs.files-changed == 'true'
strategy:
fail-fast: false
matrix:
test:
- TestACLHostsInNetMapTable
- TestACLAllowUser80Dst
- TestACLDenyAllPort80
- TestACLAllowUserDst
- TestACLAllowStarDst
- TestACLNamedHostsCanReachBySubnet
- TestACLNamedHostsCanReach
- TestACLDevice1CanAccessDevice2
- TestPolicyUpdateWhileRunningWithCLIInDatabase
- TestACLAutogroupMember
- TestACLAutogroupTagged
- TestACLAutogroupSelf
- TestACLPolicyPropagationOverTime
- TestACLTagPropagation
- TestACLTagPropagationPortSpecific
- TestACLGroupWithUnknownUser
- TestACLGroupAfterUserDeletion
- TestACLGroupDeletionExactReproduction
- TestACLDynamicUnknownUserAddition
- TestACLDynamicUnknownUserRemoval
- TestAPIAuthenticationBypass
- TestAPIAuthenticationBypassCurl
- TestGRPCAuthenticationBypass
- TestCLIWithConfigAuthenticationBypass
- TestAuthKeyLogoutAndReloginSameUser
- TestAuthKeyLogoutAndReloginNewUser
- TestAuthKeyLogoutAndReloginSameUserExpiredKey
- TestAuthKeyDeleteKey
- TestAuthKeyLogoutAndReloginRoutesPreserved
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestOIDC024UserCreation
- TestOIDCAuthenticationWithPKCE
- TestOIDCReloginSameNodeNewUser
- TestOIDCFollowUpUrl
- TestOIDCMultipleOpenedLoginUrls
- TestOIDCReloginSameNodeSameUser
- TestOIDCExpiryAfterRestart
- TestOIDCACLPolicyOnJoin
- TestOIDCReloginSameUserRoutesPreserved
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndReloginSameUser
- TestAuthWebFlowLogoutAndReloginNewUser
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
- TestPreAuthKeyCommandReusableEphemeral
- TestPreAuthKeyCorrectUserLoggedInCommand
- TestTaggedNodesCLIOutput
- TestApiKeyCommand
- TestNodeCommand
- TestNodeExpireCommand
- TestNodeRenameCommand
- TestPolicyCommand
- TestPolicyBrokenConfigCommand
- TestDERPVerifyEndpoint
- TestResolveMagicDNS
- TestResolveMagicDNSExtraRecordsPath
- TestDERPServerScenario
- TestDERPServerWebsocketScenario
- TestPingAllByIP
- TestPingAllByIPPublicDERP
- TestEphemeral
- TestEphemeralInAlternateTimezone
- TestEphemeral2006DeletedTooQuickly
- TestPingAllByHostname
- TestTaildrop
- TestUpdateHostnameFromClient
- TestExpireNode
- TestSetNodeExpiryInFuture
- TestDisableNodeExpiry
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- Test2118DeletingOnlineNodePanics
- TestEnablingRoutes
- TestHASubnetRouterFailover
- TestSubnetRouteACL
- TestEnablingExitRoutes
- TestSubnetRouterMultiNetwork
- TestSubnetRouterMultiNetworkExitNode
- TestAutoApproveMultiNetwork/authkey-tag.*
- TestAutoApproveMultiNetwork/authkey-user.*
- TestAutoApproveMultiNetwork/authkey-group.*
- TestAutoApproveMultiNetwork/webauth-tag.*
- TestAutoApproveMultiNetwork/webauth-user.*
- TestAutoApproveMultiNetwork/webauth-group.*
- TestSubnetRouteACLFiltering
- TestHeadscale
- TestTailscaleNodesJoiningHeadcale
- TestSSHOneUserToAll
- TestSSHMultipleUsersAllToAll
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
- TestSSHAutogroupSelf
- TestSSHOneUserToOneCheckModeCLI
- TestSSHOneUserToOneCheckModeOIDC
- TestSSHCheckModeUnapprovedTimeout
- TestSSHCheckModeCheckPeriodCLI
- TestSSHCheckModeAutoApprove
- TestSSHCheckModeNegativeCLI
- TestSSHLocalpart
- TestTagsAuthKeyWithTagRequestDifferentTag
- TestTagsAuthKeyWithTagNoAdvertiseFlag
- TestTagsAuthKeyWithTagCannotAddViaCLI
- TestTagsAuthKeyWithTagCannotChangeViaCLI
- TestTagsAuthKeyWithTagAdminOverrideReauthPreserves
- TestTagsAuthKeyWithTagCLICannotModifyAdminTags
- TestTagsAuthKeyWithoutTagCannotRequestTags
- TestTagsAuthKeyWithoutTagRegisterNoTags
- TestTagsAuthKeyWithoutTagCannotAddViaCLI
- TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset
- TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise
- TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag
- TestTagsUserLoginOwnedTagAtRegistration
- TestTagsUserLoginNonExistentTagAtRegistration
- TestTagsUserLoginUnownedTagAtRegistration
- TestTagsUserLoginAddTagViaCLIReauth
- TestTagsUserLoginRemoveTagViaCLIReauth
- TestTagsUserLoginCLINoOpAfterAdminAssignment
- TestTagsUserLoginCLICannotRemoveAdminTags
- TestTagsAuthKeyWithTagRequestNonExistentTag
- TestTagsAuthKeyWithTagRequestUnownedTag
- TestTagsAuthKeyWithoutTagRequestNonExistentTag
- TestTagsAuthKeyWithoutTagRequestUnownedTag
- TestTagsAdminAPICannotSetNonExistentTag
- TestTagsAdminAPICanSetUnownedTag
- TestTagsAdminAPICannotRemoveAllTags
- TestTagsIssue2978ReproTagReplacement
- TestTagsAdminAPICannotSetInvalidFormat
- TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags
- TestTagsAuthKeyWithoutUserInheritsTags
- TestTagsAuthKeyWithoutUserRejectsAdvertisedTags
- TestTagsAuthKeyConvertToUserViaCLIRegister
uses: ./.github/workflows/integration-test-template.yml
secrets: inherit
with:
test: ${{ matrix.test }}
postgres_flag: "--postgres=0"
database_name: "sqlite"
postgres:
needs: [build, build-postgres]
if: needs.build.outputs.files-changed == 'true'
strategy:
fail-fast: false
matrix:
test:
- TestACLAllowUserDst
- TestPingAllByIP
- TestEphemeral2006DeletedTooQuickly
- TestPingAllByIPManyUpDown
- TestSubnetRouterMultiNetwork
uses: ./.github/workflows/integration-test-template.yml
secrets: inherit
with:
test: ${{ matrix.test }}
postgres_flag: "--postgres=1"
database_name: "postgres"
================================================
FILE: .github/workflows/test.yml
================================================
name: Tests
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run tests
if: steps.changed-files.outputs.files == 'true'
env:
# As of 2025-01-06, these env vars were no longer
# set automatically, which breaks the initdb for postgres on
# some of the database migration tests.
LC_ALL: "en_US.UTF-8"
LC_CTYPE: "en_US.UTF-8"
run: nix develop --command -- gotestsum
================================================
FILE: .github/workflows/update-flake.yml
================================================
name: update-flake-lock
on:
workflow_dispatch: # allows manual triggering
schedule:
- cron: "0 0 * * 0" # runs weekly on Sunday at 00:00
jobs:
lockfile:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@21a544727d0c62386e78b4befe52d19ad12692e3 # v17
- name: Update flake.lock
uses: DeterminateSystems/update-flake-lock@428c2b58a4b7414dabd372acb6a03dba1084d3ab # v25
with:
pr-title: "Update flake.lock"
================================================
FILE: .gitignore
================================================
ignored/
tailscale/
.vscode/
.claude/
logs/
*.prof
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories
vendor/
dist/
/headscale
config.yaml
config*.yaml
!config-example.yaml
derp.yaml
*.hujson
*.key
/db.sqlite
*.sqlite3
# Exclude Jetbrains Editors
.idea
test_output/
control_logs/
# Nix build output
result
.direnv/
integration_test/etc/config.dump.yaml
# MkDocs
.cache
/site
__debug_bin
node_modules/
package-lock.json
package.json
================================================
FILE: .golangci.yaml
================================================
---
version: "2"
linters:
default: all
disable:
- cyclop
- depguard
- dupl
- exhaustruct
- funcorder
- funlen
- gochecknoglobals
- gochecknoinits
- gocognit
- godox
- interfacebloat
- ireturn
- lll
- maintidx
- makezero
- mnd
- musttag
- nestif
- nolintlint
- paralleltest
- revive
- tagliatelle
- testpackage
- varnamelen
- wrapcheck
- wsl
settings:
forbidigo:
forbid:
# Forbid time.Sleep everywhere with context-appropriate alternatives
- pattern: 'time\.Sleep'
msg: >-
time.Sleep is forbidden.
In tests: use assert.EventuallyWithT for polling/waiting patterns.
In production code: use a backoff strategy (e.g., cenkalti/backoff) or proper synchronization primitives.
# Forbid inline string literals in zerolog field methods - use zf.* constants
- pattern: '\.(Str|Int|Int8|Int16|Int32|Int64|Uint|Uint8|Uint16|Uint32|Uint64|Float32|Float64|Bool|Dur|Time|TimeDiff|Strs|Ints|Uints|Floats|Bools|Any|Interface)\("[^"]+"'
msg: >-
Use zf.* constants for zerolog field names instead of string literals.
Import "github.com/juanfont/headscale/hscontrol/util/zlog/zf" and use
constants like zf.NodeID, zf.UserName, etc. Add new constants to
hscontrol/util/zlog/zf/fields.go if needed.
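# For example (illustrative):
#   forbidden: log.Info().Str("node_id", id)
#   allowed:   log.Info().Str(zf.NodeID, id)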
# Forbid ptr.To - use Go 1.26 new(expr) instead
- pattern: 'ptr\.To\('
msg: >-
ptr.To is forbidden. Use Go 1.26's new(expr) syntax instead.
Example: ptr.To(value) → new(value)
# Forbid tsaddr.SortPrefixes - use slices.SortFunc with netip.Prefix.Compare
- pattern: 'tsaddr\.SortPrefixes'
msg: >-
tsaddr.SortPrefixes is forbidden. Use Go 1.26's netip.Prefix.Compare instead.
Example: slices.SortFunc(prefixes, netip.Prefix.Compare)
analyze-types: true
gocritic:
disabled-checks:
- appendAssign
- ifElseChain
nlreturn:
block-size: 4
varnamelen:
ignore-names:
- err
- db
- id
- ip
- ok
- c
- tt
- tx
- rx
- sb
- wg
- pr
- p
- p2
ignore-type-assert-ok: true
ignore-map-index-ok: true
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
- gen
formatters:
enable:
- gci
- gofmt
- gofumpt
- goimports
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- gen
================================================
FILE: .goreleaser.yml
================================================
---
version: 2
before:
hooks:
- go mod tidy -compat=1.26
- go mod vendor
release:
prerelease: auto
draft: true
header: |
## Upgrade
Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
builds:
- id: headscale
main: ./cmd/headscale
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
targets:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_amd64
- linux_arm64
flags:
- -mod=readonly
tags:
- ts2019
archives:
- id: golang-cross
name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
formats:
- binary
source:
enabled: true
name_template: "{{ .ProjectName }}_{{ .Version }}"
format: tar.gz
files:
- "vendor/"
nfpms:
# Configure nFPM for .deb and .rpm releases
#
# See https://nfpm.goreleaser.com/configuration/
# and https://goreleaser.com/customization/nfpm/
#
# Useful tools for debugging .debs:
# List file contents: dpkg -c dist/headscale...deb
# Package metadata: dpkg --info dist/headscale....deb
#
- ids:
- headscale
package_name: headscale
priority: optional
vendor: headscale
maintainer: Kristoffer Dalby
homepage: https://github.com/juanfont/headscale
description: |-
Open source implementation of the Tailscale control server.
Headscale aims to implement a self-hosted, open source alternative to the
Tailscale control server. Headscale's goal is to provide self-hosters and
hobbyists with an open-source server they can use for their projects and
labs. It implements a narrow scope, a single Tailscale network (tailnet),
suitable for personal use or a small open-source organisation.
bindir: /usr/bin
section: net
formats:
- deb
contents:
- src: ./config-example.yaml
dst: /etc/headscale/config.yaml
type: config|noreplace
file_info:
mode: 0644
- src: ./packaging/systemd/headscale.service
dst: /usr/lib/systemd/system/headscale.service
- dst: /var/lib/headscale
type: dir
- src: LICENSE
dst: /usr/share/doc/headscale/copyright
scripts:
postinstall: ./packaging/deb/postinst
postremove: ./packaging/deb/postrm
preremove: ./packaging/deb/prerm
deb:
lintian_overrides:
- no-changelog # Our CHANGELOG.md uses a different formatting
- no-manual-page
- statically-linked-binary
kos:
- id: ghcr
repositories:
- ghcr.io/juanfont/headscale
- headscale/headscale
# bare tells KO to only use the repository
# for tagging and naming the container.
bare: true
base_image: gcr.io/distroless/base-debian13
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/arm64
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
creation_time: "{{.CommitTimestamp}}"
ko_data_creation_time: "{{.CommitTimestamp}}"
- id: ghcr-debug
repositories:
- ghcr.io/juanfont/headscale
- headscale/headscale
bare: true
base_image: gcr.io/distroless/base-debian13:debug
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/arm64
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}stable-debug{{ else }}unstable-debug{{ end }}"
- "{{ .Tag }}-debug"
- '{{ trimprefix .Tag "v" }}-debug'
- "sha-{{ .ShortCommit }}-debug"
checksum:
name_template: "checksums.txt"
snapshot:
version_template: "{{ .Tag }}-next"
changelog:
sort: asc
filters:
exclude:
- "^docs:"
- "^test:"
================================================
FILE: .mcp.json
================================================
{
"mcpServers": {
"claude-code-mcp": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@steipete/claude-code-mcp@latest"],
"env": {}
},
"sequential-thinking": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
"env": {}
},
"nixos": {
"type": "stdio",
"command": "uvx",
"args": ["mcp-nixos"],
"env": {}
},
"context7": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@upstash/context7-mcp"],
"env": {}
},
"git": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@cyanheads/git-mcp-server"],
"env": {}
}
}
}
================================================
FILE: .mdformat.toml
================================================
[plugin.mkdocs]
align_semantic_breaks_in_lists = true
================================================
FILE: .pre-commit-config.yaml
================================================
# prek/pre-commit configuration for headscale
# See: https://prek.j178.dev/quickstart/
# See: https://prek.j178.dev/builtin/
# Global exclusions - ignore generated code
exclude: ^gen/
repos:
# Built-in hooks from pre-commit/pre-commit-hooks
# prek will use fast-path optimized versions automatically
# See: https://prek.j178.dev/builtin/
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-json
- id: check-merge-conflict
- id: check-symlinks
- id: check-toml
- id: check-xml
- id: check-yaml
- id: detect-private-key
- id: end-of-file-fixer
- id: fix-byte-order-marker
- id: mixed-line-ending
- id: trailing-whitespace
# Local hooks for project-specific tooling
- repo: local
hooks:
# nixpkgs-fmt for Nix files
- id: nixpkgs-fmt
name: nixpkgs-fmt
entry: nixpkgs-fmt
language: system
files: \.nix$
# Prettier for formatting
- id: prettier
name: prettier
entry: prettier --write --list-different
language: system
exclude: ^docs/
types_or: [javascript, jsx, ts, tsx, yaml, json, toml, html, css, scss, sass, markdown]
# mdformat for docs
- id: mdformat
name: mdformat
entry: mdformat
language: system
types_or: [markdown]
files: ^docs/
# golangci-lint for Go code quality
- id: golangci-lint
name: golangci-lint
entry: nix develop --command -- golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix
language: system
types: [go]
pass_filenames: false
================================================
FILE: .prettierignore
================================================
.github/workflows/test-integration-v2*
docs/
================================================
FILE: AGENTS.md
================================================
# AGENTS.md
This file provides guidance to AI agents when working with code in this repository.
## Overview
Headscale is an open-source implementation of the Tailscale control server written in Go. It provides self-hosted coordination for Tailscale networks (tailnets), managing node registration, IP allocation, policy enforcement, and DERP routing.
## Development Commands
### Quick Setup
```bash
# Recommended: Use Nix for dependency management
nix develop
# Full development workflow
make dev # runs fmt + lint + test + build
```
### Essential Commands
```bash
# Build headscale binary
make build
# Run tests
make test
go test ./... # All unit tests
go test -race ./... # With race detection
# Run specific integration test
go run ./cmd/hi run "TestName" --postgres
# Code formatting and linting
make fmt # Format all code (Go, docs, proto)
make lint # Lint all code (Go, proto)
make fmt-go # Format Go code only
make lint-go # Lint Go code only
# Protocol buffer generation (after modifying proto/)
make generate
# Clean build artifacts
make clean
```
### Integration Testing
```bash
# Use the hi (Headscale Integration) test runner
go run ./cmd/hi doctor # Check system requirements
go run ./cmd/hi run "TestPattern" # Run specific test
go run ./cmd/hi run "TestPattern" --postgres # With PostgreSQL backend
# Test artifacts are saved to control_logs/ with logs and debug data
```
## Pre-Commit Quality Checks
### **MANDATORY: Automated Pre-Commit Hooks with prek**
**CRITICAL REQUIREMENT**: This repository uses [prek](https://prek.j178.dev/) for automated pre-commit hooks. All commits are automatically validated for code quality, formatting, and common issues.
### Initial Setup
When you first clone the repository or enter the nix shell, install the git hooks:
```bash
# Enter nix development environment
nix develop
# Install prek git hooks (one-time setup)
prek install
```
This installs the pre-commit hook at `.git/hooks/pre-commit` which automatically runs all configured checks before each commit.
### Configured Hooks
The repository uses `.pre-commit-config.yaml` with the following hooks:
**Built-in Checks** (optimized fast-path execution):
- `check-added-large-files` - Prevents accidentally committing large files
- `check-case-conflict` - Checks for files that would conflict in case-insensitive filesystems
- `check-executables-have-shebangs` - Ensures executables have proper shebangs
- `check-json` - Validates JSON syntax
- `check-merge-conflict` - Prevents committing files with merge conflict markers
- `check-symlinks` - Checks for broken symlinks
- `check-toml` - Validates TOML syntax
- `check-xml` - Validates XML syntax
- `check-yaml` - Validates YAML syntax
- `detect-private-key` - Detects accidentally committed private keys
- `end-of-file-fixer` - Ensures files end with a newline
- `fix-byte-order-marker` - Removes UTF-8 byte order markers
- `mixed-line-ending` - Prevents mixed line endings
- `trailing-whitespace` - Removes trailing whitespace
**Project-Specific Hooks**:
- `nixpkgs-fmt` - Formats Nix files
- `prettier` - Formats markdown, YAML, JSON, and TOML files
- `golangci-lint` - Runs Go linter with auto-fix on changed files only
### Manual Hook Execution
Run hooks manually without making a commit:
```bash
# Run hooks on staged files only
prek run
# Run hooks on all files in the repository
prek run --all-files
# Run a specific hook
prek run golangci-lint
# Run hooks on specific files
prek run --files path/to/file1.go path/to/file2.go
```
### Workflow Pattern
With prek installed, your normal workflow becomes:
```bash
# 1. Make your code changes
vim hscontrol/state/state.go
# 2. Stage your changes
git add .
# 3. Commit - hooks run automatically
git commit -m "feat: add new feature"
# If hooks fail, they will show which checks failed
# Fix the issues and try committing again
```
### Manual golangci-lint
While golangci-lint runs automatically via prek, you can also run it manually:
```bash
# If you have upstream remote configured (recommended)
golangci-lint run --new-from-rev=upstream/main --timeout=5m --fix
# If you only have origin remote
golangci-lint run --new-from-rev=main --timeout=5m --fix
```
**Important**: Always use `--new-from-rev` to only lint changed files. This prevents formatting the entire repository and keeps changes focused on your actual modifications.
### Skipping Hooks (Not Recommended)
In rare cases where you need to skip hooks (e.g., work-in-progress commits), use:
```bash
git commit --no-verify -m "WIP: work in progress"
```
**WARNING**: Only use `--no-verify` for temporary WIP commits on feature branches. All commits to main must pass all hooks.
### Troubleshooting
**Hook installation issues**:
```bash
# Check if hooks are installed
ls -la .git/hooks/pre-commit
# Reinstall hooks
prek install
```
**Hooks running slow**:
```bash
# prek uses optimized fast-path for built-in hooks
# If running slow, check which hook is taking time with verbose output
prek run -v
```
**Update hook configuration**:
```bash
# After modifying .pre-commit-config.yaml, hooks will automatically use new config
# No reinstallation needed
```
## Project Structure & Architecture
### Top-Level Organization
```
headscale/
├── cmd/ # Command-line applications
│ ├── headscale/ # Main headscale server binary
│ └── hi/ # Headscale Integration test runner
├── hscontrol/ # Core control plane logic
├── integration/ # End-to-end Docker-based tests
├── proto/ # Protocol buffer definitions
├── gen/ # Generated code (protobuf)
├── docs/ # Documentation
└── packaging/ # Distribution packaging
```
### Core Packages (`hscontrol/`)
**Main Server (`hscontrol/`)**
- `app.go`: Application setup, dependency injection, server lifecycle
- `handlers.go`: HTTP/gRPC API endpoints for management operations
- `grpcv1.go`: gRPC service implementation for headscale API
- `poll.go`: **Critical** - Handles Tailscale MapRequest/MapResponse protocol
- `noise.go`: Noise protocol implementation for secure client communication
- `auth.go`: Authentication flows (web, OIDC, command-line)
- `oidc.go`: OpenID Connect integration for user authentication
**State Management (`hscontrol/state/`)**
- `state.go`: Central coordinator for all subsystems (database, policy, IP allocation, DERP)
- `node_store.go`: **Performance-critical** - In-memory cache with copy-on-write semantics
- Thread-safe operations with deadlock detection
- Coordinates between database persistence and real-time operations
**Database Layer (`hscontrol/db/`)**
- `db.go`: Database abstraction, GORM setup, migration management
- `node.go`: Node lifecycle, registration, expiration, IP assignment
- `users.go`: User management, namespace isolation
- `api_key.go`: API authentication tokens
- `preauth_keys.go`: Pre-authentication keys for automated node registration
- `ip.go`: IP address allocation and management
- `policy.go`: Policy storage and retrieval
- Schema migrations in `schema.sql` with extensive test data coverage
**CRITICAL DATABASE MIGRATION RULES**:
1. **NEVER reorder existing migrations** - Migration order is immutable once committed
2. **ONLY add new migrations to the END** of the migrations array
3. **NEVER disable foreign keys** in new migrations - no new migrations should be added to `migrationsRequiringFKDisabled`
4. **Migration ID format**: `YYYYMMDDHHMM-short-description` (timestamp + descriptive suffix; see the sketch after this list)
- Example: `202511131500-add-user-roles`
- The timestamp must be chronologically ordered
5. **New migrations go after the comment** "As of 2025-07-02, no new IDs should be added here"
6. If you need to rename a column that other migrations depend on:
- Accept that the old column name will exist in intermediate migration states
- Update code to work with the new column name
- Let AutoMigrate create the new column if needed
- Do NOT try to rename columns that later migrations reference
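These rules boil down to an append-only pattern. A minimal sketch, assuming the gormigrate-style migration list used in `db.go` (the migration ID, table, and column here are hypothetical):
```go
import (
	"github.com/go-gormigrate/gormigrate/v2"
	"gorm.io/gorm"
)

// Hypothetical illustration of appending a migration; the real list lives in db.go.
var migrations = []*gormigrate.Migration{
	// ... existing migrations above; never reorder or edit them ...
	{
		// ID follows YYYYMMDDHHMM-short-description and must sort after every existing ID.
		ID: "202511131500-add-user-roles",
		Migrate: func(tx *gorm.DB) error {
			// Additive change only; do not add this ID to migrationsRequiringFKDisabled.
			return tx.Exec(`ALTER TABLE users ADD COLUMN role TEXT DEFAULT ''`).Error
		},
		Rollback: func(tx *gorm.DB) error {
			return tx.Exec(`ALTER TABLE users DROP COLUMN role`).Error
		},
	},
}
```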
**Policy Engine (`hscontrol/policy/`)**
- `policy.go`: Core ACL evaluation logic, HuJSON parsing
- `v2/`: Next-generation policy system with improved filtering
- `matcher/`: ACL rule matching and evaluation engine
- Determines peer visibility, route approval, and network access rules
- Supports both file-based and database-stored policies
**Network Management (`hscontrol/`)**
- `derp/`: DERP (Designated Encrypted Relay for Packets) server implementation
- NAT traversal when direct connections fail
- Fallback relay for firewall-restricted environments
- `mapper/`: Converts internal Headscale state to Tailscale's wire protocol format
- `tail.go`: Tailscale-specific data structure generation
- `routes/`: Subnet route management and primary route selection
- `dns/`: DNS record management and MagicDNS implementation
**Utilities & Support (`hscontrol/`)**
- `types/`: Core data structures, configuration, validation
- `util/`: Helper functions for networking, DNS, key management
- `templates/`: Client configuration templates (Apple, Windows, etc.)
- `notifier/`: Event notification system for real-time updates
- `metrics.go`: Prometheus metrics collection
- `capver/`: Tailscale capability version management
### Key Subsystem Interactions
**Node Registration Flow**
1. **Client Connection**: `noise.go` handles secure protocol handshake
2. **Authentication**: `auth.go` validates credentials (web/OIDC/preauth)
3. **State Creation**: `state.go` coordinates IP allocation via `db/ip.go`
4. **Storage**: `db/node.go` persists node, `NodeStore` caches in memory
5. **Network Setup**: `mapper/` generates initial Tailscale network map
**Ongoing Operations**
1. **Poll Requests**: `poll.go` receives periodic client updates
2. **State Updates**: `NodeStore` maintains real-time node information
3. **Policy Application**: `policy/` evaluates ACL rules for peer relationships
4. **Map Distribution**: `mapper/` sends network topology to all affected clients
**Route Management**
1. **Advertisement**: Clients announce routes via `poll.go` Hostinfo updates
2. **Storage**: `db/` persists routes, `NodeStore` caches for performance
3. **Approval**: `policy/` auto-approves routes based on ACL rules
4. **Distribution**: `routes/` selects primary routes, `mapper/` distributes to peers
### Command-Line Tools (`cmd/`)
**Main Server (`cmd/headscale/`)**
- `headscale.go`: CLI parsing, configuration loading, server startup
- Supports daemon mode, CLI operations (user/node management), database operations
**Integration Test Runner (`cmd/hi/`)**
- `main.go`: Test execution framework with Docker orchestration
- `run.go`: Individual test execution with artifact collection
- `doctor.go`: System requirements validation
- `docker.go`: Container lifecycle management
- Essential for validating changes against real Tailscale clients
### Generated & External Code
**Protocol Buffers (`proto/` → `gen/`)**
- Defines gRPC API for headscale management operations
- Client libraries can generate from these definitions
- Run `make generate` after modifying `.proto` files
**Integration Testing (`integration/`)**
- `scenario.go`: Docker test environment setup
- `tailscale.go`: Tailscale client container management
- Individual test files for specific functionality areas
- Real end-to-end validation with network isolation
### Critical Performance Paths
**High-Frequency Operations**
1. **MapRequest Processing** (`poll.go`): Every 15-60 seconds per client
2. **NodeStore Reads** (`node_store.go`): Every operation requiring node data
3. **Policy Evaluation** (`policy/`): On every peer relationship calculation
4. **Route Lookups** (`routes/`): During network map generation
**Database Write Patterns**
- **Frequent**: Node heartbeats, endpoint updates, route changes
- **Moderate**: User operations, policy updates, API key management
- **Rare**: Schema migrations, bulk operations
### Configuration & Deployment
**Configuration** (`hscontrol/types/config.go`)
- Database connection settings (SQLite/PostgreSQL)
- Network configuration (IP ranges, DNS settings)
- Policy mode (file vs database)
- DERP relay configuration
- OIDC provider settings
**Key Dependencies**
- **GORM**: Database ORM with migration support
- **Tailscale Libraries**: Core networking and protocol code
- **Zerolog**: Structured logging throughout the application
- **Buf**: Protocol buffer toolchain for code generation
### Development Workflow Integration
The architecture supports incremental development:
- **Unit Tests**: Focus on individual packages (`*_test.go` files)
- **Integration Tests**: Validate cross-component interactions
- **Database Tests**: Extensive migration and data integrity validation
- **Policy Tests**: ACL rule evaluation and edge cases
- **Performance Tests**: NodeStore and high-frequency operation validation
## Integration Testing System
### Overview
Headscale uses Docker-based integration tests with real Tailscale clients to validate end-to-end functionality. The integration test system is complex and requires specialized knowledge for effective execution and debugging.
### **MANDATORY: Use the headscale-integration-tester Agent**
**CRITICAL REQUIREMENT**: For ANY integration test execution, analysis, troubleshooting, or validation, you MUST use the `headscale-integration-tester` agent. This agent contains specialized knowledge about:
- Test execution strategies and timing requirements
- Infrastructure vs code issue distinction (99% vs 1% failure patterns)
- Security-critical debugging rules and forbidden practices
- Comprehensive artifact analysis workflows
- Real-world failure patterns from HA debugging experiences
### Quick Reference Commands
```bash
# Check system requirements (always run first)
go run ./cmd/hi doctor
# Run single test (recommended for development)
go run ./cmd/hi run "TestName"
# Use PostgreSQL for database-heavy tests
go run ./cmd/hi run "TestName" --postgres
# Pattern matching for related tests
go run ./cmd/hi run "TestPattern*"
# Run multiple tests concurrently (each gets isolated run ID)
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
```
**Concurrent Execution Support**:
The test runner supports running multiple tests concurrently on the same Docker daemon:
- Each test run gets a **unique Run ID** (format: `YYYYMMDD-HHMMSS-{6-char-hash}`; see the sketch below)
- All containers are labeled with `hi.run-id` for isolation
- Container names include the run ID for easy identification (e.g., `ts-{runID}-1-74-{hash}`)
- Dynamic port allocation prevents port conflicts between concurrent runs
- Cleanup only affects containers belonging to the specific run ID
- Log directories are isolated per run: `control_logs/{runID}/`
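For illustration only, a run ID in this format could be produced by a helper like the hypothetical sketch below; the actual generation lives in `cmd/hi` and may differ:
```go
import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"time"
)

// newRunID is a hypothetical helper showing the documented run ID format.
func newRunID() string {
	var b [3]byte // 3 random bytes encode to a 6-char hex hash
	if _, err := rand.Read(b[:]); err != nil {
		panic(err)
	}
	// e.g. 20250106-142233-a1b2c3
	return fmt.Sprintf("%s-%s", time.Now().Format("20060102-150405"), hex.EncodeToString(b[:]))
}
```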
**Critical Notes**:
- Tests generate ~100MB of logs per run in `control_logs/`
- Running many tests concurrently may cause resource contention (CPU/memory)
- Clean stale containers periodically: `docker system prune -f`
### Test Artifacts Location
All test runs save comprehensive debugging artifacts to `control_logs/TIMESTAMP-ID/` including server logs, client logs, database dumps, MapResponse protocol data, and Prometheus metrics.
**For all integration test work, use the headscale-integration-tester agent - it contains the complete knowledge needed for effective testing and debugging.**
## NodeStore Implementation Details
**Key Insight from Recent Work**: The NodeStore is a critical performance optimization that caches node data in memory while ensuring consistency with the database. When working with route advertisements or node state changes:
1. **Timing Considerations**: Route advertisements need time to propagate from clients to server. Use `require.EventuallyWithT()` patterns in tests instead of immediate assertions.
2. **Synchronization Points**: NodeStore updates happen at specific points like `poll.go:420` after Hostinfo changes. Ensure these are maintained when modifying the polling logic.
3. **Peer Visibility**: The NodeStore's `peersFunc` determines which nodes are visible to each other. Policy-based filtering is separate from monitoring visibility - expired nodes should remain visible for debugging but marked as expired.
## Testing Guidelines
### Integration Test Patterns
#### **CRITICAL: EventuallyWithT Pattern for External Calls**
**All external calls in integration tests MUST be wrapped in EventuallyWithT blocks** to handle eventual consistency in distributed systems. External calls include:
- `client.Status()` - Getting Tailscale client status
- `client.Curl()` - Making HTTP requests through clients
- `client.Traceroute()` - Running network diagnostics
- `headscale.ListNodes()` - Querying headscale server state
- Any other calls that interact with external systems or network operations
**Key Rules**:
1. **Never use bare `require.NoError(t, err)` with external calls** - Always wrap in EventuallyWithT
2. **Keep related assertions together** - If multiple assertions depend on the same external call, keep them in the same EventuallyWithT block
3. **Split unrelated external calls** - Different external calls should be in separate EventuallyWithT blocks
4. **Never nest EventuallyWithT calls** - Each EventuallyWithT should be at the same level
5. **Declare shared variables at function scope** - Variables used across multiple EventuallyWithT blocks must be declared before first use
**Examples**:
```go
// CORRECT: External call wrapped in EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// Related assertions using the same status call
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
assert.NotNil(c, peerStatus.PrimaryRoutes)
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedRoutes)
}
}, 5*time.Second, 200*time.Millisecond, "Verifying client status and routes")
// INCORRECT: Bare external call without EventuallyWithT
status, err := client.Status() // ❌ Will fail intermittently
require.NoError(t, err)
// CORRECT: Separate EventuallyWithT for different external calls
// First external call - headscale.ListNodes()
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route state changes should propagate to nodes")
// Second external call - client.Status()
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6()})
}
}, 10*time.Second, 500*time.Millisecond, "routes should be visible to client")
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes() // ❌ First external call
assert.NoError(c, err)
status, err := client.Status() // ❌ Different external call - should be separate
assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "mixed calls")
// CORRECT: Variable scoping for shared data
var (
srs1, srs2, srs3 *ipnstate.Status
clientStatus *ipnstate.Status
srs1PeerStatus *ipnstate.PeerStatus
)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
srs1 = subRouter1.MustStatus() // = not :=
srs2 = subRouter2.MustStatus()
clientStatus = client.MustStatus()
srs1PeerStatus = clientStatus.Peer[srs1.Self.PublicKey]
// assertions...
}, 5*time.Second, 200*time.Millisecond, "checking router status")
// CORRECT: Wrapping client operations
assert.EventuallyWithT(t, func(c *assert.CollectT) {
result, err := client.Curl(weburl)
assert.NoError(c, err)
assert.Len(c, result, 13)
}, 5*time.Second, 200*time.Millisecond, "Verifying HTTP connectivity")
assert.EventuallyWithT(t, func(c *assert.CollectT) {
tr, err := client.Traceroute(webip)
assert.NoError(c, err)
assertTracerouteViaIPWithCollect(c, tr, expectedRouter.MustIPv4())
}, 5*time.Second, 200*time.Millisecond, "Verifying network path")
```
**Helper Functions**:
- Use `requirePeerSubnetRoutesWithCollect` instead of `requirePeerSubnetRoutes` inside EventuallyWithT
- Use `requireNodeRouteCountWithCollect` instead of `requireNodeRouteCount` inside EventuallyWithT
- Use `assertTracerouteViaIPWithCollect` instead of `assertTracerouteViaIP` inside EventuallyWithT
```go
// Node route checking by actual node properties, not array position
var routeNode *v1.Node
for _, node := range nodes {
if nodeIDStr := fmt.Sprintf("%d", node.GetId()); expectedRoutes[nodeIDStr] != "" {
routeNode = node
break
}
}
```
### Running Problematic Tests
- Some tests require significant time (e.g., `TestNodeOnlineStatus` runs for 12 minutes)
- Infrastructure issues like disk space can cause test failures unrelated to code changes
- Use `--postgres` flag when testing database-heavy scenarios
## Quality Assurance and Testing Requirements
### **MANDATORY: Always Use Specialized Testing Agents**
**CRITICAL REQUIREMENT**: For ANY task involving testing, quality assurance, review, or validation, you MUST use the appropriate specialized agent at the END of your task list. This ensures comprehensive quality validation and prevents regressions.
**Required Agents for Different Task Types**:
1. **Integration Testing**: Use `headscale-integration-tester` agent for:
- Running integration tests with `cmd/hi`
- Analyzing test failures and artifacts
- Troubleshooting Docker-based test infrastructure
- Validating end-to-end functionality changes
2. **Quality Control**: Use `quality-control-enforcer` agent for:
- Code review and validation
- Ensuring best practices compliance
- Preventing common pitfalls and anti-patterns
- Validating architectural decisions
**Agent Usage Pattern**: Always add the appropriate agent as the FINAL step in any task list to ensure quality validation occurs after all work is complete.
### Integration Test Debugging Reference
Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including:
- Headscale server logs (stderr/stdout)
- Tailscale client logs and status
- Database dumps and network captures
- MapResponse JSON files for protocol debugging
**For integration test issues, ALWAYS use the headscale-integration-tester agent - do not attempt manual debugging.**
## EventuallyWithT Pattern for Integration Tests
### Overview
EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Operations That Must NOT Be Wrapped
The following are **blocking operations** that modify state and should NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--exit-node`)
- Any command that changes configuration or state
- Use `client.MustStatus()` instead of `client.Status()` when you just need the ID for a blocking operation
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Blocking operation NOT wrapped
for _, client := range allClients {
status := client.MustStatus()
command := []string{
"tailscale",
"set",
"--advertise-routes=" + expectedRoutes[string(status.Self.ID)],
}
_, _, err := client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)
}
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err = headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")
// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Blocking operation wrapped in EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// This is a blocking operation - should NOT be in EventuallyWithT!
command := []string{
"tailscale",
"set",
"--advertise-routes=" + expectedRoutes[string(status.Self.ID)],
}
_, _, err = client.Execute(command)
assert.NoError(c, err)
}, 5*time.Second, 200*time.Millisecond, "wrong pattern")
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// First external call
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// Second unrelated external call - WRONG!
status, err := client.Status()
assert.NoError(c, err)
assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")
```
## Tags-as-Identity Architecture
### Overview
Headscale implements a **tags-as-identity** model where tags and user ownership are mutually exclusive ways to identify nodes. This is a fundamental architectural principle that affects node registration, ownership, ACL evaluation, and API behavior.
### Core Principle: Tags XOR User Ownership
Every node in Headscale is **either** tagged **or** user-owned, never both:
- **Tagged Nodes**: Ownership is defined by tags (e.g., `tag:server`, `tag:database`)
- Tags are set during registration via tagged PreAuthKey
- Tags are immutable after registration (cannot be changed via API)
- May have `UserID` set for "created by" tracking, but ownership is via tags
- Identified by: `node.IsTagged()` returns `true`
- **User-Owned Nodes**: Ownership is defined by user assignment
- Registered via OIDC, web auth, or untagged PreAuthKey
- Node belongs to a specific user's namespace
- No tags (empty tags array)
- Identified by: `node.UserID().Valid() && !node.IsTagged()`
### Critical Implementation Details
#### Node Identification Methods
```go
// Primary methods for determining node ownership
node.IsTagged() // Returns true if node has tags OR AuthKey.Tags
node.HasTag(tag) // Returns true if node has specific tag
node.IsUserOwned() // Returns true if UserID set AND not tagged
// IMPORTANT: UserID can be set on tagged nodes for tracking!
// Always use IsTagged() to determine actual ownership, not just UserID.Valid()
```
#### UserID Field Semantics
**Critical distinction**: `UserID` has different meanings depending on node type:
- **Tagged nodes**: `UserID` is optional "created by" tracking
- Indicates which user created the tagged PreAuthKey
- Does NOT define ownership (tags define ownership)
- Example: User "alice" creates tagged PreAuthKey with `tag:server`, node gets `UserID=alice.ID` + `Tags=["tag:server"]`
- **User-owned nodes**: `UserID` defines ownership
- Required field for non-tagged nodes
- Defines which user namespace the node belongs to
- Example: User "bob" registers via OIDC, node gets `UserID=bob.ID` + `Tags=[]`
#### Mapper Behavior (mapper/tail.go)
The mapper converts internal nodes to Tailscale protocol format, handling the TaggedDevices special user:
```go
// From mapper/tail.go:102-116
User: func() tailcfg.UserID {
// IMPORTANT: Tags-as-identity model
// Tagged nodes ALWAYS use TaggedDevices user, even if UserID is set
if node.IsTagged() {
return tailcfg.UserID(int64(types.TaggedDevices.ID))
}
// User-owned nodes: use the actual user ID
return tailcfg.UserID(int64(node.UserID().Get()))
}()
```
**TaggedDevices constant** (`types.TaggedDevices.ID = 2147455555`): Special user ID for all tagged nodes in MapResponse protocol.
#### Registration Flow
**Tagged Node Registration** (via tagged PreAuthKey):
1. User creates PreAuthKey with tags: `pak.Tags = ["tag:server"]`
2. Node registers with PreAuthKey
3. Node gets: `Tags = ["tag:server"]`, `UserID = user.ID` (optional tracking), `AuthKeyID = pak.ID`
4. `IsTagged()` returns `true` (ownership via tags)
5. MapResponse sends `User = TaggedDevices.ID`
**User-Owned Node Registration** (via OIDC/web/untagged PreAuthKey):
1. User authenticates or uses untagged PreAuthKey
2. Node registers
3. Node gets: `Tags = []`, `UserID = user.ID` (required)
4. `IsTagged()` returns `false` (ownership via user)
5. MapResponse sends `User = user.ID`
#### API Validation (SetTags)
The SetTags gRPC API enforces tags-as-identity rules:
```go
// From grpcv1.go:340-347
// User-owned nodes are nodes with UserID that are NOT tagged
isUserOwned := nodeView.UserID().Valid() && !nodeView.IsTagged()
if isUserOwned && len(request.GetTags()) > 0 {
return error("cannot set tags on user-owned nodes")
}
```
**Key validation rules**:
- ✅ Can call SetTags on tagged nodes (tags already define ownership)
- ❌ Cannot set tags on user-owned nodes (would violate XOR rule)
- ❌ Cannot remove all tags from tagged nodes (would orphan the node)
#### Database Layer (db/node.go)
**Tag storage**: Tags are stored in PostgreSQL ARRAY column and SQLite JSON column:
```sql
-- From schema.sql
tags TEXT[] DEFAULT '{}' NOT NULL, -- PostgreSQL
tags TEXT DEFAULT '[]' NOT NULL, -- SQLite (JSON array)
```
**Validation** (`state/tags.go`):
- `validateNodeOwnership()`: Enforces tags XOR user rule
- `validateAndNormalizeTags()`: Validates tag format (`tag:name`) and uniqueness
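As a rough illustration, the format and uniqueness checks amount to something like the following (hypothetical code, not the actual `state/tags.go` implementation):
```go
import (
	"fmt"
	"slices"
	"strings"
)

// normalizeTags sketches the validation described above: every tag must be
// in tag:name format and appear at most once.
func normalizeTags(tags []string) ([]string, error) {
	out := make([]string, 0, len(tags))
	for _, t := range tags {
		t = strings.ToLower(strings.TrimSpace(t))
		name, ok := strings.CutPrefix(t, "tag:")
		if !ok || name == "" {
			return nil, fmt.Errorf("invalid tag %q: must be in tag:name format", t)
		}
		if slices.Contains(out, t) {
			return nil, fmt.Errorf("duplicate tag %q", t)
		}
		out = append(out, t)
	}
	return out, nil
}
```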
#### Policy Layer
**Tag Ownership** (policy/v2/policy.go):
```go
func (pm *PolicyManager) NodeCanHaveTag(node types.NodeView, tag string) bool {
// Checks if node's IP is in the tagOwnerMap IP set
// This is IP-based authorization, not UserID-based
if ips, ok := pm.tagOwnerMap[Tag(tag)]; ok {
if slices.ContainsFunc(node.IPs(), ips.Contains) {
return true
}
}
return false
}
```
**Important**: Tag authorization is based on IP ranges in ACL, not UserID. Tags define identity, ACL authorizes that identity.
### Testing Tags-as-Identity
**Unit Tests** (`hscontrol/types/node_tags_test.go`):
- `TestNodeIsTagged`: Validates IsTagged() for various scenarios
- `TestNodeOwnershipModel`: Tests tags XOR user ownership
- `TestUserTypedID`: Helper method validation
**API Tests** (`hscontrol/grpcv1_test.go`):
- `TestSetTags_UserXORTags`: Validates rejection of setting tags on user-owned nodes
- `TestSetTags_TaggedNode`: Validates that tagged nodes (even with UserID) are not rejected
**Auth Tests** (`hscontrol/auth_test.go:890-928`):
- Tests node registration with tagged PreAuthKey
- Validates tags are applied during registration
### Common Pitfalls
1. **Don't check only `UserID.Valid()` to determine user ownership**
- ❌ Wrong: `if node.UserID().Valid() { /* user-owned */ }`
- ✅ Correct: `if node.UserID().Valid() && !node.IsTagged() { /* user-owned */ }`
2. **Don't assume tagged nodes never have UserID set**
- Tagged nodes MAY have UserID for "created by" tracking
- Always use `IsTagged()` to determine ownership type
3. **Don't allow setting tags on user-owned nodes**
- This violates the tags XOR user principle
- Use API validation to prevent this
4. **Don't forget TaggedDevices in mapper**
- All tagged nodes MUST use `TaggedDevices.ID` in MapResponse
- User ID is only for actual user-owned nodes
### Migration Considerations
When nodes transition between ownership models:
- **No automatic migration**: Tags-as-identity is set at registration and immutable
- **Re-registration required**: To change from user-owned to tagged (or vice versa), node must be deleted and re-registered
- **UserID persistence**: UserID on tagged nodes is informational and not cleared
### Architecture Benefits
The tags-as-identity model provides:
1. **Clear ownership semantics**: No ambiguity about who/what owns a node
2. **ACL simplicity**: Tag-based access control without user conflicts
3. **API safety**: Validation prevents invalid ownership states
4. **Protocol compatibility**: TaggedDevices special user aligns with Tailscale's model
## Logging Patterns
### Incremental Log Event Building
When building log statements with multiple fields, especially with conditional fields, use the **incremental log event pattern** instead of long single-line chains. This improves readability and allows conditional field addition.
**Pattern:**
```go
// GOOD: Incremental building with conditional fields
logEvent := log.Debug().
Str("node", node.Hostname).
Str("machine_key", node.MachineKey.ShortString()).
Str("node_key", node.NodeKey.ShortString())
if node.User != nil {
logEvent = logEvent.Str("user", node.User.Username())
} else if node.UserID != nil {
logEvent = logEvent.Uint("user_id", *node.UserID)
} else {
logEvent = logEvent.Str("user", "none")
}
logEvent.Msg("Registering node")
```
**Key rules:**
1. **Assign chained calls back to the variable**: `logEvent = logEvent.Str(...)` - zerolog methods return a new event, so you must capture the return value
2. **Use for conditional fields**: When fields depend on runtime conditions, build incrementally
3. **Use for long log lines**: When a log line exceeds ~100 characters, split it for readability
4. **Call `.Msg()` at the end**: The final `.Msg()` or `.Msgf()` sends the log event
**Anti-pattern to avoid:**
```go
// BAD: Long single-line chains are hard to read and can't have conditional fields
log.Debug().Caller().Str("node", node.Hostname).Str("machine_key", node.MachineKey.ShortString()).Str("node_key", node.NodeKey.ShortString()).Str("user", node.User.Username()).Msg("Registering node")
// BAD: Forgetting to assign the return value (field is lost!)
logEvent := log.Debug().Str("node", node.Hostname)
logEvent.Str("user", username) // This field is LOST - not assigned back
logEvent.Msg("message") // Only has "node" field
```
**When to use this pattern:**
- Log statements with 4+ fields
- Any log with conditional fields
- Complex logging in loops or error handling
- When you need to add context incrementally
**Example from codebase** (`hscontrol/db/node.go`):
```go
logEvent := log.Debug().
Str("node", node.Hostname).
Str("machine_key", node.MachineKey.ShortString()).
Str("node_key", node.NodeKey.ShortString())
if node.User != nil {
logEvent = logEvent.Str("user", node.User.Username())
} else if node.UserID != nil {
logEvent = logEvent.Uint("user_id", *node.UserID)
} else {
logEvent = logEvent.Str("user", "none")
}
logEvent.Msg("Registering test node")
```
### Avoiding Log Helper Functions
Prefer the incremental log event pattern over creating helper functions that return multiple logging closures. Helper functions like `logPollFunc` create unnecessary indirection and allocate closures.
**Instead of:**
```go
// AVOID: Helper function returning closures
func logPollFunc(req tailcfg.MapRequest, node *types.Node) (
func(string, ...any), // warnf
func(string, ...any), // infof
func(string, ...any), // tracef
func(error, string, ...any), // errf
) {
return func(msg string, a ...any) {
log.Warn().
Caller().
Bool("omitPeers", req.OmitPeers).
Bool("stream", req.Stream).
Uint64("node.id", node.ID.Uint64()).
Str("node.name", node.Hostname).
Msgf(msg, a...)
},
// ... more closures
}
```
**Prefer:**
```go
// BETTER: Build log events inline with shared context
func (m *mapSession) logTrace(msg string) {
log.Trace().
Caller().
Bool("omitPeers", m.req.OmitPeers).
Bool("stream", m.req.Stream).
Uint64("node.id", m.node.ID.Uint64()).
Str("node.name", m.node.Hostname).
Msg(msg)
}
// Or use incremental building for complex cases
logEvent := log.Trace().
Caller().
Bool("omitPeers", m.req.OmitPeers).
Bool("stream", m.req.Stream).
Uint64("node.id", m.node.ID.Uint64()).
Str("node.name", m.node.Hostname)
if additionalContext {
logEvent = logEvent.Str("extra", value)
}
logEvent.Msg("Operation completed")
```
## Important Notes
- **Dependencies**: Use `nix develop` for consistent toolchain (Go, buf, protobuf tools, linting)
- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately
- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting
- **Linting**: ALL code must pass `golangci-lint run --new-from-rev=upstream/main --timeout=5m --fix` before commit
- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing)
- **Integration Tests**: Require Docker and can consume significant disk space - use headscale-integration-tester agent
- **Performance**: NodeStore optimizations are critical for scale - be careful with changes to state management
- **Quality Assurance**: Always use appropriate specialized agents for testing and validation tasks
- **Tags-as-Identity**: Tags and user ownership are mutually exclusive - always use `IsTagged()` to determine ownership
================================================
FILE: CHANGELOG.md
================================================
# CHANGELOG
## 0.29.0 (202x-xx-xx)
**Minimum supported Tailscale client version: v1.76.0**
### Tailscale ACL compatibility improvements
Extensive test cases were systematically generated using Tailscale clients and the official SaaS
to understand how the packet filter should be generated. We discovered a few differences, but
overall our implementation was very close.
[#3036](https://github.com/juanfont/headscale/pull/3036)
### SSH check action
SSH rules with `"action": "check"` are now supported. When a client initiates an SSH connection to a node
with a `check` action policy, the user is prompted to authenticate via OIDC or CLI approval before access
is granted.
A new `headscale auth` CLI command group supports the approval flow:
- `headscale auth approve --auth-id <id>` approves a pending authentication request (SSH check or web auth)
- `headscale auth reject --auth-id <id>` rejects a pending authentication request
- `headscale auth register --auth-id <id> --user <username>` registers a node (replaces deprecated `headscale nodes register`)
[#1850](https://github.com/juanfont/headscale/pull/1850)
### BREAKING
- **ACL Policy**: Wildcard (`*`) in ACL sources and destinations now resolves to Tailscale's CGNAT range (`100.64.0.0/10`) and ULA range (`fd7a:115c:a1e0::/48`) instead of all IPs (`0.0.0.0/0` and `::/0`) [#3036](https://github.com/juanfont/headscale/pull/3036)
- This better matches Tailscale's security model where `*` means "any node in the tailnet" rather than "any IP address"
- Policies relying on wildcard to match non-Tailscale IPs will need to use explicit CIDR ranges instead
- **Note**: Users with non-standard IP ranges configured in `prefixes.ipv4` or `prefixes.ipv6` (which is unsupported and produces a warning) will need to explicitly specify their CIDR ranges in ACL rules instead of using `*`
- **ACL Policy**: Validate autogroup:self source restrictions matching Tailscale behavior - tags, hosts, and IPs are rejected as sources for autogroup:self destinations [#3036](https://github.com/juanfont/headscale/pull/3036)
- Policies using tags, hosts, or IP addresses as sources for autogroup:self destinations will now fail validation
- **Upgrade path**: Headscale now enforces a strict version upgrade path [#3083](https://github.com/juanfont/headscale/pull/3083)
- Skipping minor versions (e.g. 0.27 → 0.29) is blocked; upgrade one minor version at a time
- Downgrading to a previous minor version is blocked
- Patch version changes within the same minor are always allowed
- **ACL Policy**: The `proto:icmp` protocol name now only includes ICMPv4 (protocol 1), matching Tailscale behavior [#3036](https://github.com/juanfont/headscale/pull/3036)
- Previously, `proto:icmp` included both ICMPv4 and ICMPv6
- Use `proto:ipv6-icmp` or protocol number `58` explicitly for ICMPv6
- **CLI**: `headscale nodes register` is deprecated in favour of `headscale auth register --auth-id <id> --user <username>` [#1850](https://github.com/juanfont/headscale/pull/1850)
- The old command continues to work but will be removed in a future release
### Changes
- **SSH Policy**: Add support for `localpart:*@` in SSH rule `users` field, mapping each matching user's email local-part as their OS username [#3091](https://github.com/juanfont/headscale/pull/3091)
- **ACL Policy**: Add ICMP and IPv6-ICMP protocols to default filter rules when no protocol is specified [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Fix autogroup:self handling for tagged nodes - tagged nodes no longer incorrectly receive autogroup:self filter rules [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Use CIDR format for autogroup:self destination IPs matching Tailscale behavior [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Merge filter rules with identical SrcIPs and IPProto matching Tailscale behavior - multiple ACL rules with the same source now produce a single FilterRule with combined DstPorts [#3036](https://github.com/juanfont/headscale/pull/3036)
- Remove deprecated `--namespace` flag from `nodes list`, `nodes register`, and `debug create-node` commands (use `--user` instead) [#3093](https://github.com/juanfont/headscale/pull/3093)
- Remove deprecated `namespace`/`ns` command aliases for `users` and `machine`/`machines` aliases for `nodes` [#3093](https://github.com/juanfont/headscale/pull/3093)
- Add SSH `check` action support with OIDC and CLI-based approval flows [#1850](https://github.com/juanfont/headscale/pull/1850)
- Add `headscale auth register`, `headscale auth approve`, and `headscale auth reject` CLI commands [#1850](https://github.com/juanfont/headscale/pull/1850)
- Add `auth` related routes to the API. The `auth/register` endpoint now expects data as JSON [#1850](https://github.com/juanfont/headscale/pull/1850)
- Deprecate `headscale nodes register --key` in favour of `headscale auth register --auth-id` [#1850](https://github.com/juanfont/headscale/pull/1850)
- Generalise auth templates into reusable `AuthSuccess` and `AuthWeb` components [#1850](https://github.com/juanfont/headscale/pull/1850)
- Unify auth pipeline with `AuthVerdict` type, supporting registration, reauthentication, and SSH checks [#1850](https://github.com/juanfont/headscale/pull/1850)
## 0.28.0 (2026-02-04)
**Minimum supported Tailscale client version: v1.74.0**
### Tags as identity
Tags are now implemented following the Tailscale model where tags and user ownership are mutually exclusive. Devices can be either
user-owned (authenticated via web/OIDC) or tagged (authenticated via tagged PreAuthKeys). Tagged devices receive their identity from
tags rather than users, making them suitable for servers and infrastructure. Applying a tag to a device removes user-based
ownership. See the [Tailscale tags documentation](https://tailscale.com/kb/1068/tags) for details on how tags work.
User-owned nodes can now request tags during registration using `--advertise-tags`. Tags are validated against the `tagOwners` policy
and applied at registration time. Tags can be managed via the CLI or API after registration. Tagged nodes can return to user-owned
by re-authenticating with `tailscale up --advertise-tags= --force-reauth`.
A one-time migration will validate and migrate any `RequestTags` (stored in hostinfo) to the tags column. Tags are validated against
your policy's `tagOwners` rules during migration. [#3011](https://github.com/juanfont/headscale/pull/3011)
### Smarter map updates
The map update system has been rewritten to send smaller, partial updates instead of full network maps whenever possible. This reduces bandwidth usage and improves performance, especially for large networks. The system now properly tracks peer
changes and can send removal notifications when nodes are removed due to policy changes.
[#2856](https://github.com/juanfont/headscale/pull/2856) [#2961](https://github.com/juanfont/headscale/pull/2961)
### Pre-authentication key security improvements
Pre-authentication keys now use bcrypt hashing for improved security [#2853](https://github.com/juanfont/headscale/pull/2853). Keys
are stored as a prefix and bcrypt hash instead of plaintext. The full key is only displayed once at creation time. When listing keys,
only the prefix is shown (e.g., `hskey-auth-{prefix}-***`). All new keys use the format `hskey-auth-{prefix}-{secret}`. Legacy plaintext keys in the format `{secret}` will continue to work for backwards compatibility.
### Web registration templates redesign
The OIDC callback and device registration web pages have been updated to use the Material for MkDocs design system from the official
documentation. The templates now use consistent typography, spacing, and colours across all registration flows.
### Database migration support removed for pre-0.25.0 databases
Headscale no longer supports direct upgrades from databases created before version 0.25.0. Users on older versions must upgrade
sequentially through each stable release, selecting the latest patch version available for each minor release.
### BREAKING
- **API**: The Node message in the gRPC/REST API has been simplified - the `ForcedTags`, `InvalidTags`, and `ValidTags` fields have been removed and replaced with a single `Tags` field that contains the node's applied tags [#2993](https://github.com/juanfont/headscale/pull/2993)
- API clients should use the `Tags` field instead of `ValidTags`
- The `headscale nodes list` CLI command now always shows a Tags column and the `--tags` flag has been removed
- **PreAuthKey CLI**: Commands now use ID-based operations instead of user+key combinations [#2992](https://github.com/juanfont/headscale/pull/2992)
- `headscale preauthkeys create` no longer requires `--user` flag (optional for tracking creation)
- `headscale preauthkeys list` lists all keys (no longer filtered by user)
- `headscale preauthkeys expire --id <id>` replaces `--user <user>`
- `headscale preauthkeys delete --id <id>` replaces `--user <user>`
**Before:**
```bash
headscale preauthkeys create --user 1 --reusable --tags tag:server
headscale preauthkeys list --user 1
headscale preauthkeys expire --user 1
headscale preauthkeys delete --user 1
```
**After:**
```bash
headscale preauthkeys create --reusable --tags tag:server
headscale preauthkeys list
headscale preauthkeys expire --id 123
headscale preauthkeys delete --id 123
```
- **Tags**: The gRPC `SetTags` endpoint now allows converting user-owned nodes to tagged nodes by setting tags. [#2885](https://github.com/juanfont/headscale/pull/2885)
- **Tags**: Tags are now resolved from the node's stored Tags field only [#2931](https://github.com/juanfont/headscale/pull/2931)
- `--advertise-tags` is processed during registration, not on every policy evaluation
- PreAuthKey tagged devices ignore `--advertise-tags` from clients
- User-owned nodes can use `--advertise-tags` if authorized by `tagOwners` policy
- Tags can be managed via CLI (`headscale nodes tag`) or the SetTags API after registration
- Database migration support removed for pre-0.25.0 databases [#2883](https://github.com/juanfont/headscale/pull/2883)
- If you are running a version older than 0.25.0, you must upgrade to 0.25.1 first, then upgrade to this release
- See the [upgrade path documentation](https://headscale.net/stable/about/faq/#what-is-the-recommended-update-path-can-i-skip-multiple-versions-while-updating) for detailed guidance
- In version 0.29, all migrations before 0.28.0 will also be removed
- Remove ability to move nodes between users [#2922](https://github.com/juanfont/headscale/pull/2922)
- The `headscale nodes move` CLI command has been removed
- The `MoveNode` API endpoint has been removed
- Nodes are permanently associated with their user or tag at registration time
- Add `oidc.email_verified_required` config option to control email verification requirement [#2860](https://github.com/juanfont/headscale/pull/2860)
- When `true` (default), only verified emails can authenticate via OIDC in conjunction with `oidc.allowed_domains` or
`oidc.allowed_users`. Previous versions allowed authentication with an unverified email but did not store the email
address in the user profile. Such logins are now rejected during authentication with an `unverified email` error.
- When `false`, unverified emails are allowed for OIDC authentication and the email address is stored in the user
profile regardless of its verification state.
- **SSH Policy**: Wildcard (`*`) is no longer supported as an SSH destination [#3009](https://github.com/juanfont/headscale/issues/3009)
- Use `autogroup:member` for user-owned devices
- Use `autogroup:tagged` for tagged devices
- Use specific tags (e.g., `tag:server`) for targeted access
**Before:**
```json
{ "action": "accept", "src": ["group:admins"], "dst": ["*"], "users": ["root"] }
```
**After:**
```json
{ "action": "accept", "src": ["group:admins"], "dst": ["autogroup:member", "autogroup:tagged"], "users": ["root"] }
```
- **SSH Policy**: SSH source/destination validation now enforces Tailscale's security model [#3010](https://github.com/juanfont/headscale/issues/3010)
Per [Tailscale SSH documentation](https://tailscale.com/kb/1193/tailscale-ssh), the following rules are now enforced:
1. **Tags cannot SSH to user-owned devices**: SSH rules with `tag:*` or `autogroup:tagged` as source cannot have username destinations (e.g., `alice@`) or `autogroup:member`/`autogroup:self` as destination
2. **Username destinations require same-user source**: If destination is a specific username (e.g., `alice@`), the source must be that exact same user only. Use `autogroup:self` for same-user SSH access instead
**Invalid policies now rejected at load time:**
```json
// INVALID: tag source to user destination
{"src": ["tag:server"], "dst": ["alice@"], ...}
// INVALID: autogroup:tagged to autogroup:member
{"src": ["autogroup:tagged"], "dst": ["autogroup:member"], ...}
// INVALID: group to specific user (use autogroup:self instead)
{"src": ["group:admins"], "dst": ["alice@"], ...}
```
**Valid patterns:**
```json
// Users/groups can SSH to their own devices via autogroup:self
{"src": ["group:admins"], "dst": ["autogroup:self"], ...}
// Users/groups can SSH to tagged devices
{"src": ["group:admins"], "dst": ["autogroup:tagged"], ...}
// Tagged devices can SSH to other tagged devices
{"src": ["autogroup:tagged"], "dst": ["autogroup:tagged"], ...}
// Same user can SSH to their own devices
{"src": ["alice@"], "dst": ["alice@"], ...}
```
### Changes
- Smarter change notifications send partial map updates and node removals instead of full maps [#2961](https://github.com/juanfont/headscale/pull/2961)
- Send lightweight endpoint and DERP region updates instead of full maps [#2856](https://github.com/juanfont/headscale/pull/2856)
- Add NixOS module in repository for faster iteration [#2857](https://github.com/juanfont/headscale/pull/2857)
- Add favicon to webpages [#2858](https://github.com/juanfont/headscale/pull/2858)
- Redesign OIDC callback and registration web templates [#2832](https://github.com/juanfont/headscale/pull/2832)
- Reclaim IPs from the IP allocator when nodes are deleted [#2831](https://github.com/juanfont/headscale/pull/2831)
- Add bcrypt hashing for pre-authentication keys [#2853](https://github.com/juanfont/headscale/pull/2853)
- Add prefix to API keys (`hskey-api-{prefix}-{secret}`) [#2853](https://github.com/juanfont/headscale/pull/2853)
- Add prefix to registration keys for web authentication tracking (`hskey-reg-{random}`) [#2853](https://github.com/juanfont/headscale/pull/2853)
- Tags can now be tagOwner of other tags [#2930](https://github.com/juanfont/headscale/pull/2930)
- Add `taildrop.enabled` configuration option to enable/disable Taildrop file sharing [#2955](https://github.com/juanfont/headscale/pull/2955)
- Allow disabling the metrics server by setting empty `metrics_listen_addr` [#2914](https://github.com/juanfont/headscale/pull/2914)
- Log ACME/autocert errors for easier debugging [#2933](https://github.com/juanfont/headscale/pull/2933)
- Improve CLI list output formatting [#2951](https://github.com/juanfont/headscale/pull/2951)
- Use Debian 13 distroless base images for containers [#2944](https://github.com/juanfont/headscale/pull/2944)
- Fix ACL policy not applied to new OIDC nodes until client restart [#2890](https://github.com/juanfont/headscale/pull/2890)
- Fix autogroup:self preventing visibility of nodes matched by other ACL rules [#2882](https://github.com/juanfont/headscale/pull/2882)
- Fix nodes being rejected after pre-authentication key expiration [#2917](https://github.com/juanfont/headscale/pull/2917)
- Fix list-routes command respecting identifier filter with JSON output [#2927](https://github.com/juanfont/headscale/pull/2927)
- Add `--id` flag to expire/delete commands as alternative to `--prefix` for API Keys [#3016](https://github.com/juanfont/headscale/pull/3016)
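For example (the ID comes from `headscale apikeys list`; values are illustrative, and `delete` is assumed to accept the same flags):
```bash
# Previously, expiring or deleting an API key required its prefix:
headscale apikeys expire --prefix <prefix>
# The numeric ID can now be used instead:
headscale apikeys expire --id 2
headscale apikeys delete --id 2
```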
## 0.27.1 (2025-11-11)
**Minimum supported Tailscale client version: v1.64.0**
### Changes
- Expire nodes with a custom timestamp [#2828](https://github.com/juanfont/headscale/pull/2828)
- Fix issue where node expiry was reset when tailscaled restarts [#2875](https://github.com/juanfont/headscale/pull/2875)
- Fix OIDC authentication when multiple login URLs are opened [#2861](https://github.com/juanfont/headscale/pull/2861)
- Fix node re-registration failing with expired auth keys [#2859](https://github.com/juanfont/headscale/pull/2859)
- Remove old unused database tables and indices [#2844](https://github.com/juanfont/headscale/pull/2844) [#2872](https://github.com/juanfont/headscale/pull/2872)
- Ignore litestream tables during database validation [#2843](https://github.com/juanfont/headscale/pull/2843)
- Fix exit node visibility to respect ACL rules [#2855](https://github.com/juanfont/headscale/pull/2855)
- Fix SSH policy becoming empty when unknown user is referenced [#2874](https://github.com/juanfont/headscale/pull/2874)
- Fix policy validation when using bypass-grpc mode [#2854](https://github.com/juanfont/headscale/pull/2854)
- Fix autogroup:self interaction with other ACL rules [#2842](https://github.com/juanfont/headscale/pull/2842)
- Fix flaky DERP map shuffle test [#2848](https://github.com/juanfont/headscale/pull/2848)
- Use current stable base images for Debian and Alpine containers [#2827](https://github.com/juanfont/headscale/pull/2827)
## 0.27.0 (2025-10-27)
**Minimum supported Tailscale client version: v1.64.0**
### Database integrity improvements
This release includes a significant database migration that addresses
longstanding issues with the database schema and data integrity that have
accumulated over the years. The migration introduces a `schema.sql` file as the
source of truth for the expected database schema, to ensure that future
migrations do not cause the schema to diverge again.
These issues arose from a combination of factors discovered over time: SQLite
foreign keys not being enforced for many early versions, all migrations being
run in one large function until version 0.23.0, and inconsistent use of GORM's
AutoMigrate feature. Moving forward, all new migrations will be explicit SQL
operations rather than relying on GORM AutoMigrate, and foreign keys will be
enforced throughout the migration process.
We are only improving SQLite databases with this change - PostgreSQL databases
are not affected.
Please read the
[PR description](https://github.com/juanfont/headscale/pull/2617) for more
technical details about the issues and solutions.
**SQLite Database Backup Example:**
```bash
# Stop headscale
systemctl stop headscale
# Backup sqlite database
cp /var/lib/headscale/db.sqlite /var/lib/headscale/db.sqlite.backup
# Backup sqlite WAL/SHM files (if they exist)
cp /var/lib/headscale/db.sqlite-wal /var/lib/headscale/db.sqlite-wal.backup
cp /var/lib/headscale/db.sqlite-shm /var/lib/headscale/db.sqlite-shm.backup
# Start headscale (migration will run automatically)
systemctl start headscale
```
### DERPMap update frequency
The default DERPMap update frequency has been changed from 24 hours to 3 hours.
If you set the `derp.update_frequency` configuration option, it is recommended
to change it to `3h` to ensure that the headscale instance gets the latest
DERPMap updates when upstream is changed.
### Autogroups
This release adds support for the three missing autogroups: `self`
(experimental), `member`, and `tagged`. Please refer to the
[documentation](https://tailscale.com/kb/1018/autogroups/) for a detailed
explanation.
`autogroup:self` is marked as experimental and should be used with caution, but
we need help testing it. Experimental here means two things: first, generating
the packet filter from policies that use `autogroup:self` is very expensive, and
it might perform poorly, or not work at all, on Headscale installations with a
large number of nodes. Second, the implementation might have bugs or edge cases
we are not aware of, meaning that nodes or users might gain _more_ access than
expected. Please report bugs.
### Node store (in memory database)
Under the hood, we have added a new data structure to store nodes in memory.
This data structure is called `NodeStore` and aims to reduce the reading and
writing of nodes at the database layer. We have not benchmarked it, but expect
it to improve performance for read-heavy workloads. We think of it as: in the
"worst case" we have moved the bottleneck somewhere else, and in the "best case"
we should see a good improvement in compute resource usage at the expense of
memory usage. We are quite excited about this change and think it will make it
easier for us to improve the code base over time and make it more correct and
efficient.
### BREAKING
- Remove support for 32-bit binaries [#2692](https://github.com/juanfont/headscale/pull/2692)
- Policy: Zero or empty destination port is no longer allowed [#2606](https://github.com/juanfont/headscale/pull/2606)
- Stricter hostname validation [#2383](https://github.com/juanfont/headscale/pull/2383)
- Hostnames must be valid DNS labels (2-63 characters, alphanumeric and
hyphens only, cannot start/end with hyphen)
- **Client Registration (New Nodes)**: Invalid hostnames are automatically
renamed to `invalid-XXXXXX` format
- `my-laptop` → accepted as-is
- `My-Laptop` → `my-laptop` (lowercased)
- `my_laptop` → `invalid-a1b2c3` (underscore not allowed)
- `test@host` → `invalid-d4e5f6` (@ not allowed)
- `laptop-🚀` → `invalid-j1k2l3` (emoji not allowed)
- **Hostinfo Updates / CLI**: Invalid hostnames are rejected with an error
- Valid names are accepted or lowercased
- Names with invalid characters, too short (<2), too long (>63), or
starting/ending with hyphen are rejected
### Changes
- **Database schema migration improvements for SQLite** [#2617](https://github.com/juanfont/headscale/pull/2617)
- **IMPORTANT: Backup your SQLite database before upgrading**
- Introduces safer table renaming migration strategy
- Addresses longstanding database integrity issues
- Add flag to directly manipulate the policy in the database [#2765](https://github.com/juanfont/headscale/pull/2765)
- DERPmap update frequency default changed from 24h to 3h [#2741](https://github.com/juanfont/headscale/pull/2741)
- DERPmap update mechanism has been improved with retries, and now fails
conservatively, preserving the old map upon failure.
[#2741](https://github.com/juanfont/headscale/pull/2741)
- Add support for `autogroup:member`, `autogroup:tagged` [#2572](https://github.com/juanfont/headscale/pull/2572)
- Fix bug where return routes were being removed by policy [#2767](https://github.com/juanfont/headscale/pull/2767)
- Remove policy v1 code [#2600](https://github.com/juanfont/headscale/pull/2600)
- Refactor Debian/Ubuntu packaging and drop support for Ubuntu 20.04. [#2614](https://github.com/juanfont/headscale/pull/2614)
- Remove redundant check regarding `noise` config [#2658](https://github.com/juanfont/headscale/pull/2658)
- Refactor OpenID Connect documentation [#2625](https://github.com/juanfont/headscale/pull/2625)
- Don't crash if config file is missing [#2656](https://github.com/juanfont/headscale/pull/2656)
- Adds `/robots.txt` endpoint to avoid crawlers [#2643](https://github.com/juanfont/headscale/pull/2643)
- OIDC: Use group claim from UserInfo [#2663](https://github.com/juanfont/headscale/pull/2663)
- OIDC: Update user with claims from UserInfo _before_ comparing with allowed
groups, email and domain
[#2663](https://github.com/juanfont/headscale/pull/2663)
- Policy will now reject invalid fields, making it easier to spot spelling
errors [#2764](https://github.com/juanfont/headscale/pull/2764)
- Add FAQ entry on how to recover from an invalid policy in the database [#2776](https://github.com/juanfont/headscale/pull/2776)
- EXPERIMENTAL: Add support for `autogroup:self` [#2789](https://github.com/juanfont/headscale/pull/2789)
- Add healthcheck command [#2659](https://github.com/juanfont/headscale/pull/2659)
## 0.26.1 (2025-06-06)
### Changes
- Ensure nodes are matching both node key and machine key when connecting. [#2642](https://github.com/juanfont/headscale/pull/2642)
## 0.26.0 (2025-05-14)
### BREAKING
#### Routes
Route internals have been rewritten, removing the dedicated route table in the
database. This was done to simplify the codebase, which had grown unnecessarily
complex after the routes were split into separate tables. The overhead of having
to go via the database and keeping the state in sync made the code very hard to
reason about and prone to errors. The majority of the route state is only
relevant when headscale is running, and is now only kept in memory. As part of
this, the CLI and API have been simplified to reflect the changes:
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | ts-head-ruqsg8 | | 0.0.0.0/0, ::/0 |
2 | ts-unstable-fq7ob4 | | 0.0.0.0/0, ::/0 |
$ headscale nodes approve-routes --identifier 1 --routes 0.0.0.0/0,::/0
Node updated
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | ts-head-ruqsg8 | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0
2 | ts-unstable-fq7ob4 | | 0.0.0.0/0, ::/0 |
```
Note that if an exit route is approved (0.0.0.0/0 or ::/0), both IPv4 and IPv6
will be approved.
- Route API and CLI has been removed [#2422](https://github.com/juanfont/headscale/pull/2422)
- Routes are now managed via the Node API [#2422](https://github.com/juanfont/headscale/pull/2422)
- Only routes accessible to the node will be sent to the node [#2561](https://github.com/juanfont/headscale/pull/2561)
#### Policy v2
This release introduces a new policy implementation. The new policy is a
complete rewrite, and it introduces some significant quality and consistency
improvements. In principle, there are not really any new features, but some
long-standing bugs should now be resolved, or be easier to fix in the future. The
new policy code passes all of our tests.
**Changes**
- The policy is validated and "resolved" when loading, providing errors for
invalid rules and conditions.
- Previously this was done as a mix between load and runtime (when it was
applied to a node).
- This means that when you convert the first time, what was previously a
policy that loaded, but failed at runtime, will now fail at load time.
- Error messages should be more descriptive and informative.
- There is still work to be done here, but it is already improved with "typing"
(e.g. only Users can be put in Groups)
- All users in the policy must contain an `@` character.
- If your user naturally contains an `@`, like an email address, this will just work.
- If it is based on usernames, or other identifiers not containing an `@`, an
`@` should be appended at the end. For example, if your user is `john`, it
must be written as `john@` in the policy.
Migration notes when the policy is stored in the database.
This section **only** applies if the policy is stored in the database and
Headscale 0.26 doesn't start due to a policy error
(`failed to load ACL policy`).
- Start Headscale 0.26 with the environment variable `HEADSCALE_POLICY_V1=1`
set. You can check that Headscale picked up the environment variable by
observing this message during startup: `Using policy manager version: 1`
- Dump the policy to a file: `headscale policy get > policy.json`
- Edit `policy.json` and migrate to policy V2. Use the command
`headscale policy check --file policy.json` to check for policy errors.
- Load the modified policy: `headscale policy set --file policy.json`
- Restart Headscale **without** the environment variable `HEADSCALE_POLICY_V1`.
Headscale should now print the message `Using policy manager version: 2` and
start up successfully.
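Collected as a single shell session, the migration steps above look roughly like this (a sketch; `policy.json` is an arbitrary file name):
```bash
# 1. Start Headscale 0.26 with the v1 policy manager so it can load the old policy
HEADSCALE_POLICY_V1=1 headscale serve
# (look for "Using policy manager version: 1" in the startup log)

# 2. In another shell: dump the policy, edit it, and check it until it passes
headscale policy get > policy.json
headscale policy check --file policy.json   # edit and re-run until no errors

# 3. Load the migrated policy, then restart Headscale without HEADSCALE_POLICY_V1
headscale policy set --file policy.json
```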
**SSH**
The SSH policy has been reworked to be more consistent with the rest of the
policy. In addition, several inconsistencies between our implementation and
Tailscale's upstream have been closed, and this might be a breaking change for
some users. Please refer to the
[upstream documentation](https://tailscale.com/kb/1337/acl-syntax#tailscale-ssh)
for more information on which types are allowed in `src`, `dst` and `users`.
There is one large inconsistency left: we allow `*` as a destination, as we
currently do not support `autogroup:self`, `autogroup:member` and
`autogroup:tagged`. The support for `*` will be removed when we have support for
the autogroups.
**Current state**
The new policy is passing all tests, both integration and unit tests. This does
not mean it is perfect, but it is a good start. Corner cases that currently
work in v1 and are not tested might be broken in v2 (and vice versa).
**We do need help testing this code**
#### Other breaking changes
- Disallow `server_url` and `base_domain` to be equal [#2544](https://github.com/juanfont/headscale/pull/2544)
- Return full user in API for pre auth keys instead of string [#2542](https://github.com/juanfont/headscale/pull/2542)
- Pre auth key API/CLI now uses ID over username [#2542](https://github.com/juanfont/headscale/pull/2542)
- A non-empty list of global nameservers needs to be specified via
`dns.nameservers.global` if the configuration option `dns.override_local_dns`
is enabled or is not specified in the configuration file. This aligns with the
behaviour of tailscale.com.
[#2438](https://github.com/juanfont/headscale/pull/2438)
### Changes
- Use Go 1.24 [#2427](https://github.com/juanfont/headscale/pull/2427)
- Add `headscale policy check` command to check policy [#2553](https://github.com/juanfont/headscale/pull/2553)
- `oidc.map_legacy_users` and `oidc.strip_email_domain` has been removed [#2411](https://github.com/juanfont/headscale/pull/2411)
- Add more information to `/debug` endpoint [#2420](https://github.com/juanfont/headscale/pull/2420)
- It is now possible to inspect running goroutines and take profiles
- View of config, policy, filter, ssh policy per node, connected nodes and
DERPmap
- OIDC: Fetch UserInfo to get EmailVerified if necessary [#2493](https://github.com/juanfont/headscale/pull/2493)
- If an OIDC provider doesn't include the `email_verified` claim in its ID
tokens, Headscale will attempt to get it from the UserInfo endpoint.
- OIDC: Try to populate name, email and username from UserInfo [#2545](https://github.com/juanfont/headscale/pull/2545)
- Improve performance by only querying relevant nodes from the database for node
updates [#2509](https://github.com/juanfont/headscale/pull/2509)
- Node FQDNs in the netmap will now contain a dot (".") at the end. This aligns
with the behaviour of tailscale.com
[#2503](https://github.com/juanfont/headscale/pull/2503)
- Restore support for "Override local DNS" [#2438](https://github.com/juanfont/headscale/pull/2438)
- Add documentation for routes [#2496](https://github.com/juanfont/headscale/pull/2496)
## 0.25.1 (2025-02-25)
### Changes
- Fix issue where registration errors are sent correctly [#2435](https://github.com/juanfont/headscale/pull/2435)
- Fix issue where routes passed on registration were not saved [#2444](https://github.com/juanfont/headscale/pull/2444)
- Fix issue where registration page was displayed twice [#2445](https://github.com/juanfont/headscale/pull/2445)
## 0.25.0 (2025-02-11)
### BREAKING
- Authentication flow has been rewritten [#2374](https://github.com/juanfont/headscale/pull/2374). This change should be
transparent to users, with the exception of some bugfixes that were
discovered and fixed as part of the rewrite.
- When a node is registered with _a new user_, it will be registered as a new
node ([#2327](https://github.com/juanfont/headscale/issues/2327) and
[#1310](https://github.com/juanfont/headscale/issues/1310)).
- A logged out node logging in with the same user will replace the existing
node.
- Remove support for Tailscale clients older than 1.62 (Capability version 87) [#2405](https://github.com/juanfont/headscale/pull/2405)
### Changes
- `oidc.map_legacy_users` is now `false` by default [#2350](https://github.com/juanfont/headscale/pull/2350)
- Print Tailscale version instead of capability versions for outdated nodes [#2391](https://github.com/juanfont/headscale/pull/2391)
- Do not allow renaming of users from OIDC [#2393](https://github.com/juanfont/headscale/pull/2393)
- Change minimum hostname length to 2 [#2393](https://github.com/juanfont/headscale/pull/2393)
- Fix migration error caused by nodes having invalid auth keys [#2412](https://github.com/juanfont/headscale/pull/2412)
- Pre auth keys belonging to a user are no longer deleted with the user [#2396](https://github.com/juanfont/headscale/pull/2396)
- Pre auth keys that are used by a node can no longer be deleted [#2396](https://github.com/juanfont/headscale/pull/2396)
- Rehaul HTTP errors, return better status code and errors to users [#2398](https://github.com/juanfont/headscale/pull/2398)
- Print headscale version and commit on server startup [#2415](https://github.com/juanfont/headscale/pull/2415)
## 0.24.3 (2025-02-07)
### Changes
- Fix migration error caused by nodes having invalid auth keys [#2412](https://github.com/juanfont/headscale/pull/2412)
- Pre auth keys belonging to a user are no longer deleted with the user [#2396](https://github.com/juanfont/headscale/pull/2396)
- Pre auth keys that are used by a node can no longer be deleted [#2396](https://github.com/juanfont/headscale/pull/2396)
## 0.24.2 (2025-01-30)
### Changes
- Fix issue where email and username being equal fails to match in Policy [#2388](https://github.com/juanfont/headscale/pull/2388)
- Delete invalid routes before adding a NOT NULL constraint on node_id [#2386](https://github.com/juanfont/headscale/pull/2386)
## 0.24.1 (2025-01-23)
### Changes
- Fix migration issue with user table for PostgreSQL [#2367](https://github.com/juanfont/headscale/pull/2367)
- Relax username validation to allow emails [#2364](https://github.com/juanfont/headscale/pull/2364)
- Remove invalid routes and add stronger constraints for routes to avoid API
panic [#2371](https://github.com/juanfont/headscale/pull/2371)
- Fix panic when `derp.update_frequency` is 0 [#2368](https://github.com/juanfont/headscale/pull/2368)
## 0.24.0 (2025-01-17)
### Security fix: OIDC changes in Headscale 0.24.0
The following issue _only_ affects Headscale installations which authenticate
with OIDC.
_Headscale v0.23.0 and earlier_ identified OIDC users by the "username" part of
their email address (when `strip_email_domain: true`, the default) or whole
email address (when `strip_email_domain: false`).
Depending on how Headscale and your Identity Provider (IdP) were configured,
only using the `email` claim could allow a malicious user with an IdP account to
take over another Headscale user's account, even when
`strip_email_domain: false`.
This would also cause a user to lose access to their Headscale account if they
changed their email address.
_Headscale v0.24.0_ now identifies OIDC users by the `iss` and `sub` claims.
[These are guaranteed by the OIDC specification to be stable and unique](https://openid.net/specs/openid-connect-core-1_0.html#ClaimStability),
even if a user changes email address. A well-designed IdP will typically set
`sub` to an opaque identifier like a UUID or numeric ID, which has no relation
to the user's name or email address.
Headscale v0.24.0 and later will also automatically update profile fields with
OIDC data on login. This means that users can change those details in your IdP,
and have it populate to Headscale automatically the next time they log in.
However, this may affect the way you reference users in policies.
Headscale v0.23.0 and earlier never recorded the `iss` and `sub` fields, so all
legacy (existing) OIDC accounts _need to be migrated_ to be properly secured.
#### What do I need to do to migrate?
Headscale v0.24.0 has an automatic migration feature, which is enabled by
default (`map_legacy_users: true`). **This will be disabled by default in a
future version of Headscale – any unmigrated users will get new accounts.**
The migration will mostly be done automatically, with one exception. If your
OIDC provider does not supply an `email_verified` claim, Headscale will ignore the
`email`. This means that either the administrator will have to mark the user
emails as verified, or ensure the users verify their emails. Any unverified
emails will be ignored, meaning that the users will get new accounts instead of
being migrated.
Once this exception has been addressed, have all users log into Headscale with their
account, and Headscale will automatically update the account record. This will
be transparent to the users.
When all users have logged in, you can disable the automatic migration by
setting `map_legacy_users: false` in your configuration file.
Please note that `map_legacy_users` will be set to `false` by default in v0.25.0
and the migration mechanism will be removed in v0.26.0.
##### What does automatic migration do?
When automatic migration is enabled (`map_legacy_users: true`), Headscale will
first match an OIDC account to a Headscale account by `iss` and `sub`, and then
fall back to matching OIDC users similarly to how Headscale v0.23.0 did:
- If `strip_email_domain: true` (the default): the Headscale username matches
the "username" part of their email address.
- If `strip_email_domain: false`: the Headscale username matches the _whole_
email address.
On migration, Headscale will change the account's username to their
`preferred_username`. **This could break any ACLs or policies which are
configured to match by username.**
Like with Headscale v0.23.0 and earlier, this migration only works for users who
haven't changed their email address since their last Headscale login.
A _successful_ automated migration should otherwise be transparent to users.
Once a Headscale account has been migrated, it will be _unavailable_ to be
matched by the legacy process. An OIDC login with a matching username, but
_non-matching_ `iss` and `sub` will instead get a _new_ Headscale account.
Because of the way OIDC works, Headscale's automated migration process can
_only_ work when a user tries to log in after the update.
Legacy account migration should have no effect on new installations where all
users have a recorded `sub` and `iss`.
##### What happens when automatic migration is disabled?
When automatic migration is disabled (`map_legacy_users: false`), Headscale will
only try to match an OIDC account to a Headscale account by `iss` and `sub`.
If there is no match, it will get a _new_ Headscale account – even if there was
a legacy account which _could_ have matched and migrated.
We recommend new Headscale users explicitly disable automatic migration – but it
should otherwise have no effect if every account has a recorded `iss` and `sub`.
When automatic migration is disabled, the `strip_email_domain` setting will have
no effect.
Special thanks to @micolous for reviewing, proposing and working with us on
these changes.
#### Other OIDC changes
Headscale now uses
[the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims)
to populate and update user information every time they log in:
| Headscale profile field | OIDC claim | Notes / examples |
| ----------------------- | -------------------- | --------------------------------------------------------------------------------------------------------- |
| email address | `email` | Only used when `"email_verified": true` |
| display name | `name` | eg: `Sam Smith` |
| username | `preferred_username` | Varies depending on IdP and configuration, eg: `ssmith`, `ssmith@idp.example.com`, `\\example.com\ssmith` |
| profile picture | `picture` | URL to a profile picture or avatar |
These should show up nicely in the Tailscale client.
This will also affect the way you
[reference users in policies](https://github.com/juanfont/headscale/pull/2205).
### BREAKING
- Remove `dns.use_username_in_magic_dns` configuration option [#2020](https://github.com/juanfont/headscale/pull/2020),
[#2279](https://github.com/juanfont/headscale/pull/2279)
- Having usernames in magic DNS is no longer possible.
- Remove versions older than 1.56 [#2149](https://github.com/juanfont/headscale/pull/2149)
- Clean up old code required by old versions
- User gRPC/API [#2261](https://github.com/juanfont/headscale/pull/2261):
- If you depend on a Headscale Web UI, you should wait with this update until
the UI has been updated to match the new API.
- `GET /api/v1/user/{name}` and `GetUser` have been removed in favour of
`ListUsers` with an ID parameter
- `RenameUser` and `DeleteUser` now require an ID instead of a name.
### Changes
- Improved compatibility of built-in DERP server with clients connecting over
WebSocket [#2132](https://github.com/juanfont/headscale/pull/2132)
- Allow nodes to use SSH agent forwarding [#2145](https://github.com/juanfont/headscale/pull/2145)
- Fixed processing of fields in post request in MoveNode rpc [#2179](https://github.com/juanfont/headscale/pull/2179)
- Added conversion of 'Hostname' to 'givenName' in a node with FQDN rules
applied [#2198](https://github.com/juanfont/headscale/pull/2198)
- Fixed updating of hostname and givenName when it is updated in HostInfo [#2199](https://github.com/juanfont/headscale/pull/2199)
- Fixed missing `stable-debug` container tag [#2232](https://github.com/juanfont/headscale/pull/2232)
- Loosened up `server_url` and `base_domain` check. It was overly strict in some
cases. [#2248](https://github.com/juanfont/headscale/pull/2248)
- CLI for managing users now accepts `--identifier` in addition to `--name`,
usage of `--identifier` is recommended
[#2261](https://github.com/juanfont/headscale/pull/2261)
- Add `dns.extra_records_path` configuration option [#2262](https://github.com/juanfont/headscale/issues/2262)
- Support client verify for DERP [#2046](https://github.com/juanfont/headscale/pull/2046)
- Add PKCE Verifier for OIDC [#2314](https://github.com/juanfont/headscale/pull/2314)
## 0.23.0 (2024-09-18)
This release was intended to be mainly a code reorganisation and refactoring,
significantly improving the maintainability of the codebase. This should allow
us to improve further and make it easier for the maintainers to keep on top of
the project. However, as you all have noticed, it turned out to become a much
larger, much longer release cycle than anticipated. It has ended up being a
release with a lot of rewrites and changes to the code base and functionality of
Headscale, cleaning up a lot of technical debt and introducing a lot of
improvements. This does come with some breaking changes.
**Please remember to always back up your database between versions**
#### Here is a short summary of the broad topics of changes:
Code has been organised into modules, reducing use of global variables/objects,
isolating concerns and “putting the right things in the logical place”.
The new
[policy](https://github.com/juanfont/headscale/tree/main/hscontrol/policy) and
[mapper](https://github.com/juanfont/headscale/tree/main/hscontrol/mapper)
packages, containing the ACL/Policy logic and the logic for creating the data
served to clients (the network “map”), have been rewritten and improved. This
change has allowed us to finish SSH support and add additional tests throughout
the code to ensure correctness.
The
[“poller”, or streaming logic](https://github.com/juanfont/headscale/blob/main/hscontrol/poll.go)
has been rewritten: instead of keeping track of the latest updates and checking
at a fixed interval, it now uses Go channels, implemented in our new
[notifier](https://github.com/juanfont/headscale/tree/main/hscontrol/notifier)
package, which allows us to send updates to connected clients immediately. This
should both improve performance and reduce the latency before a client picks up
an update.
Headscale now supports sending “delta” updates, thanks to the new mapper and
poller logic, allowing us to only inform nodes about new nodes, changed nodes
and removed nodes. Previously we sent the entire state of the network every time
an update was due.
While we have a pretty good
[test harness](https://github.com/search?q=repo%3Ajuanfont%2Fheadscale+path%3A_test.go&type=code)
for validating our changes, the changes came down to
[284 changed files with 32,316 additions and 24,245 deletions](https://github.com/juanfont/headscale/compare/b01f1f1867136d9b2d7b1392776eb363b482c525...ed78ecd)
and bugs are expected. We need help testing this release. In addition, while we
think the performance should in general be better, there might be regressions in
parts of the platform, particularly where we prioritised correctness over speed.
There are also several bugfixes that have been encountered and fixed as part of
implementing these changes, particularly after improving the test harness as
part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460).
### BREAKING
- Code reorganisation, a lot of code has moved, please review the following PRs
accordingly [#1473](https://github.com/juanfont/headscale/pull/1473)
- Change the structure of database configuration, see
[config-example.yaml](./config-example.yaml) for the new structure.
[#1700](https://github.com/juanfont/headscale/pull/1700)
- Old structure has been removed and the configuration _must_ be converted.
- Adds additional configuration for PostgreSQL for setting max open
connections, idle connections and idle connection lifetime.
- API: Machine is now Node [#1553](https://github.com/juanfont/headscale/pull/1553)
- Remove support for older Tailscale clients [#1611](https://github.com/juanfont/headscale/pull/1611)
- The oldest supported client is 1.42
- Headscale checks that _at least_ one DERP is defined at start [#1564](https://github.com/juanfont/headscale/pull/1564)
- If no DERP is configured, the server will fail to start; this can happen if
it cannot load the DERPMap from file or URL.
- Embedded DERP server requires a private key [#1611](https://github.com/juanfont/headscale/pull/1611)
- Add a filepath entry to
[`derp.server.private_key_path`](https://github.com/juanfont/headscale/blob/b35993981297e18393706b2c963d6db882bba6aa/config-example.yaml#L95)
- Docker images are now built with goreleaser (ko) [#1716](https://github.com/juanfont/headscale/pull/1716)
[#1763](https://github.com/juanfont/headscale/pull/1763)
- Entrypoint of container image has changed from shell to headscale, requiring a
change from `headscale serve` to `serve`
- `/var/lib/headscale` and `/var/run/headscale` are no longer created
automatically, see [container docs](./docs/setup/install/container.md)
- Prefixes are now defined per v4 and v6 range. [#1756](https://github.com/juanfont/headscale/pull/1756)
- `ip_prefixes` option is now `prefixes.v4` and `prefixes.v6`
- `prefixes.allocation` can be set to assign IPs at `sequential` or `random`.
[#1869](https://github.com/juanfont/headscale/pull/1869)
- MagicDNS domains no longer contain usernames
- This is in preparation to fix Headscale's implementation of tags, which
currently does not correctly remove the link between a tagged device and a
user. As tagged devices will not have a user, this will require a change to
the DNS generation, removing the username; see
[#1369](https://github.com/juanfont/headscale/issues/1369) for more
information.
- `use_username_in_magic_dns` can be used to turn this behaviour on again, but
note that this option _will be removed_ when tags are fixed.
- `dns.base_domain` can no longer be the same as (or part of) `server_url`.
- This option brings Headscale's behaviour in line with Tailscale.
- YAML files are no longer supported for headscale policy. [#1792](https://github.com/juanfont/headscale/pull/1792)
- HuJSON is now the only supported format for policy.
- DNS configuration has been restructured [#2034](https://github.com/juanfont/headscale/pull/2034)
- Please review the new [config-example.yaml](./config-example.yaml) for the
new structure.
### Changes
- Use versioned migrations [#1644](https://github.com/juanfont/headscale/pull/1644)
- Make the OIDC callback page better [#1484](https://github.com/juanfont/headscale/pull/1484)
- SSH support [#1487](https://github.com/juanfont/headscale/pull/1487)
- State management has been improved [#1492](https://github.com/juanfont/headscale/pull/1492)
- Use error group handling to ensure tests actually pass [#1535](https://github.com/juanfont/headscale/pull/1535) based on
[#1460](https://github.com/juanfont/headscale/pull/1460)
- Fix hang on SIGTERM [#1492](https://github.com/juanfont/headscale/pull/1492)
taken from [#1480](https://github.com/juanfont/headscale/pull/1480)
- Send logs to stderr by default [#1524](https://github.com/juanfont/headscale/pull/1524)
- Fix [TS-2023-006](https://tailscale.com/security-bulletins/#ts-2023-006)
security UPnP issue [#1563](https://github.com/juanfont/headscale/pull/1563)
- Turn off gRPC logging [#1640](https://github.com/juanfont/headscale/pull/1640)
fixes [#1259](https://github.com/juanfont/headscale/issues/1259)
- Added the possibility to manually create a DERP-map entry which can be
customized, instead of automatically creating it.
[#1565](https://github.com/juanfont/headscale/pull/1565)
- Add support for deleting api keys [#1702](https://github.com/juanfont/headscale/pull/1702)
- Add command to backfill IP addresses for nodes missing IPs from configured
prefixes. [#1869](https://github.com/juanfont/headscale/pull/1869)
- Log available update as warning [#1877](https://github.com/juanfont/headscale/pull/1877)
- Add `autogroup:internet` to Policy [#1917](https://github.com/juanfont/headscale/pull/1917)
- Restore foreign keys and add constraints [#1562](https://github.com/juanfont/headscale/pull/1562)
- Make registration page easier to use on mobile devices
- Make write-ahead-log default on and configurable for SQLite [#1985](https://github.com/juanfont/headscale/pull/1985)
- Add APIs for managing headscale policy. [#1792](https://github.com/juanfont/headscale/pull/1792)
- Fix for registering nodes using preauthkeys when running on a postgres
database in a non-UTC timezone.
[#764](https://github.com/juanfont/headscale/issues/764)
- Make sure integration tests cover postgres for all scenarios
- CLI commands (all except `serve`) now only require minimal configuration; no
more errors or warnings from unset settings
[#2109](https://github.com/juanfont/headscale/pull/2109)
- CLI results are now consistently sent to stdout and errors to stderr [#2109](https://github.com/juanfont/headscale/pull/2109)
- Fix issue where shutting down headscale would hang [#2113](https://github.com/juanfont/headscale/pull/2113)
## 0.22.3 (2023-05-12)
### Changes
- Added missing ca-certificates in Docker image [#1463](https://github.com/juanfont/headscale/pull/1463)
## 0.22.2 (2023-05-10)
### Changes
- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
- Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
- Replace node filter logic, ensuring nodes with access can see each other [#1381](https://github.com/juanfont/headscale/pull/1381)
- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
- Ditch distroless for Docker image, create default socket dir in
`/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)
## 0.22.1 (2023-04-20)
### Changes
- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
## 0.22.0 (2023-04-20)
### Changes
- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs
(issue [#699](https://github.com/juanfont/headscale/issues/699))
[#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809))
[#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
## 0.21.0 (2023-03-20)
### Changes
- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
## 0.20.0 (2023-02-03)
### Changes
- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
- defaults to 180 days like Tailscale SaaS
- adds option to use the expiry time from the OpenID token for the node (see
config-example.yaml)
- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
## 0.19.0 (2023-01-29)
### BREAKING
- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
- **BACKUP your database before upgrading**
- Command line flags previously taking `--namespace` or `-n` will now require
`--user` or `-u`
## 0.18.0 (2023-01-14)
### Changes
- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
## 0.17.1 (2022-12-05)
### Changes
- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
## 0.17.0 (2022-11-26)
### BREAKING
- `noise.private_key_path` has been added and is required for the new noise
protocol.
- Log level option `log_level` was moved to a distinct `log` config section and
renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)
### Important Changes
- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
- Add experimental support for
[SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for
limitations) [#847](https://github.com/juanfont/headscale/pull/847)
- Please note that this support should be considered _partially_ implemented
- SSH ACLs status:
- Support `accept` and `check` (SSH can be enabled and used for connecting
and authentication)
- Rejecting connections **is not supported**, meaning that if you enable
SSH, then assume that _all_ `ssh` connections **will be allowed**.
- If you decide to try this feature, please carefully manage permissions
by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your
clients.
- We are currently improving our testing of the SSH ACLs, help us get an
overview by testing and giving feedback.
- This feature should be considered dangerous and it is disabled by default.
Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
### Changes
- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)
- Give a warning when running Headscale with a reverse proxy improperly
configured for WebSockets [#788](https://github.com/juanfont/headscale/pull/788)
- Fix subnet routers with Primary Routes [#811](https://github.com/juanfont/headscale/pull/811)
- Added support for JSON logs [#653](https://github.com/juanfont/headscale/issues/653)
- Sanitise the node key passed to registration url [#823](https://github.com/juanfont/headscale/pull/823)
- Add support for generating pre-auth keys with tags [#767](https://github.com/juanfont/headscale/pull/767)
- Add support for evaluating `autoApprovers` ACL entries when a machine is
registered [#763](https://github.com/juanfont/headscale/pull/763)
- Add config flag to allow Headscale to start if OIDC provider is down [#829](https://github.com/juanfont/headscale/pull/829)
- Fix prefix length comparison bug in AutoApprovers route evaluation [#862](https://github.com/juanfont/headscale/pull/862)
- Random node DNS suffix only applied if names collide in namespace. [#766](https://github.com/juanfont/headscale/issues/766)
- Remove `ip_prefix` configuration option and warning [#899](https://github.com/juanfont/headscale/pull/899)
- Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
- Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
- Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and
[#971](https://github.com/juanfont/headscale/pull/971)
- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927)
## 0.16.4 (2022-08-21)
### Changes
- Add ability to connect to PostgreSQL over TLS/SSL [#745](https://github.com/juanfont/headscale/pull/745)
- Fix CLI registration of expired machines [#754](https://github.com/juanfont/headscale/pull/754)
## 0.16.3 (2022-08-17)
### Changes
- Fix issue with OIDC authentication [#747](https://github.com/juanfont/headscale/pull/747)
## 0.16.2 (2022-08-14)
### Changes
- Fixed bugs in the client registration process after migration to NodeKey [#735](https://github.com/juanfont/headscale/pull/735)
## 0.16.1 (2022-08-12)
### Changes
- Updated dependencies (including the library that lacked armhf support) [#722](https://github.com/juanfont/headscale/pull/722)
- Fix missing group expansion in function `excludeCorrectlyTaggedNodes` [#563](https://github.com/juanfont/headscale/issues/563)
- Improve registration protocol implementation and switch to NodeKey as main
identifier [#725](https://github.com/juanfont/headscale/pull/725)
- Add ability to connect to PostgreSQL via unix socket [#734](https://github.com/juanfont/headscale/pull/734)
## 0.16.0 (2022-07-25)
**Note:** Take a backup of your database before upgrading.
### BREAKING
- Old ACL syntax is no longer supported ("users" & "ports" -> "src" & "dst").
Please check [the new syntax](https://tailscale.com/kb/1018/acls/).
### Changes
- **Drop** armhf (32-bit ARM) support. [#609](https://github.com/juanfont/headscale/pull/609)
- Headscale fails to serve if the ACL policy file cannot be parsed [#537](https://github.com/juanfont/headscale/pull/537)
- Fix labels cardinality error when registering unknown pre-auth key [#519](https://github.com/juanfont/headscale/pull/519)
- Fix send on closed channel crash in polling [#542](https://github.com/juanfont/headscale/pull/542)
- Fixed spurious calls to setLastStateChangeToNow from ephemeral nodes [#566](https://github.com/juanfont/headscale/pull/566)
- Add command for moving nodes between namespaces [#362](https://github.com/juanfont/headscale/issues/362)
- Added more configuration parameters for OpenID Connect (scopes, free-form
parameters, domain and user allowlist)
- Add command to set tags on a node [#525](https://github.com/juanfont/headscale/issues/525)
- Add command to view tags of nodes [#356](https://github.com/juanfont/headscale/issues/356)
- Add --all (-a) flag to enable routes command [#360](https://github.com/juanfont/headscale/issues/360)
- Fix issue where nodes were not updated across namespaces [#560](https://github.com/juanfont/headscale/pull/560)
- Add the ability to rename a node [#560](https://github.com/juanfont/headscale/pull/560)
- Node DNS names are now unique; a random suffix will be added when a node
joins
- This change contains database changes; remember to **back up** your database
before upgrading
- Add option to enable/disable logtail (Tailscale's logging infrastructure) [#596](https://github.com/juanfont/headscale/pull/596)
- This change disables the logs by default
- Use [Prometheus]'s duration parser, supporting days (`d`), weeks (`w`) and
years (`y`) [#598](https://github.com/juanfont/headscale/pull/598)
- Add support for reloading ACLs with SIGHUP [#601](https://github.com/juanfont/headscale/pull/601)
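For example, a minimal sketch of triggering a reload (assumes a systemd-managed service; adapt to your setup):
```bash
# Reload the ACL policy without restarting headscale:
sudo systemctl kill --signal=SIGHUP headscale
# or, when running outside systemd:
kill -HUP "$(pidof headscale)"
```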
- Use new ACL syntax [#618](https://github.com/juanfont/headscale/pull/618)
- Add -c option to specify config file from command line [#285](https://github.com/juanfont/headscale/issues/285)
[#612](https://github.com/juanfont/headscale/pull/612)
- Add configuration option to allow Tailscale clients to use a random WireGuard
port. [kb/1181/firewalls](https://tailscale.com/kb/1181/firewalls)
[#624](https://github.com/juanfont/headscale/pull/624)
- Improve obtuse UX regarding missing configuration
(`ephemeral_node_inactivity_timeout` not set)
[#639](https://github.com/juanfont/headscale/pull/639)
- Fix nodes being shown as 'offline' in `tailscale status` [#648](https://github.com/juanfont/headscale/pull/648)
- Improve shutdown behaviour [#651](https://github.com/juanfont/headscale/pull/651)
- Drop Gin as web framework in Headscale
[648](https://github.com/juanfont/headscale/pull/648)
[677](https://github.com/juanfont/headscale/pull/677)
- Make tailnet node updates check interval configurable [#675](https://github.com/juanfont/headscale/pull/675)
- Fix regression with HTTP API [#684](https://github.com/juanfont/headscale/pull/684)
- `nodes ls` now prints both Hostname and Name (Issue [#647](https://github.com/juanfont/headscale/issues/647) PR
[#687](https://github.com/juanfont/headscale/pull/687))
## 0.15.0 (2022-03-20)
**Note:** Take a backup of your database before upgrading.
### BREAKING
- Boundaries between Namespaces have been removed and all nodes can communicate
by default [#357](https://github.com/juanfont/headscale/pull/357)
- To limit access between nodes, use [ACLs](./docs/ref/acls.md).
- `/metrics` is now a configurable host:port endpoint: [#344](https://github.com/juanfont/headscale/pull/344). You must update your
`config.yaml` file to include:
```yaml
metrics_listen_addr: 127.0.0.1:9090
```
### Features
- Add support for writing ACL files with YAML [#359](https://github.com/juanfont/headscale/pull/359)
- Users can now use emails in ACL's groups [#372](https://github.com/juanfont/headscale/issues/372)
- Add shorthand aliases for commands and subcommands [#376](https://github.com/juanfont/headscale/pull/376)
- Add `/windows` endpoint for Windows configuration instructions + registry file
download [#392](https://github.com/juanfont/headscale/pull/392)
- Added embedded DERP (and STUN) server into Headscale [#388](https://github.com/juanfont/headscale/pull/388)
### Changes
- Fix a bug where the same IP could be assigned to multiple hosts if they joined
in quick succession [#346](https://github.com/juanfont/headscale/pull/346)
- Simplify the code behind registration of machines [#366](https://github.com/juanfont/headscale/pull/366)
- Nodes are now only written to database if they are registered successfully
- Fix a limitation in the ACLs that prevented users from writing rules with `*`
as source [#374](https://github.com/juanfont/headscale/issues/374)
- Reduce the overhead of marshal/unmarshal for Hostinfo, routes and endpoints by
using specific types in Machine
[#371](https://github.com/juanfont/headscale/pull/371)
- Apply normalization function to FQDN on hostnames when hosts register and
retrieve information [#363](https://github.com/juanfont/headscale/issues/363)
- Fix a bug that prevented the use of `tailscale logout` with OIDC [#508](https://github.com/juanfont/headscale/issues/508)
- Added Tailscale repo HEAD and unstable releases channel to the integration
tests targets [#513](https://github.com/juanfont/headscale/pull/513)
## 0.14.0 (2022-02-24)
**UPCOMING BREAKING**: From the **next** version (`0.15.0`), all machines will
be able to communicate regardless of whether they are in the same namespace.
This means that the behaviour currently limited to ACLs will become the default.
From version `0.15.0`, all limitation of communication must be done with ACLs.
This is a part of aligning `headscale`'s behaviour with Tailscale's upstream
behaviour.
### BREAKING
- ACLs have been rewritten to align with the behaviour the Tailscale Control Panel
provides. **NOTE:** This is only active if you use ACLs
- Namespaces are now treated as Users
- All machines can communicate with all machines by default
- Tags should now work correctly and adding a host to Headscale should now
reload the rules.
- The documentation has a [fictional example](./docs/ref/acls.md) that should
cover some use cases of the ACL features
### Features
- Add support for configurable mTLS [docs](./docs/ref/tls.md) [#297](https://github.com/juanfont/headscale/pull/297)
### Changes
- Remove dependency on CGO (switch from CGO SQLite to pure Go) [#346](https://github.com/juanfont/headscale/pull/346)
## 0.13.0 (2022-02-18)
### Features
- Add IPv6 support to the prefix assigned to namespaces
- Add API Key support
- Enable remote control of `headscale` via CLI
[docs](./docs/ref/api.md#grpc)
- Enable HTTP API (beta, subject to change)
- OpenID Connect users will be mapped per namespace
- Each user will get its own namespace, created if it does not exist
- `oidc.domain_map` option has been removed
- `strip_email_domain` option has been added (see
[config-example.yaml](./config-example.yaml))
### Changes
- `ip_prefix` is now superseded by `ip_prefixes` in the configuration [#208](https://github.com/juanfont/headscale/pull/208)
- Upgrade `tailscale` (1.20.4) and other dependencies to latest [#314](https://github.com/juanfont/headscale/pull/314)
- fix swapped machine<->namespace labels in `/metrics` [#312](https://github.com/juanfont/headscale/pull/312)
- remove key-value based update mechanism for namespace changes [#316](https://github.com/juanfont/headscale/pull/316)
## 0.12.4 (2022-01-29)
### Changes
- Make gRPC Unix Socket permissions configurable [#292](https://github.com/juanfont/headscale/pull/292)
- Trim whitespace before reading Private Key from file [#289](https://github.com/juanfont/headscale/pull/289)
- Add new command to generate a private key for `headscale` [#290](https://github.com/juanfont/headscale/pull/290)
- Fixed an issue where hosts deleted from the control server could be written
back to the database while they were still connected to the control server
[#278](https://github.com/juanfont/headscale/pull/278)
## 0.12.3 (2022-01-13)
### Changes
- Added Alpine container [#270](https://github.com/juanfont/headscale/pull/270)
- Minor updates in dependencies [#271](https://github.com/juanfont/headscale/pull/271)
## 0.12.2 (2022-01-11)
Happy New Year!
### Changes
- Fix Docker release [#258](https://github.com/juanfont/headscale/pull/258)
- Rewrite main docs [#262](https://github.com/juanfont/headscale/pull/262)
- Improve Docker docs [#263](https://github.com/juanfont/headscale/pull/263)
## 0.12.1 (2021-12-24)
(We are skipping 0.12.0 to correct a version-tagging mishap from a few weeks
ago)
### BREAKING
- Upgrade to Tailscale 1.18 [#229](https://github.com/juanfont/headscale/pull/229)
- This change requires a new private key format; private keys are now
generated automatically:
1. Delete your current key
2. Restart `headscale`; a new key will be generated.
3. Restart all Tailscale clients to fetch the new key
### Changes
- Unify configuration example [#197](https://github.com/juanfont/headscale/pull/197)
- Add stricter linting and formatting [#223](https://github.com/juanfont/headscale/pull/223)
### Features
- Add gRPC and HTTP API (HTTP API is currently disabled) [#204](https://github.com/juanfont/headscale/pull/204)
- Use gRPC between the CLI and the server [#206](https://github.com/juanfont/headscale/pull/206),
[#212](https://github.com/juanfont/headscale/pull/212)
- Beta OpenID Connect support [#126](https://github.com/juanfont/headscale/pull/126),
[#227](https://github.com/juanfont/headscale/pull/227)
## 0.11.0 (2021-10-25)
### BREAKING
- Make headscale fetch DERP map from URL and file [#196](https://github.com/juanfont/headscale/pull/196)
================================================
FILE: CLAUDE.md
================================================
@AGENTS.md
================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation
in our community a harassment-free experience for everyone, regardless
of age, body size, visible or invisible disability, ethnicity, sex
characteristics, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance,
race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open,
welcoming, diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for
our community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our
mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or
political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in
a professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our
standards of acceptable behavior and will take appropriate and fair
corrective action in response to any behavior that they deem
inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit,
or reject comments, commits, code, wiki edits, issues, and other
contributions that are not aligned to this Code of Conduct, and will
communicate reasons for moderation decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also
applies when an individual is officially representing the community in
public spaces. Examples of representing our community include using an
official e-mail address, posting via an official social media account,
or acting as an appointed representative at an online or offline
event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
on our [Discord server](https://discord.gg/c84AZQhmpx). All complaints
will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and
security of the reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in
determining the consequences for any action they deem in violation of
this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior
deemed unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders,
providing clarity around the nature of the violation and an
explanation of why the behavior was inappropriate. A public apology
may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued
behavior. No interaction with the people involved, including
unsolicited interaction with those enforcing the Code of Conduct, for
a specified period of time. This includes avoiding interactions in
community spaces as well as external channels like social
media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards,
including sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or
public communication with the community for a specified period of
time. No public or private interaction with the people involved,
including unsolicited interaction with those enforcing the Code of
Conduct, is allowed during this period. Violating these terms may lead
to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of
community standards, including sustained inappropriate behavior,
harassment of an individual, or aggression toward or disparagement of
classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction
within the community.
## Attribution
This Code of Conduct is adapted from the [Contributor
Covenant][homepage], version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of
conduct enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the
FAQ at https://www.contributor-covenant.org/faq. Translations are
available at https://www.contributor-covenant.org/translations.
================================================
FILE: CONTRIBUTING.md
================================================
# Contributing
Headscale is "Open Source, acknowledged contribution", this means that any contribution will have to be discussed with the maintainers before being added to the project.
This model has been chosen to reduce the risk of burnout by limiting the maintenance overhead of reviewing and validating third-party code.
## Why do we have this model?
Headscale has a small maintainer team that tries to balance working on the project, fixing bugs and reviewing contributions.
When we work on issues ourselves, we develop first-hand knowledge of the code, which makes it possible for us to maintain and own the code as the project develops.
Code contributions are seen as a positive thing. People enjoy and engage with our project, but it also comes with some challenges: we have to understand the code, we have to understand the feature, we might have to become familiar with external libraries or services, and we have to think about security implications. All of those steps are required during the review process. After the code has been merged, the feature has to be maintained, and any changes reliant on external services must be updated and expanded accordingly.
The review and day-one maintenance place a significant burden on the maintainers. We often hope that the contributor will help out, but we have found that most of the time, they disappear after their new feature has been added.
This means that when someone contributes, we are mostly happy about it, but we do have to run it through a series of checks to establish whether we can actually maintain this feature.
## What do we require?
A general description is provided here and an explicit list is provided in our pull request template.
All new features have to start out with a design document, which should be discussed on the issue tracker (not Discord). It should include a use case for the feature, how it can be implemented, who will implement it, and a plan for maintaining it.
All features have to be end-to-end tested (integration tests) and have good unit test coverage to ensure that they work as expected. This will also ensure that the feature continues to work as expected over time. If a change cannot be tested, a strong case for why this is not possible needs to be presented.
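To make the unit-test expectation concrete, here is a minimal sketch of the table-driven style that fits well; the `normalizeHostname` helper below is hypothetical and exists only to illustrate the shape of such a test, it is not part of headscale:

```go
package example

import (
	"strings"
	"testing"
)

// normalizeHostname is a hypothetical helper used only to demonstrate
// the testing style; it is not part of the headscale codebase.
func normalizeHostname(name string) string {
	return strings.ToLower(strings.TrimSpace(name))
}

func TestNormalizeHostname(t *testing.T) {
	tests := []struct {
		name string
		in   string
		want string
	}{
		{name: "lowercases", in: "Node-1", want: "node-1"},
		{name: "trims whitespace", in: "  node-2 ", want: "node-2"},
		{name: "empty input", in: "", want: ""},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := normalizeHostname(tt.in); got != tt.want {
				t.Errorf("normalizeHostname(%q) = %q, want %q", tt.in, got, tt.want)
			}
		})
	}
}
```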
The contributor should help to maintain the feature over time. If the feature is not maintained properly, the maintainers reserve the right to remove features they deem unmaintainable. This should help to improve the quality of the software and keep it in a maintainable state.
## Bug fixes
Headscale is open to code contributions for bug fixes without discussion.
## Documentation
If you find mistakes in the documentation, please submit a fix to the documentation.
================================================
FILE: Dockerfile.derper
================================================
# For testing purposes only
FROM golang:1.26.1-alpine AS build-env
WORKDIR /go/src
RUN apk add --no-cache git
ARG VERSION_BRANCH=main
RUN git clone https://github.com/tailscale/tailscale.git --branch=$VERSION_BRANCH --depth=1
WORKDIR /go/src/tailscale
ARG TARGETARCH
RUN GOARCH=$TARGETARCH go install -v ./cmd/derper
FROM alpine:3.22
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/
ENTRYPOINT [ "/usr/local/bin/derper" ]
================================================
FILE: Dockerfile.integration
================================================
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM docker.io/golang:1.26.1-trixie AS builder
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
# Install delve debugger first - rarely changes, good cache candidate
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Download dependencies - only invalidated when go.mod/go.sum change
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
# Copy source and build - invalidated on any source change
COPY . .
# Build debug binary with debug symbols for delve
RUN CGO_ENABLED=0 GOOS=linux go build -gcflags="all=-N -l" -o /go/bin/headscale ./cmd/headscale
# Runtime stage
FROM debian:trixie-slim
RUN apt-get --update install --no-install-recommends --yes \
bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
&& apt-get dist-clean
RUN mkdir -p /var/run/headscale
# Copy binaries from builder
COPY --from=builder /go/bin/headscale /usr/local/bin/headscale
COPY --from=builder /go/bin/dlv /usr/local/bin/dlv
# Copy source code for delve source-level debugging
COPY --from=builder /go/src/headscale /go/src/headscale
WORKDIR /go/src/headscale
# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp 40000/tcp
CMD ["dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/usr/local/bin/headscale", "--"]
================================================
FILE: Dockerfile.integration-ci
================================================
# Minimal CI image - expects pre-built headscale binary in build context
# For local development with delve debugging, use Dockerfile.integration instead
FROM debian:trixie-slim
RUN apt-get --update install --no-install-recommends --yes \
bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
&& apt-get dist-clean
RUN mkdir -p /var/run/headscale
# Copy pre-built headscale binary from build context
COPY headscale /usr/local/bin/headscale
ENTRYPOINT []
EXPOSE 8080/tcp
CMD ["/usr/local/bin/headscale"]
================================================
FILE: Dockerfile.tailscale-HEAD
================================================
# Copyright (c) Tailscale Inc & AUTHORS
# SPDX-License-Identifier: BSD-3-Clause
# This Dockerfile is more or less lifted from tailscale/tailscale
# to ensure a similar build process when testing the HEAD of tailscale.
FROM golang:1.26.1-alpine AS build-env
WORKDIR /go/src
RUN apk add --no-cache git
# Replace `RUN git...` with `COPY` and a local checked out version of Tailscale in `./tailscale`
# to test specific commits of the Tailscale client. This is useful when trying to find out why
# something specific broke between two versions of Tailscale with for example `git bisect`.
# COPY ./tailscale .
RUN git clone https://github.com/tailscale/tailscale.git
WORKDIR /go/src/tailscale
# see build_docker.sh
ARG VERSION_LONG=""
ENV VERSION_LONG=$VERSION_LONG
ARG VERSION_SHORT=""
ENV VERSION_SHORT=$VERSION_SHORT
ARG VERSION_GIT_HASH=""
ENV VERSION_GIT_HASH=$VERSION_GIT_HASH
ARG TARGETARCH
ARG BUILD_TAGS=""
RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\
-X tailscale.com/version.longStamp=$VERSION_LONG \
-X tailscale.com/version.shortStamp=$VERSION_SHORT \
-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
FROM alpine:3.22
# Upstream: ca-certificates ip6tables iptables iproute2
# Tests: curl python3 (traceroute via BusyBox)
RUN apk add --no-cache ca-certificates curl ip6tables iptables iproute2 python3
COPY --from=build-env /go/bin/* /usr/local/bin/
# For compat with the previous run.sh, although ideally you should be
# using build_docker.sh which sets an entrypoint for the image.
RUN mkdir /tailscale && ln -s /usr/local/bin/containerboot /tailscale/run.sh
================================================
FILE: LICENSE
================================================
BSD 3-Clause License
Copyright (c) 2020, Juan Font
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================
FILE: Makefile
================================================
# Headscale Makefile
# Modern Makefile following best practices
# Version calculation
VERSION ?= $(shell git describe --always --tags --dirty)
# Build configuration
GOOS ?= $(shell uname | tr '[:upper:]' '[:lower:]')
ifeq ($(filter $(GOOS), openbsd netbsd solaris plan9), )
PIE_FLAGS = -buildmode=pie
endif
# Tool availability check with nix warning
define check_tool
@command -v $(1) >/dev/null 2>&1 || { \
echo "Warning: $(1) not found. Run 'nix develop' to ensure all dependencies are available."; \
exit 1; \
}
endef
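# Usage inside a recipe: $(call check_tool,go)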
# Source file collections using shell find for better performance
GO_SOURCES := $(shell find . -name '*.go' -not -path './gen/*' -not -path './vendor/*')
PROTO_SOURCES := $(shell find . -name '*.proto' -not -path './gen/*' -not -path './vendor/*')
PRETTIER_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')
# Default target
.PHONY: all
all: lint test build
# Dependency checking
.PHONY: check-deps
check-deps:
$(call check_tool,go)
$(call check_tool,golangci-lint)
$(call check_tool,gofumpt)
$(call check_tool,mdformat)
$(call check_tool,prettier)
$(call check_tool,clang-format)
$(call check_tool,buf)
# Build targets
.PHONY: build
build: check-deps $(GO_SOURCES) go.mod go.sum
@echo "Building headscale..."
go build $(PIE_FLAGS) -ldflags "-X main.version=$(VERSION)" -o headscale ./cmd/headscale
# Test targets
.PHONY: test
test: check-deps $(GO_SOURCES) go.mod go.sum
@echo "Running Go tests..."
go test -race ./...
# Formatting targets
.PHONY: fmt
fmt: fmt-go fmt-mdformat fmt-prettier fmt-proto
.PHONY: fmt-go
fmt-go: check-deps $(GO_SOURCES)
@echo "Formatting Go code..."
gofumpt -l -w .
golangci-lint run --fix
.PHONY: fmt-mdformat
fmt-mdformat: check-deps
@echo "Formatting documentation..."
mdformat docs/
.PHONY: fmt-prettier
fmt-prettier: check-deps $(PRETTIER_SOURCES)
@echo "Formatting markup and config files..."
prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}'
.PHONY: fmt-proto
fmt-proto: check-deps $(PROTO_SOURCES)
@echo "Formatting Protocol Buffer files..."
clang-format -i $(PROTO_SOURCES)
# Linting targets
.PHONY: lint
lint: lint-go lint-proto
.PHONY: lint-go
lint-go: check-deps $(GO_SOURCES) go.mod go.sum
@echo "Linting Go code..."
golangci-lint run --timeout 10m
.PHONY: lint-proto
lint-proto: check-deps $(PROTO_SOURCES)
@echo "Linting Protocol Buffer files..."
cd proto/ && buf lint
# Code generation
.PHONY: generate
generate: check-deps
@echo "Generating code..."
go generate ./...
# Clean targets
.PHONY: clean
clean:
rm -rf headscale gen
# Development workflow
.PHONY: dev
dev: fmt lint test build
# Help target
.PHONY: help
help:
@echo "Headscale Development Makefile"
@echo ""
@echo "Main targets:"
@echo " all - Run lint, test, and build (default)"
@echo " build - Build headscale binary"
@echo " test - Run Go tests"
@echo " fmt - Format all code (Go, docs, proto)"
@echo " lint - Lint all code (Go, proto)"
@echo " generate - Generate code from Protocol Buffers"
@echo " dev - Full development workflow (fmt + lint + test + build)"
@echo " clean - Clean build artifacts"
@echo ""
@echo "Specific targets:"
@echo " fmt-go - Format Go code only"
@echo " fmt-mdformat - Format documentation only"
@echo " fmt-prettier - Format markup and config files only"
@echo " fmt-proto - Format Protocol Buffer files only"
@echo " lint-go - Lint Go code only"
@echo " lint-proto - Lint Protocol Buffer files only"
@echo ""
@echo "Dependencies:"
@echo " check-deps - Verify required tools are available"
@echo ""
@echo "Note: If not running in a nix shell, ensure dependencies are available:"
@echo " nix develop"
================================================
FILE: README.md
================================================
# headscale
An open source, self-hosted implementation of the Tailscale control server.
Join our [Discord server](https://discord.gg/c84AZQhmpx) for a chat.
**Note:** Always select the same GitHub tag as the released version you use
to ensure you have the correct example configuration. The `main` branch might
contain unreleased changes. The documentation is available for stable and
development versions:
- [Documentation for the stable version](https://headscale.net/stable/)
- [Documentation for the development version](https://headscale.net/development/)
## What is Tailscale
Tailscale is [a modern VPN](https://tailscale.com/) built on top of
[Wireguard](https://www.wireguard.com/).
It [works like an overlay network](https://tailscale.com/blog/how-tailscale-works/)
between the computers of your networks - using
[NAT traversal](https://tailscale.com/blog/how-nat-traversal-works/).
Everything in Tailscale is Open Source, except the GUI clients for proprietary
operating systems (Windows and macOS/iOS) and the control server.
The control server works as an exchange point of Wireguard public keys for the
nodes in the Tailscale network. It assigns the IP addresses of the clients,
creates the boundaries between each user, enables sharing machines between users,
and exposes the advertised routes of your nodes.
A [Tailscale network (tailnet)](https://tailscale.com/kb/1136/tailnet/) is a
private network which Tailscale assigns to a user, be that an individual or an
organisation.
## Design goal
Headscale aims to implement a self-hosted, open source alternative to the
[Tailscale](https://tailscale.com/) control server. Headscale's goal is to
provide self-hosters and hobbyists with an open-source server they can use for
their projects and labs. It implements a narrow scope, a _single_ Tailscale
network (tailnet), suitable for personal use or a small open-source
organisation.
## Supporting Headscale
If you like `headscale` and find it useful, there are sponsorship and donation
buttons available in the repo.
## Features
Please see ["Features" in the documentation](https://headscale.net/stable/about/features/).
## Client OS support
Please see ["Client and operating system support" in the documentation](https://headscale.net/stable/about/clients/).
## Running headscale
**Please note that we do not support nor encourage the use of reverse proxies
and containers to run Headscale.**
Please have a look at the [`documentation`](https://headscale.net/stable/).
For NixOS users, a module is available in [`nix/`](./nix/).
## Talks
- Fosdem 2026 (video): [Headscale & Tailscale: The complementary open source clone](https://fosdem.org/2026/schedule/event/KYQ3LL-headscale-the-complementary-open-source-clone/)
- presented by Kristoffer Dalby
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
- presented by Juan Font Alonso and Kristoffer Dalby
## Disclaimer
This project is not associated with Tailscale Inc.
However, one of the active maintainers for Headscale [is employed by Tailscale](https://tailscale.com/blog/opensource) and he is allowed to spend work hours contributing to the project. Contributions from this maintainer are reviewed by other maintainers.
The maintainers work together on setting the direction for the project. The underlying principle is to serve the community of self-hosters, enthusiasts and hobbyists - while having a sustainable project.
## Contributing
Please read the [CONTRIBUTING.md](./CONTRIBUTING.md) file.
### Requirements
To contribute to headscale you need the latest version of [Go](https://golang.org)
and [Buf](https://buf.build) (Protobuf generator).
We recommend using [Nix](https://nixos.org/) to set up a development environment. This can
be done with `nix develop`, which will install the tools and give you a shell.
This guarantees that you will have the same dev env as the `headscale` maintainers.
### Code style
To ensure we have some consistency with a growing number of contributions,
this project has adopted linting and style/formatting rules:
The **Go** code is linted with [`golangci-lint`](https://golangci-lint.run) and
formatted with [`golines`](https://github.com/segmentio/golines) (width 88) and
[`gofumpt`](https://github.com/mvdan/gofumpt).
Please configure your editor to run the tools while developing and make sure to
run `make lint` and `make fmt` before committing any code.
The **Proto** code is linted with [`buf`](https://docs.buf.build/lint/overview) and
formatted with [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html).
The **docs** are formatted with [`mdformat`](https://mdformat.readthedocs.io).
The **rest** (Markdown, YAML, etc.) is formatted with [`prettier`](https://prettier.io).
Check out the `.golangci.yaml` and `Makefile` to see the specific configuration.
### Install development tools
- Go
- Buf
- Protobuf tools
Install and activate:
```shell
nix develop
```
### Testing and building
Some parts of the project require Go code to be generated from the Protobuf
definitions. If changes are made in `proto/`, the code must be (re-)generated with:
```shell
make generate
```
**Note**: Please check in changes from `gen/` in a separate commit to make it easier to review.
To run the tests:
```shell
make test
```
To build the program:
```shell
make build
```
### Development workflow
We recommend using Nix for dependency management to ensure you have all required tools. If you prefer to manage dependencies yourself, you can use Make directly:
**With Nix (recommended):**
```shell
nix develop
make test
make build
```
**With your own dependencies:**
```shell
make test
make build
```
The Makefile will warn you if any required tools are missing and suggest running `nix develop`. Run `make help` to see all available targets.
## Contributors
Made with [contrib.rocks](https://contrib.rocks).
================================================
FILE: buf.gen.yaml
================================================
version: v1
plugins:
- name: go
out: gen/go
opt:
- paths=source_relative
- name: go-grpc
out: gen/go
opt:
- paths=source_relative
- name: grpc-gateway
out: gen/go
opt:
- paths=source_relative
- generate_unbound_methods=true
# - name: gorm
# out: gen/go
# opt:
# - paths=source_relative,enums=string,gateway=true
- name: openapiv2
out: gen/openapiv2
================================================
FILE: cmd/headscale/cli/api_key.go
================================================
package cli
import (
"context"
"fmt"
"strconv"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
)
const (
// DefaultAPIKeyExpiry is 90 days.
DefaultAPIKeyExpiry = "90d"
)
func init() {
rootCmd.AddCommand(apiKeysCmd)
apiKeysCmd.AddCommand(listAPIKeys)
createAPIKeyCmd.Flags().
StringP("expiration", "e", DefaultAPIKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
apiKeysCmd.AddCommand(createAPIKeyCmd)
expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
expireAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
apiKeysCmd.AddCommand(expireAPIKeyCmd)
deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
deleteAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
apiKeysCmd.AddCommand(deleteAPIKeyCmd)
}
var apiKeysCmd = &cobra.Command{
Use: "apikeys",
Short: "Handle the Api keys in Headscale",
Aliases: []string{"apikey", "api"},
}
var listAPIKeys = &cobra.Command{
Use: "list",
Short: "List the Api keys for headscale",
Aliases: []string{"ls", "show"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
response, err := client.ListApiKeys(ctx, &v1.ListApiKeysRequest{})
if err != nil {
return fmt.Errorf("listing api keys: %w", err)
}
return printListOutput(cmd, response.GetApiKeys(), func() error {
tableData := pterm.TableData{
{"ID", "Prefix", "Expiration", "Created"},
}
for _, key := range response.GetApiKeys() {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10),
key.GetPrefix(),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
})
}
return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
})
}),
}
var createAPIKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new Api key",
Long: `
Creates a new Api key; the Api key is only visible on creation
and cannot be retrieved again.
If you lose a key, create a new one and revoke (expire) the old one.`,
Aliases: []string{"c", "new"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
expiration, err := expirationFromFlag(cmd)
if err != nil {
return err
}
response, err := client.CreateApiKey(ctx, &v1.CreateApiKeyRequest{
Expiration: expiration,
})
if err != nil {
return fmt.Errorf("creating api key: %w", err)
}
return printOutput(cmd, response.GetApiKey(), response.GetApiKey())
}),
}
// apiKeyIDOrPrefix reads --id and --prefix from cmd and validates that
// exactly one is provided.
func apiKeyIDOrPrefix(cmd *cobra.Command) (uint64, string, error) {
id, _ := cmd.Flags().GetUint64("id")
prefix, _ := cmd.Flags().GetString("prefix")
switch {
case id == 0 && prefix == "":
return 0, "", fmt.Errorf("either --id or --prefix must be provided: %w", errMissingParameter)
case id != 0 && prefix != "":
return 0, "", fmt.Errorf("only one of --id or --prefix can be provided: %w", errMissingParameter)
}
return id, prefix, nil
}
var expireAPIKeyCmd = &cobra.Command{
Use: "expire",
Short: "Expire an ApiKey",
Aliases: []string{"revoke", "exp", "e"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
id, prefix, err := apiKeyIDOrPrefix(cmd)
if err != nil {
return err
}
response, err := client.ExpireApiKey(ctx, &v1.ExpireApiKeyRequest{
Id: id,
Prefix: prefix,
})
if err != nil {
return fmt.Errorf("expiring api key: %w", err)
}
return printOutput(cmd, response, "Key expired")
}),
}
var deleteAPIKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete an ApiKey",
Aliases: []string{"remove", "del"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
id, prefix, err := apiKeyIDOrPrefix(cmd)
if err != nil {
return err
}
response, err := client.DeleteApiKey(ctx, &v1.DeleteApiKeyRequest{
Id: id,
Prefix: prefix,
})
if err != nil {
return fmt.Errorf("deleting api key: %w", err)
}
return printOutput(cmd, response, "Key deleted")
}),
}
================================================
FILE: cmd/headscale/cli/auth.go
================================================
package cli
import (
"context"
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(authCmd)
authRegisterCmd.Flags().StringP("user", "u", "", "User")
authRegisterCmd.Flags().String("auth-id", "", "Auth ID")
mustMarkRequired(authRegisterCmd, "user", "auth-id")
authCmd.AddCommand(authRegisterCmd)
authApproveCmd.Flags().String("auth-id", "", "Auth ID")
mustMarkRequired(authApproveCmd, "auth-id")
authCmd.AddCommand(authApproveCmd)
authRejectCmd.Flags().String("auth-id", "", "Auth ID")
mustMarkRequired(authRejectCmd, "auth-id")
authCmd.AddCommand(authRejectCmd)
}
var authCmd = &cobra.Command{
Use: "auth",
Short: "Manage node authentication and approval",
}
var authRegisterCmd = &cobra.Command{
Use: "register",
Short: "Register a node to your network",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
user, _ := cmd.Flags().GetString("user")
authID, _ := cmd.Flags().GetString("auth-id")
request := &v1.AuthRegisterRequest{
AuthId: authID,
User: user,
}
response, err := client.AuthRegister(ctx, request)
if err != nil {
return fmt.Errorf("registering node: %w", err)
}
return printOutput(
cmd,
response.GetNode(),
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
}),
}
var authApproveCmd = &cobra.Command{
Use: "approve",
Short: "Approve a pending authentication request",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
authID, _ := cmd.Flags().GetString("auth-id")
request := &v1.AuthApproveRequest{
AuthId: authID,
}
response, err := client.AuthApprove(ctx, request)
if err != nil {
return fmt.Errorf("approving auth request: %w", err)
}
return printOutput(cmd, response, "Auth request approved")
}),
}
var authRejectCmd = &cobra.Command{
Use: "reject",
Short: "Reject a pending authentication request",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
authID, _ := cmd.Flags().GetString("auth-id")
request := &v1.AuthRejectRequest{
AuthId: authID,
}
response, err := client.AuthReject(ctx, request)
if err != nil {
return fmt.Errorf("rejecting auth request: %w", err)
}
return printOutput(cmd, response, "Auth request rejected")
}),
}
================================================
FILE: cmd/headscale/cli/configtest.go
================================================
package cli
import (
"fmt"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(configTestCmd)
}
var configTestCmd = &cobra.Command{
Use: "configtest",
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
RunE: func(cmd *cobra.Command, args []string) error {
_, err := newHeadscaleServerWithConfig()
if err != nil {
return fmt.Errorf("configuration error: %w", err)
}
return nil
},
}
================================================
FILE: cmd/headscale/cli/debug.go
================================================
package cli
import (
"context"
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(debugCmd)
createNodeCmd.Flags().StringP("name", "", "", "Name")
createNodeCmd.Flags().StringP("user", "u", "", "User")
createNodeCmd.Flags().StringP("key", "k", "", "Key")
mustMarkRequired(createNodeCmd, "name", "user", "key")
createNodeCmd.Flags().
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to advertise")
debugCmd.AddCommand(createNodeCmd)
}
var debugCmd = &cobra.Command{
Use: "debug",
Short: "debug and testing commands",
Long: "debug contains extra commands used for debugging and testing headscale",
}
var createNodeCmd = &cobra.Command{
Use: "create-node",
Short: "Create a node that can be registered with `auth register <>` command",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
user, _ := cmd.Flags().GetString("user")
name, _ := cmd.Flags().GetString("name")
registrationID, _ := cmd.Flags().GetString("key")
_, err := types.AuthIDFromString(registrationID)
if err != nil {
return fmt.Errorf("parsing machine key: %w", err)
}
routes, _ := cmd.Flags().GetStringSlice("route")
request := &v1.DebugCreateNodeRequest{
Key: registrationID,
Name: name,
User: user,
Routes: routes,
}
response, err := client.DebugCreateNode(ctx, request)
if err != nil {
return fmt.Errorf("creating node: %w", err)
}
return printOutput(cmd, response.GetNode(), "Node created")
}),
}
================================================
FILE: cmd/headscale/cli/dump_config.go
================================================
package cli
import (
"fmt"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func init() {
rootCmd.AddCommand(dumpConfigCmd)
}
var dumpConfigCmd = &cobra.Command{
Use: "dumpConfig",
Short: "dump current config to /etc/headscale/config.dump.yaml, integration test only",
Hidden: true,
RunE: func(cmd *cobra.Command, args []string) error {
err := viper.WriteConfigAs("/etc/headscale/config.dump.yaml")
if err != nil {
return fmt.Errorf("dumping config: %w", err)
}
return nil
},
}
================================================
FILE: cmd/headscale/cli/generate.go
================================================
package cli
import (
"fmt"
"github.com/spf13/cobra"
"tailscale.com/types/key"
)
func init() {
rootCmd.AddCommand(generateCmd)
generateCmd.AddCommand(generatePrivateKeyCmd)
}
var generateCmd = &cobra.Command{
Use: "generate",
Short: "Generate commands",
Aliases: []string{"gen"},
}
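// generatePrivateKeyCmd generates a new machine private key for the
// headscale server and prints it in its marshalled text form.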
var generatePrivateKeyCmd = &cobra.Command{
Use: "private-key",
Short: "Generate a private key for the headscale server",
RunE: func(cmd *cobra.Command, args []string) error {
machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText()
if err != nil {
return fmt.Errorf("marshalling machine key: %w", err)
}
return printOutput(cmd, map[string]string{
"private_key": string(machineKeyStr),
},
string(machineKeyStr))
},
}
================================================
FILE: cmd/headscale/cli/health.go
================================================
package cli
import (
"context"
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(healthCmd)
}
var healthCmd = &cobra.Command{
Use: "health",
Short: "Check the health of the Headscale server",
Long: "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
response, err := client.Health(ctx, &v1.HealthRequest{})
if err != nil {
return fmt.Errorf("checking health: %w", err)
}
return printOutput(cmd, response, "")
}),
}
================================================
FILE: cmd/headscale/cli/mockoidc.go
================================================
package cli
import (
"context"
"encoding/json"
"fmt"
"net"
"net/http"
"os"
"strconv"
"time"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/oauth2-proxy/mockoidc"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
type Error string
func (e Error) Error() string { return string(e) }
const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
errMockOidcAddrNotDefined = Error("MOCKOIDC_ADDR not defined")
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
errMockOidcUsersNotDefined = Error("MOCKOIDC_USERS not defined")
refreshTTL = 60 * time.Minute
)
var accessTTL = 2 * time.Minute
func init() {
rootCmd.AddCommand(mockOidcCmd)
}
var mockOidcCmd = &cobra.Command{
Use: "mockoidc",
Short: "Runs a mock OIDC server for testing",
Long: "This internal command runs a OpenID Connect for testing purposes",
RunE: func(cmd *cobra.Command, args []string) error {
err := mockOIDC()
if err != nil {
return fmt.Errorf("running mock OIDC server: %w", err)
}
return nil
},
}
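// mockOIDC reads its configuration from the environment and starts the mock
// OIDC server. MOCKOIDC_CLIENT_ID, MOCKOIDC_CLIENT_SECRET, MOCKOIDC_ADDR,
// MOCKOIDC_PORT and MOCKOIDC_USERS (a JSON list of users) are required;
// MOCKOIDC_ACCESS_TTL optionally overrides the default access token TTL.
// The function blocks forever once the server is listening.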
func mockOIDC() error {
clientID := os.Getenv("MOCKOIDC_CLIENT_ID")
if clientID == "" {
return errMockOidcClientIDNotDefined
}
clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
if clientSecret == "" {
return errMockOidcClientSecretNotDefined
}
addrStr := os.Getenv("MOCKOIDC_ADDR")
if addrStr == "" {
return errMockOidcAddrNotDefined
}
portStr := os.Getenv("MOCKOIDC_PORT")
if portStr == "" {
return errMockOidcPortNotDefined
}
accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
if accessTTLOverride != "" {
newTTL, err := time.ParseDuration(accessTTLOverride)
if err != nil {
return err
}
accessTTL = newTTL
}
userStr := os.Getenv("MOCKOIDC_USERS")
if userStr == "" {
return errMockOidcUsersNotDefined
}
var users []mockoidc.MockUser
err := json.Unmarshal([]byte(userStr), &users)
if err != nil {
return fmt.Errorf("unmarshalling users: %w", err)
}
log.Info().Interface(zf.Users, users).Msg("loading users from JSON")
log.Info().Msgf("access token TTL: %s", accessTTL)
port, err := strconv.Atoi(portStr)
if err != nil {
return err
}
mock, err := getMockOIDC(clientID, clientSecret, users)
if err != nil {
return err
}
listener, err := new(net.ListenConfig).Listen(context.Background(), "tcp", fmt.Sprintf("%s:%d", addrStr, port))
if err != nil {
return err
}
err = mock.Start(listener, nil)
if err != nil {
return err
}
log.Info().Msgf("mock OIDC server listening on %s", listener.Addr().String())
log.Info().Msgf("issuer: %s", mock.Issuer())
c := make(chan struct{})
<-c
return nil
}
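// getMockOIDC builds a mockoidc.MockOIDC instance with a fresh keypair, the
// given client credentials and a user queue seeded from users.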
func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser) (*mockoidc.MockOIDC, error) {
keypair, err := mockoidc.NewKeypair(nil)
if err != nil {
return nil, err
}
userQueue := mockoidc.UserQueue{}
for _, user := range users {
userQueue.Push(&user)
}
mock := mockoidc.MockOIDC{
ClientID: clientID,
ClientSecret: clientSecret,
AccessTTL: accessTTL,
RefreshTTL: refreshTTL,
CodeChallengeMethodsSupported: []string{"plain", "S256"},
Keypair: keypair,
SessionStore: mockoidc.NewSessionStore(),
UserQueue: &userQueue,
ErrorQueue: &mockoidc.ErrorQueue{},
}
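// Log every request (and, when available, the response) so OIDC flows are
// easier to follow in test logs.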
_ = mock.AddMiddleware(func(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log.Info().Msgf("request: %+v", r)
h.ServeHTTP(w, r)
if r.Response != nil {
log.Info().Msgf("response: %+v", r.Response)
}
})
})
return &mock, nil
}
================================================
FILE: cmd/headscale/cli/nodes.go
================================================
package cli
import (
"context"
"fmt"
"net/netip"
"strconv"
"strings"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/samber/lo"
"github.com/spf13/cobra"
"google.golang.org/protobuf/types/known/timestamppb"
"tailscale.com/types/key"
)
func init() {
rootCmd.AddCommand(nodeCmd)
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
nodeCmd.AddCommand(listNodesCmd)
listNodeRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
nodeCmd.AddCommand(listNodeRoutesCmd)
registerNodeCmd.Flags().StringP("user", "u", "", "User")
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
mustMarkRequired(registerNodeCmd, "user", "key")
nodeCmd.AddCommand(registerNodeCmd)
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
expireNodeCmd.Flags().StringP("expiry", "e", "", "Set expire to (RFC3339 format, e.g. 2025-08-27T10:00:00Z), or leave empty to expire immediately.")
expireNodeCmd.Flags().BoolP("disable", "d", false, "Disable key expiry (node will never expire)")
mustMarkRequired(expireNodeCmd, "identifier")
nodeCmd.AddCommand(expireNodeCmd)
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
mustMarkRequired(renameNodeCmd, "identifier")
nodeCmd.AddCommand(renameNodeCmd)
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
mustMarkRequired(deleteNodeCmd, "identifier")
nodeCmd.AddCommand(deleteNodeCmd)
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
mustMarkRequired(tagCmd, "identifier")
tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
nodeCmd.AddCommand(tagCmd)
approveRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
mustMarkRequired(approveRoutesCmd, "identifier")
approveRoutesCmd.Flags().StringSliceP("routes", "r", []string{}, `List of routes that will be approved (comma-separated, e.g. "10.0.0.0/8,192.168.0.0/24" or empty string to remove all approved routes)`)
nodeCmd.AddCommand(approveRoutesCmd)
nodeCmd.AddCommand(backfillNodeIPsCmd)
}
var nodeCmd = &cobra.Command{
Use: "nodes",
Short: "Manage the nodes of Headscale",
Aliases: []string{"node"},
}
var registerNodeCmd = &cobra.Command{
Use: "register",
Short: "Registers a node to your network",
Deprecated: "use 'headscale auth register --auth-id --user ' instead",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
user, _ := cmd.Flags().GetString("user")
registrationID, _ := cmd.Flags().GetString("key")
request := &v1.RegisterNodeRequest{
Key: registrationID,
User: user,
}
response, err := client.RegisterNode(ctx, request)
if err != nil {
return fmt.Errorf("registering node: %w", err)
}
return printOutput(
cmd,
response.GetNode(),
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
}),
}
var listNodesCmd = &cobra.Command{
Use: "list",
Short: "List nodes",
Aliases: []string{"ls", "show"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
user, _ := cmd.Flags().GetString("user")
response, err := client.ListNodes(ctx, &v1.ListNodesRequest{User: user})
if err != nil {
return fmt.Errorf("listing nodes: %w", err)
}
return printListOutput(cmd, response.GetNodes(), func() error {
tableData, err := nodesToPtables(user, response.GetNodes())
if err != nil {
return fmt.Errorf("converting to table: %w", err)
}
return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
})
}),
}
var listNodeRoutesCmd = &cobra.Command{
Use: "list-routes",
Short: "List routes available on nodes",
Aliases: []string{"lsr", "routes"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
identifier, _ := cmd.Flags().GetUint64("identifier")
response, err := client.ListNodes(ctx, &v1.ListNodesRequest{})
if err != nil {
return fmt.Errorf("listing nodes: %w", err)
}
nodes := response.GetNodes()
if identifier != 0 {
for _, node := range response.GetNodes() {
if node.GetId() == identifier {
nodes = []*v1.Node{node}
break
}
}
}
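// Only keep nodes that have at least one available, approved or served route.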
nodes = lo.Filter(nodes, func(n *v1.Node, _ int) bool {
return len(n.GetSubnetRoutes()) > 0 || len(n.GetApprovedRoutes()) > 0 || len(n.GetAvailableRoutes()) > 0
})
return printListOutput(cmd, nodes, func() error {
return pterm.DefaultTable.WithHasHeader().WithData(nodeRoutesToPtables(nodes)).Render()
})
}),
}
var expireNodeCmd = &cobra.Command{
Use: "expire",
Short: "Expire (log out) a node in your network",
Long: `Expiring a node will keep the node in the database and force it to reauthenticate.
Use --disable to disable key expiry (node will never expire).`,
Aliases: []string{"logout", "exp", "e"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
identifier, _ := cmd.Flags().GetUint64("identifier")
disableExpiry, _ := cmd.Flags().GetBool("disable")
// Handle disable expiry - node will never expire.
if disableExpiry {
request := &v1.ExpireNodeRequest{
NodeId: identifier,
DisableExpiry: true,
}
response, err := client.ExpireNode(ctx, request)
if err != nil {
return fmt.Errorf("disabling node expiry: %w", err)
}
return printOutput(cmd, response.GetNode(), "Node expiry disabled")
}
expiry, _ := cmd.Flags().GetString("expiry")
now := time.Now()
expiryTime := now
if expiry != "" {
var err error
expiryTime, err = time.Parse(time.RFC3339, expiry)
if err != nil {
return fmt.Errorf("parsing expiry time: %w", err)
}
}
request := &v1.ExpireNodeRequest{
NodeId: identifier,
Expiry: timestamppb.New(expiryTime),
}
response, err := client.ExpireNode(ctx, request)
if err != nil {
return fmt.Errorf("expiring node: %w", err)
}
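// An expiry at or before now logs the node out immediately; a future
// expiry only changes when the key will expire.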
if now.Equal(expiryTime) || now.After(expiryTime) {
return printOutput(cmd, response.GetNode(), "Node expired")
}
return printOutput(cmd, response.GetNode(), "Node expiration updated")
}),
}
var renameNodeCmd = &cobra.Command{
Use: "rename NEW_NAME",
Short: "Renames a node in your network",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
identifier, _ := cmd.Flags().GetUint64("identifier")
newName := ""
if len(args) > 0 {
newName = args[0]
}
request := &v1.RenameNodeRequest{
NodeId: identifier,
NewName: newName,
}
response, err := client.RenameNode(ctx, request)
if err != nil {
return fmt.Errorf("renaming node: %w", err)
}
return printOutput(cmd, response.GetNode(), "Node renamed")
}),
}
var deleteNodeCmd = &cobra.Command{
Use: "delete",
Short: "Delete a node",
Aliases: []string{"del"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
identifier, _ := cmd.Flags().GetUint64("identifier")
getRequest := &v1.GetNodeRequest{
NodeId: identifier,
}
getResponse, err := client.GetNode(ctx, getRequest)
if err != nil {
return fmt.Errorf("getting node: %w", err)
}
deleteRequest := &v1.DeleteNodeRequest{
NodeId: identifier,
}
if !confirmAction(cmd, fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetNode().GetName(),
)) {
return printOutput(cmd, map[string]string{"Result": "Node not deleted"}, "Node not deleted")
}
_, err = client.DeleteNode(ctx, deleteRequest)
if err != nil {
return fmt.Errorf("deleting node: %w", err)
}
return printOutput(
cmd,
map[string]string{"Result": "Node deleted"},
"Node deleted",
)
}),
}
var backfillNodeIPsCmd = &cobra.Command{
Use: "backfillips",
Short: "Backfill IPs missing from nodes",
Long: `
Backfill IPs can be used to add/remove IPs from nodes
based on the current configuration of Headscale.
If there are nodes that do not have an IPv4 or IPv6 address
even though prefixes for both are configured in the config,
this command can be used to assign the missing addresses to
all affected nodes.
If you remove IPv4 or IPv6 prefixes from the config,
it can be run to remove the IPs that should no longer
be assigned to nodes.`,
RunE: func(cmd *cobra.Command, args []string) error {
if !confirmAction(cmd, "Are you sure that you want to assign/remove IPs to/from nodes?") {
return nil
}
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
if err != nil {
return fmt.Errorf("connecting to headscale: %w", err)
}
defer cancel()
defer conn.Close()
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: true})
if err != nil {
return fmt.Errorf("backfilling IPs: %w", err)
}
return printOutput(cmd, changes, "Node IPs backfilled successfully")
},
}
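// nodesToPtables converts a list of nodes into pterm table data, one row per
// node, rendering keys, tags, IP addresses, expiry and connection state as
// printable columns.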
func nodesToPtables(
currentUser string,
nodes []*v1.Node,
) (pterm.TableData, error) {
tableHeader := []string{
"ID",
"Hostname",
"Name",
"MachineKey",
"NodeKey",
"User",
"Tags",
"IP addresses",
"Ephemeral",
"Last seen",
"Expiration",
"Connected",
"Expired",
}
tableData := pterm.TableData{tableHeader}
for _, node := range nodes {
var ephemeral bool
if node.GetPreAuthKey() != nil && node.GetPreAuthKey().GetEphemeral() {
ephemeral = true
}
var (
lastSeen time.Time
lastSeenTime string
)
if node.GetLastSeen() != nil {
lastSeen = node.GetLastSeen().AsTime()
lastSeenTime = lastSeen.Format(HeadscaleDateTimeFormat)
}
var (
expiry time.Time
expiryTime string
)
if node.GetExpiry() != nil {
expiry = node.GetExpiry().AsTime()
expiryTime = expiry.Format(HeadscaleDateTimeFormat)
} else {
expiryTime = "N/A"
}
var machineKey key.MachinePublic
err := machineKey.UnmarshalText(
[]byte(node.GetMachineKey()),
)
if err != nil {
machineKey = key.MachinePublic{}
}
var nodeKey key.NodePublic
err = nodeKey.UnmarshalText(
[]byte(node.GetNodeKey()),
)
if err != nil {
return nil, err
}
var online string
if node.GetOnline() {
online = pterm.LightGreen("online")
} else {
online = pterm.LightRed("offline")
}
var expired string
if node.GetExpiry() != nil && node.GetExpiry().AsTime().Before(time.Now()) {
expired = pterm.LightRed("yes")
} else {
expired = pterm.LightGreen("no")
}
var tagsBuilder strings.Builder
for _, tag := range node.GetTags() {
tagsBuilder.WriteString("\n" + tag)
}
tags := strings.TrimLeft(tagsBuilder.String(), "\n")
var user string
if node.GetUser() != nil {
user = node.GetUser().GetName()
}
var ipBuilder strings.Builder
for _, addr := range node.GetIpAddresses() {
ip, err := netip.ParseAddr(addr)
if err == nil {
if ipBuilder.Len() > 0 {
ipBuilder.WriteString("\n")
}
ipBuilder.WriteString(ip.String())
}
}
ipAddresses := ipBuilder.String()
nodeData := []string{
strconv.FormatUint(node.GetId(), util.Base10),
node.GetName(),
node.GetGivenName(),
machineKey.ShortString(),
nodeKey.ShortString(),
user,
tags,
ipAddresses,
strconv.FormatBool(ephemeral),
lastSeenTime,
expiryTime,
online,
expired,
}
tableData = append(
tableData,
nodeData,
)
}
return tableData, nil
}
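// nodeRoutesToPtables converts a list of nodes into pterm table data listing
// each node's approved, available and serving (primary) routes.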
func nodeRoutesToPtables(
nodes []*v1.Node,
) pterm.TableData {
tableHeader := []string{
"ID",
"Hostname",
"Approved",
"Available",
"Serving (Primary)",
}
tableData := pterm.TableData{tableHeader}
for _, node := range nodes {
nodeData := []string{
strconv.FormatUint(node.GetId(), util.Base10),
node.GetGivenName(),
strings.Join(node.GetApprovedRoutes(), "\n"),
strings.Join(node.GetAvailableRoutes(), "\n"),
strings.Join(node.GetSubnetRoutes(), "\n"),
}
tableData = append(
tableData,
nodeData,
)
}
return tableData
}
var tagCmd = &cobra.Command{
Use: "tag",
Short: "Manage the tags of a node",
Aliases: []string{"tags", "t"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
identifier, _ := cmd.Flags().GetUint64("identifier")
tagsToSet, _ := cmd.Flags().GetStringSlice("tags")
// Sending tags to node
request := &v1.SetTagsRequest{
NodeId: identifier,
Tags: tagsToSet,
}
resp, err := client.SetTags(ctx, request)
if err != nil {
return fmt.Errorf("setting tags: %w", err)
}
return printOutput(cmd, resp.GetNode(), "Node updated")
}),
}
var approveRoutesCmd = &cobra.Command{
Use: "approve-routes",
Short: "Manage the approved routes of a node",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
identifier, _ := cmd.Flags().GetUint64("identifier")
routes, _ := cmd.Flags().GetStringSlice("routes")
// Sending routes to node
request := &v1.SetApprovedRoutesRequest{
NodeId: identifier,
Routes: routes,
}
resp, err := client.SetApprovedRoutes(ctx, request)
if err != nil {
return fmt.Errorf("setting approved routes: %w", err)
}
return printOutput(cmd, resp.GetNode(), "Node updated")
}),
}
================================================
FILE: cmd/headscale/cli/policy.go
================================================
package cli
import (
"errors"
"fmt"
"os"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/spf13/cobra"
"tailscale.com/types/views"
)
const (
bypassFlag = "bypass-grpc-and-access-database-directly" //nolint:gosec // not a credential
)
var errAborted = errors.New("command aborted by user")
// bypassDatabase loads the server config and opens the database directly,
// bypassing the gRPC server. The caller is responsible for closing the
// returned database handle.
func bypassDatabase() (*db.HSDatabase, error) {
cfg, err := types.LoadServerConfig()
if err != nil {
return nil, fmt.Errorf("loading config: %w", err)
}
d, err := db.NewHeadscaleDatabase(cfg, nil)
if err != nil {
return nil, fmt.Errorf("opening database: %w", err)
}
return d, nil
}
func init() {
rootCmd.AddCommand(policyCmd)
getPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC; does not require the server to be running")
policyCmd.AddCommand(getPolicy)
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC; does not require the server to be running")
mustMarkRequired(setPolicy, "file")
policyCmd.AddCommand(setPolicy)
checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
mustMarkRequired(checkPolicy, "file")
policyCmd.AddCommand(checkPolicy)
}
var policyCmd = &cobra.Command{
Use: "policy",
Short: "Manage the Headscale ACL Policy",
}
var getPolicy = &cobra.Command{
Use: "get",
Short: "Print the current ACL Policy",
Aliases: []string{"show", "view", "fetch"},
RunE: func(cmd *cobra.Command, args []string) error {
var policyData string
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
return errAborted
}
d, err := bypassDatabase()
if err != nil {
return err
}
defer d.Close()
pol, err := d.GetPolicy()
if err != nil {
return fmt.Errorf("loading policy from database: %w", err)
}
policyData = pol.Data
} else {
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
if err != nil {
return fmt.Errorf("connecting to headscale: %w", err)
}
defer cancel()
defer conn.Close()
response, err := client.GetPolicy(ctx, &v1.GetPolicyRequest{})
if err != nil {
return fmt.Errorf("loading ACL policy: %w", err)
}
policyData = response.GetPolicy()
}
// This does not pass output format as we don't support yaml, json or
// json-line output for this command. It is HuJSON already.
fmt.Println(policyData)
return nil
},
}
var setPolicy = &cobra.Command{
Use: "set",
Short: "Updates the ACL Policy",
Long: `
Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
This command only works when acl.policy_mode is set to "db"; the policy will be stored in the database.`,
Aliases: []string{"put", "update"},
RunE: func(cmd *cobra.Command, args []string) error {
policyPath, _ := cmd.Flags().GetString("file")
policyBytes, err := os.ReadFile(policyPath)
if err != nil {
return fmt.Errorf("reading policy file: %w", err)
}
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
if !confirmAction(cmd, "Do NOT run this command while a headscale instance is running. Are you sure headscale is not running?") {
return errAborted
}
d, err := bypassDatabase()
if err != nil {
return err
}
defer d.Close()
users, err := d.ListUsers()
if err != nil {
return fmt.Errorf("loading users for policy validation: %w", err)
}
_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
if err != nil {
return fmt.Errorf("parsing policy file: %w", err)
}
_, err = d.SetPolicy(string(policyBytes))
if err != nil {
return fmt.Errorf("setting ACL policy: %w", err)
}
} else {
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
if err != nil {
return fmt.Errorf("connecting to headscale: %w", err)
}
defer cancel()
defer conn.Close()
_, err = client.SetPolicy(ctx, request)
if err != nil {
return fmt.Errorf("setting ACL policy: %w", err)
}
}
fmt.Println("Policy updated.")
return nil
},
}
var checkPolicy = &cobra.Command{
Use: "check",
Short: "Check the Policy file for errors",
RunE: func(cmd *cobra.Command, args []string) error {
policyPath, _ := cmd.Flags().GetString("file")
policyBytes, err := os.ReadFile(policyPath)
if err != nil {
return fmt.Errorf("reading policy file: %w", err)
}
_, err = policy.NewPolicyManager(policyBytes, nil, views.Slice[types.NodeView]{})
if err != nil {
return fmt.Errorf("parsing policy file: %w", err)
}
fmt.Println("Policy is valid")
return nil
},
}
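// Typical invocations of the commands above (illustrative):
//
//	headscale policy get
//	headscale policy set --file ./policy.hujson
//	headscale policy check --file ./policy.hujson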
================================================
FILE: cmd/headscale/cli/preauthkeys.go
================================================
package cli
import (
"context"
"fmt"
"strconv"
"strings"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
)
const (
DefaultPreAuthKeyExpiry = "1h"
)
func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.AddCommand(listPreAuthKeys)
preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
preauthkeysCmd.AddCommand(deletePreAuthKeyCmd)
createPreAuthKeyCmd.PersistentFlags().
Bool("reusable", false, "Make the preauthkey reusable")
createPreAuthKeyCmd.PersistentFlags().
Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
createPreAuthKeyCmd.Flags().
StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
createPreAuthKeyCmd.Flags().
StringSlice("tags", []string{}, "Tags to automatically assign to node")
createPreAuthKeyCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
expirePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
deletePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
}
var preauthkeysCmd = &cobra.Command{
Use: "preauthkeys",
Short: "Handle the preauthkeys in Headscale",
Aliases: []string{"preauthkey", "authkey", "pre"},
}
var listPreAuthKeys = &cobra.Command{
Use: "list",
Short: "List all preauthkeys",
Aliases: []string{"ls", "show"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
response, err := client.ListPreAuthKeys(ctx, &v1.ListPreAuthKeysRequest{})
if err != nil {
return fmt.Errorf("listing preauthkeys: %w", err)
}
return printListOutput(cmd, response.GetPreAuthKeys(), func() error {
tableData := pterm.TableData{
{
"ID",
"Key/Prefix",
"Reusable",
"Ephemeral",
"Used",
"Expiration",
"Created",
"Owner",
},
}
for _, key := range response.GetPreAuthKeys() {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
}
var owner string
if len(key.GetAclTags()) > 0 {
owner = strings.Join(key.GetAclTags(), "\n")
} else if key.GetUser() != nil {
owner = key.GetUser().GetName()
} else {
owner = "-"
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10),
key.GetKey(),
strconv.FormatBool(key.GetReusable()),
strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
owner,
})
}
return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
})
}),
}
var createPreAuthKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new preauthkey",
Aliases: []string{"c", "new"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
user, _ := cmd.Flags().GetUint64("user")
reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
tags, _ := cmd.Flags().GetStringSlice("tags")
expiration, err := expirationFromFlag(cmd)
if err != nil {
return err
}
request := &v1.CreatePreAuthKeyRequest{
User: user,
Reusable: reusable,
Ephemeral: ephemeral,
AclTags: tags,
Expiration: expiration,
}
response, err := client.CreatePreAuthKey(ctx, request)
if err != nil {
return fmt.Errorf("creating preauthkey: %w", err)
}
return printOutput(cmd, response.GetPreAuthKey(), response.GetPreAuthKey().GetKey())
}),
}
var expirePreAuthKeyCmd = &cobra.Command{
Use: "expire",
Short: "Expire a preauthkey",
Aliases: []string{"revoke", "exp", "e"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
id, _ := cmd.Flags().GetUint64("id")
if id == 0 {
return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
}
request := &v1.ExpirePreAuthKeyRequest{
Id: id,
}
response, err := client.ExpirePreAuthKey(ctx, request)
if err != nil {
return fmt.Errorf("expiring preauthkey: %w", err)
}
return printOutput(cmd, response, "Key expired")
}),
}
var deletePreAuthKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete a preauthkey",
Aliases: []string{"del", "rm", "d"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
id, _ := cmd.Flags().GetUint64("id")
if id == 0 {
return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
}
request := &v1.DeletePreAuthKeyRequest{
Id: id,
}
response, err := client.DeletePreAuthKey(ctx, request)
if err != nil {
return fmt.Errorf("deleting preauthkey: %w", err)
}
return printOutput(cmd, response, "Key deleted")
}),
}
================================================
FILE: cmd/headscale/cli/pterm_style.go
================================================
package cli
import (
"time"
"github.com/pterm/pterm"
)
func ColourTime(date time.Time) string {
dateStr := date.Format(HeadscaleDateTimeFormat)
if date.After(time.Now()) {
dateStr = pterm.LightGreen(dateStr)
} else {
dateStr = pterm.LightRed(dateStr)
}
return dateStr
}
================================================
FILE: cmd/headscale/cli/root.go
================================================
package cli
import (
"os"
"runtime"
"slices"
"strings"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/tcnksm/go-latest"
)
var cfgFile string
func init() {
if len(os.Args) > 1 &&
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
return
}
if slices.Contains(os.Args, "policy") && slices.Contains(os.Args, "check") {
zerolog.SetGlobalLevel(zerolog.Disabled)
return
}
cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().
StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
rootCmd.PersistentFlags().
StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
rootCmd.PersistentFlags().
Bool("force", false, "Disable prompts and forces the execution")
// Re-enable usage output only for flag-parsing errors; runtime errors
// from RunE should never dump usage text.
rootCmd.SetFlagErrorFunc(func(cmd *cobra.Command, err error) error {
cmd.SilenceUsage = false
return err
})
}
func initConfig() {
if cfgFile == "" {
cfgFile = os.Getenv("HEADSCALE_CONFIG")
}
if cfgFile != "" {
err := types.LoadConfig(cfgFile, true)
if err != nil {
log.Fatal().Caller().Err(err).Msgf("error loading config file %s", cfgFile)
}
} else {
err := types.LoadConfig("", false)
if err != nil {
log.Fatal().Caller().Err(err).Msgf("error loading config")
}
}
machineOutput := hasMachineOutputFlag()
// If the user has requested a machine-readable format,
// disable logging so the output remains valid.
if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled)
}
logFormat := viper.GetString("log.format")
if logFormat == types.JSONLogFormat {
log.Logger = log.Output(os.Stdout)
}
disableUpdateCheck := viper.GetBool("disable_check_updates")
if !disableUpdateCheck && !machineOutput {
versionInfo := types.GetVersionInfo()
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
!versionInfo.Dirty {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
}
res, err := latest.Check(githubTag, versionInfo.Version)
if err == nil && res.Outdated {
//nolint
log.Warn().Msgf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
versionInfo.Version,
)
}
}
}
}
var prereleases = []string{"alpha", "beta", "rc", "dev"}
func isPreReleaseVersion(version string) bool {
for _, unstable := range prereleases {
if strings.Contains(version, unstable) {
return true
}
}
return false
}
// filterPreReleasesIfStable returns a function that filters out
// pre-release tags if the current version is stable.
// If the current version is a pre-release, it does not filter anything.
// versionFunc is a function that returns the current version string, it is
// a func for testability.
func filterPreReleasesIfStable(versionFunc func() string) func(string) bool {
return func(tag string) bool {
version := versionFunc()
// If we are on a pre-release version, then we do not filter anything
// as we want to recommend the user the latest pre-release.
if isPreReleaseVersion(version) {
return false
}
// If we are on a stable release, filter out pre-releases.
for _, ignore := range prereleases {
if strings.Contains(tag, ignore) {
return true
}
}
return false
}
}
var rootCmd = &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",
Long: `
headscale is an open source implementation of the Tailscale control server
https://github.com/juanfont/headscale`,
SilenceErrors: true,
SilenceUsage: true,
}
func Execute() {
cmd, err := rootCmd.ExecuteC()
if err != nil {
outputFormat, _ := cmd.Flags().GetString("output")
printError(err, outputFormat)
os.Exit(1)
}
}
================================================
FILE: cmd/headscale/cli/root_test.go
================================================
package cli
import (
"testing"
)
func TestFilterPreReleasesIfStable(t *testing.T) {
tests := []struct {
name string
currentVersion string
tag string
expectedFilter bool
description string
}{
{
name: "stable version filters alpha tag",
currentVersion: "0.23.0",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "When on stable release, alpha tags should be filtered",
},
{
name: "stable version filters beta tag",
currentVersion: "0.23.0",
tag: "v0.24.0-beta.2",
expectedFilter: true,
description: "When on stable release, beta tags should be filtered",
},
{
name: "stable version filters rc tag",
currentVersion: "0.23.0",
tag: "v0.24.0-rc.1",
expectedFilter: true,
description: "When on stable release, rc tags should be filtered",
},
{
name: "stable version allows stable tag",
currentVersion: "0.23.0",
tag: "v0.24.0",
expectedFilter: false,
description: "When on stable release, stable tags should not be filtered",
},
{
name: "alpha version allows alpha tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-alpha.2",
expectedFilter: false,
description: "When on alpha release, alpha tags should not be filtered",
},
{
name: "alpha version allows beta tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on alpha release, beta tags should not be filtered",
},
{
name: "alpha version allows rc tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on alpha release, rc tags should not be filtered",
},
{
name: "alpha version allows stable tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on alpha release, stable tags should not be filtered",
},
{
name: "beta version allows alpha tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on beta release, alpha tags should not be filtered",
},
{
name: "beta version allows beta tag",
currentVersion: "0.23.0-beta.2",
tag: "v0.24.0-beta.3",
expectedFilter: false,
description: "When on beta release, beta tags should not be filtered",
},
{
name: "beta version allows rc tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on beta release, rc tags should not be filtered",
},
{
name: "beta version allows stable tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on beta release, stable tags should not be filtered",
},
{
name: "rc version allows alpha tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on rc release, alpha tags should not be filtered",
},
{
name: "rc version allows beta tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on rc release, beta tags should not be filtered",
},
{
name: "rc version allows rc tag",
currentVersion: "0.23.0-rc.2",
tag: "v0.24.0-rc.3",
expectedFilter: false,
description: "When on rc release, rc tags should not be filtered",
},
{
name: "rc version allows stable tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on rc release, stable tags should not be filtered",
},
{
name: "stable version with patch filters alpha",
currentVersion: "0.23.1",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "Stable version with patch number should filter alpha tags",
},
{
name: "stable version with patch allows stable",
currentVersion: "0.23.1",
tag: "v0.24.0",
expectedFilter: false,
description: "Stable version with patch number should allow stable tags",
},
{
name: "tag with alpha substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-alpha.1",
expectedFilter: true,
description: "Tags with alpha in version string should be filtered on stable",
},
{
name: "tag with beta substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-beta.1",
expectedFilter: true,
description: "Tags with beta in version string should be filtered on stable",
},
{
name: "tag with rc substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-rc.1",
expectedFilter: true,
description: "Tags with rc in version string should be filtered on stable",
},
{
name: "empty tag on stable version",
currentVersion: "0.23.0",
tag: "",
expectedFilter: false,
description: "Empty tags should not be filtered",
},
{
name: "dev version allows all tags",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "Dev versions should not filter any tags (pre-release allows all)",
},
{
name: "stable version filters dev tag",
currentVersion: "0.23.0",
tag: "v0.24.0-dev",
expectedFilter: true,
description: "When on stable release, dev tags should be filtered",
},
{
name: "dev version allows dev tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-dev.1",
expectedFilter: false,
description: "When on dev release, dev tags should not be filtered",
},
{
name: "dev version allows stable tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0",
expectedFilter: false,
description: "When on dev release, stable tags should not be filtered",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := filterPreReleasesIfStable(func() string { return tt.currentVersion })(tt.tag)
if result != tt.expectedFilter {
t.Errorf("%s: got %v, want %v\nDescription: %s\nCurrent version: %s, Tag: %s",
tt.name,
result,
tt.expectedFilter,
tt.description,
tt.currentVersion,
tt.tag,
)
}
})
}
}
func TestIsPreReleaseVersion(t *testing.T) {
tests := []struct {
name string
version string
expected bool
description string
}{
{
name: "stable version",
version: "0.23.0",
expected: false,
description: "Stable version should not be pre-release",
},
{
name: "alpha version",
version: "0.23.0-alpha.1",
expected: true,
description: "Alpha version should be pre-release",
},
{
name: "beta version",
version: "0.23.0-beta.1",
expected: true,
description: "Beta version should be pre-release",
},
{
name: "rc version",
version: "0.23.0-rc.1",
expected: true,
description: "RC version should be pre-release",
},
{
name: "version with alpha substring",
version: "0.23.0-alphabetical",
expected: true,
description: "Version containing 'alpha' should be pre-release",
},
{
name: "version with beta substring",
version: "0.23.0-betamax",
expected: true,
description: "Version containing 'beta' should be pre-release",
},
{
name: "dev version",
version: "0.23.0-dev",
expected: true,
description: "Dev version should be pre-release",
},
{
name: "empty version",
version: "",
expected: false,
description: "Empty version should not be pre-release",
},
{
name: "version with patch number",
version: "0.23.1",
expected: false,
description: "Stable version with patch should not be pre-release",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isPreReleaseVersion(tt.version)
if result != tt.expected {
t.Errorf("%s: got %v, want %v\nDescription: %s\nVersion: %s",
tt.name,
result,
tt.expected,
tt.description,
tt.version,
)
}
})
}
}
================================================
FILE: cmd/headscale/cli/serve.go
================================================
package cli
import (
"errors"
"fmt"
"net/http"
"github.com/spf13/cobra"
"github.com/tailscale/squibble"
)
func init() {
rootCmd.AddCommand(serveCmd)
}
var serveCmd = &cobra.Command{
Use: "serve",
Short: "Launches the headscale server",
RunE: func(cmd *cobra.Command, args []string) error {
app, err := newHeadscaleServerWithConfig()
if err != nil {
if squibbleErr, ok := errors.AsType[squibble.ValidationError](err); ok {
fmt.Printf("SQLite schema failed to validate:\n")
fmt.Println(squibbleErr.Diff)
}
return fmt.Errorf("initializing: %w", err)
}
err = app.Serve()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
return fmt.Errorf("headscale ran into an error and had to shut down: %w", err)
}
return nil
},
}
================================================
FILE: cmd/headscale/cli/users.go
================================================
package cli
import (
"context"
"errors"
"fmt"
"net/url"
"strconv"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
// CLI user errors.
var (
errFlagRequired = errors.New("--name or --identifier flag is required")
errMultipleUsersMatch = errors.New("query did not match exactly one user, specify an ID")
)
func usernameAndIDFlag(cmd *cobra.Command) {
cmd.Flags().Int64P("identifier", "i", -1, "User identifier (ID)")
cmd.Flags().StringP("name", "n", "", "Username")
}
// usernameAndIDFromFlag returns the username and ID from the flags of the command.
func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string, error) {
username, _ := cmd.Flags().GetString("name")
identifier, _ := cmd.Flags().GetInt64("identifier")
if username == "" && identifier < 0 {
return 0, "", errFlagRequired
}
// Normalise unset/negative identifiers to 0 so the uint64
// conversion does not produce a bogus large value.
if identifier < 0 {
identifier = 0
}
return uint64(identifier), username, nil //nolint:gosec // identifier is clamped to >= 0 above
}
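// For illustration: with neither flag set this returns errFlagRequired;
// with only --name alice it returns (0, "alice", nil), where the zero ID
// is treated as "unset" by the callers that build ListUsers filters.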
func init() {
rootCmd.AddCommand(userCmd)
userCmd.AddCommand(createUserCmd)
createUserCmd.Flags().StringP("display-name", "d", "", "Display name")
createUserCmd.Flags().StringP("email", "e", "", "Email")
createUserCmd.Flags().StringP("picture-url", "p", "", "Profile picture URL")
userCmd.AddCommand(listUsersCmd)
usernameAndIDFlag(listUsersCmd)
listUsersCmd.Flags().StringP("email", "e", "", "Email")
userCmd.AddCommand(destroyUserCmd)
usernameAndIDFlag(destroyUserCmd)
userCmd.AddCommand(renameUserCmd)
usernameAndIDFlag(renameUserCmd)
renameUserCmd.Flags().StringP("new-name", "r", "", "New username")
mustMarkRequired(renameUserCmd, "new-name")
}
var userCmd = &cobra.Command{
Use: "users",
Short: "Manage the users of Headscale",
Aliases: []string{"user"},
}
var createUserCmd = &cobra.Command{
Use: "create NAME",
Short: "Creates a new user",
Aliases: []string{"c", "new"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return errMissingParameter
}
return nil
},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
userName := args[0]
log.Trace().Interface(zf.Client, client).Msg("obtained gRPC client")
request := &v1.CreateUserRequest{Name: userName}
if displayName, _ := cmd.Flags().GetString("display-name"); displayName != "" {
request.DisplayName = displayName
}
if email, _ := cmd.Flags().GetString("email"); email != "" {
request.Email = email
}
if pictureURL, _ := cmd.Flags().GetString("picture-url"); pictureURL != "" {
if _, err := url.Parse(pictureURL); err != nil { //nolint:noinlineerr
return fmt.Errorf("invalid picture URL: %w", err)
}
request.PictureUrl = pictureURL
}
log.Trace().Interface(zf.Request, request).Msg("sending CreateUser request")
response, err := client.CreateUser(ctx, request)
if err != nil {
return fmt.Errorf("creating user: %w", err)
}
return printOutput(cmd, response.GetUser(), "User created")
}),
}
var destroyUserCmd = &cobra.Command{
Use: "destroy --identifier ID or --name NAME",
Short: "Destroys a user",
Aliases: []string{"delete"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
id, username, err := usernameAndIDFromFlag(cmd)
if err != nil {
return err
}
request := &v1.ListUsersRequest{
Name: username,
Id: id,
}
users, err := client.ListUsers(ctx, request)
if err != nil {
return fmt.Errorf("listing users: %w", err)
}
if len(users.GetUsers()) != 1 {
return errMultipleUsersMatch
}
user := users.GetUsers()[0]
if !confirmAction(cmd, fmt.Sprintf(
"Do you want to remove the user %q (%d) and any associated preauthkeys?",
user.GetName(), user.GetId(),
)) {
return printOutput(cmd, map[string]string{"Result": "User not destroyed"}, "User not destroyed")
}
deleteRequest := &v1.DeleteUserRequest{Id: user.GetId()}
response, err := client.DeleteUser(ctx, deleteRequest)
if err != nil {
return fmt.Errorf("destroying user: %w", err)
}
return printOutput(cmd, response, "User destroyed")
}),
}
var listUsersCmd = &cobra.Command{
Use: "list",
Short: "List all the users",
Aliases: []string{"ls", "show"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
request := &v1.ListUsersRequest{}
id, _ := cmd.Flags().GetInt64("identifier")
username, _ := cmd.Flags().GetString("name")
email, _ := cmd.Flags().GetString("email")
// filter by one param at most
switch {
case id > 0:
request.Id = uint64(id)
case username != "":
request.Name = username
case email != "":
request.Email = email
}
response, err := client.ListUsers(ctx, request)
if err != nil {
return fmt.Errorf("listing users: %w", err)
}
return printListOutput(cmd, response.GetUsers(), func() error {
tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
strconv.FormatUint(user.GetId(), util.Base10),
user.GetDisplayName(),
user.GetName(),
user.GetEmail(),
user.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
},
)
}
return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
})
}),
}
var renameUserCmd = &cobra.Command{
Use: "rename",
Short: "Renames a user",
Aliases: []string{"mv"},
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
id, username, err := usernameAndIDFromFlag(cmd)
if err != nil {
return err
}
listReq := &v1.ListUsersRequest{
Name: username,
Id: id,
}
users, err := client.ListUsers(ctx, listReq)
if err != nil {
return fmt.Errorf("listing users: %w", err)
}
if len(users.GetUsers()) != 1 {
return errMultipleUsersMatch
}
user := users.GetUsers()[0]
newName, _ := cmd.Flags().GetString("new-name")
renameReq := &v1.RenameUserRequest{
// Use the ID resolved by the lookup so renaming by --name alone
// does not send a zero OldId.
OldId: user.GetId(),
NewName: newName,
}
response, err := client.RenameUser(ctx, renameReq)
if err != nil {
return fmt.Errorf("renaming user: %w", err)
}
return printOutput(cmd, response.GetUser(), "User renamed")
}),
}
================================================
FILE: cmd/headscale/cli/utils.go
================================================
package cli
import (
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"os"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/prometheus/common/model"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/protobuf/types/known/timestamppb"
"gopkg.in/yaml.v3"
)
const (
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
SocketWritePermissions = 0o666
outputFormatJSON = "json"
outputFormatJSONLine = "json-line"
outputFormatYAML = "yaml"
)
var (
errAPIKeyNotSet = errors.New("HEADSCALE_CLI_API_KEY environment variable needs to be set")
errMissingParameter = errors.New("missing parameters")
)
// mustMarkRequired marks the named flags as required on cmd, panicking
// if any name does not match a registered flag. This is only called
// from init() where a failure indicates a programming error.
func mustMarkRequired(cmd *cobra.Command, names ...string) {
for _, n := range names {
err := cmd.MarkFlagRequired(n)
if err != nil {
panic(fmt.Sprintf("marking flag %q required on %q: %v", n, cmd.Name(), err))
}
}
}
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
cfg, err := types.LoadServerConfig()
if err != nil {
return nil, fmt.Errorf(
"loading configuration: %w",
err,
)
}
app, err := hscontrol.NewHeadscale(cfg)
if err != nil {
return nil, fmt.Errorf("creating new headscale: %w", err)
}
return app, nil
}
// grpcRunE wraps a cobra RunE func, injecting a ready gRPC client and
// context. Connection lifecycle is managed by the wrapper — callers
// never see the underlying conn or cancel func.
func grpcRunE(
fn func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error,
) func(*cobra.Command, []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
if err != nil {
return fmt.Errorf("connecting to headscale: %w", err)
}
defer cancel()
defer conn.Close()
return fn(ctx, client, cmd, args)
}
}
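// Illustrative usage (the command name is hypothetical; the RPC is real):
//
//	var whoamiCmd = &cobra.Command{
//		Use: "whoami",
//		RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
//			resp, err := client.ListUsers(ctx, &v1.ListUsersRequest{})
//			if err != nil {
//				return err
//			}
//			return printOutput(cmd, resp.GetUsers(), "")
//		}),
//	}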
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc, error) {
cfg, err := types.LoadCLIConfig()
if err != nil {
return nil, nil, nil, nil, fmt.Errorf("loading configuration: %w", err)
}
log.Debug().
Dur("timeout", cfg.CLI.Timeout).
Msgf("Setting timeout")
ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout)
grpcOptions := []grpc.DialOption{
grpc.WithBlock(), //nolint:staticcheck // SA1019: deprecated but supported in 1.x
}
address := cfg.CLI.Address
// If the address is not set, we assume that we are on the server hosting hscontrol.
if address == "" {
log.Debug().
Str("socket", cfg.UnixSocket).
Msgf("HEADSCALE_CLI_ADDRESS environment is not set, connecting to unix socket.")
address = cfg.UnixSocket
// Try to give the user better feedback if we cannot write to the headscale
// socket. Note: os.OpenFile on a Unix domain socket returns ENXIO on
// Linux which is expected — only permission errors are actionable here.
// The actual gRPC connection uses net.Dial which handles sockets properly.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
if err != nil {
if os.IsPermission(err) {
cancel()
return nil, nil, nil, nil, fmt.Errorf(
"unable to read/write to headscale socket %q, do you have the correct permissions? %w",
cfg.UnixSocket,
err,
)
}
} else {
socket.Close()
}
grpcOptions = append(
grpcOptions,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer),
)
} else {
// If we are not connecting to a local server, require an API key for authentication
apiKey := cfg.CLI.APIKey
if apiKey == "" {
cancel()
return nil, nil, nil, nil, errAPIKeyNotSet
}
grpcOptions = append(grpcOptions,
grpc.WithPerRPCCredentials(tokenAuth{
token: apiKey,
}),
)
if cfg.CLI.Insecure {
tlsConfig := &tls.Config{
// turn off gosec as we are intentionally setting
// insecure.
//nolint:gosec
InsecureSkipVerify: true,
}
grpcOptions = append(grpcOptions,
grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)),
)
} else {
grpcOptions = append(grpcOptions,
grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")),
)
}
}
log.Trace().Caller().Str(zf.Address, address).Msg("connecting via gRPC")
conn, err := grpc.DialContext(ctx, address, grpcOptions...) //nolint:staticcheck // SA1019: deprecated but supported in 1.x
if err != nil {
cancel()
return nil, nil, nil, nil, fmt.Errorf("connecting to %s: %w", address, err)
}
client := v1.NewHeadscaleServiceClient(conn)
return ctx, client, conn, cancel, nil
}
// formatOutput serialises result into the requested format. For the
// default (empty) format the human-readable override string is returned.
func formatOutput(result any, override string, outputFormat string) (string, error) {
switch outputFormat {
case outputFormatJSON:
b, err := json.MarshalIndent(result, "", "\t")
if err != nil {
return "", fmt.Errorf("marshalling JSON output: %w", err)
}
return string(b), nil
case outputFormatJSONLine:
b, err := json.Marshal(result)
if err != nil {
return "", fmt.Errorf("marshalling JSON-line output: %w", err)
}
return string(b), nil
case outputFormatYAML:
b, err := yaml.Marshal(result)
if err != nil {
return "", fmt.Errorf("marshalling YAML output: %w", err)
}
return string(b), nil
default:
return override, nil
}
}
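// For example (illustrative): formatOutput(user, "User created", "")
// returns "User created", while formatOutput(user, "User created",
// outputFormatJSON) returns the tab-indented JSON encoding of user.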
// printOutput formats result and writes it to stdout. It reads the --output
// flag from cmd to decide the serialisation format.
func printOutput(cmd *cobra.Command, result any, override string) error {
format, _ := cmd.Flags().GetString("output")
out, err := formatOutput(result, override, format)
if err != nil {
return err
}
fmt.Println(out)
return nil
}
// expirationFromFlag parses the --expiration flag as a Prometheus-style
// duration (e.g. "90d", "1h") and returns an absolute timestamp.
func expirationFromFlag(cmd *cobra.Command) (*timestamppb.Timestamp, error) {
durationStr, _ := cmd.Flags().GetString("expiration")
duration, err := model.ParseDuration(durationStr)
if err != nil {
return nil, fmt.Errorf("parsing duration: %w", err)
}
return timestamppb.New(time.Now().UTC().Add(time.Duration(duration))), nil
}
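// Note (illustrative): Prometheus-style durations accept units the
// standard library does not, so model.ParseDuration("90d") succeeds
// where time.ParseDuration("90d") would return an error.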
// confirmAction returns true when the user confirms a prompt, or when
// --force is set. Callers decide what to do when it returns false.
func confirmAction(cmd *cobra.Command, prompt string) bool {
force, _ := cmd.Flags().GetBool("force")
if force {
return true
}
return util.YesNo(prompt)
}
// printListOutput checks the --output flag: when a machine-readable format is
// requested it serialises data as JSON/YAML; otherwise it calls renderTable
// to produce the human-readable pterm table.
func printListOutput(
cmd *cobra.Command,
data any,
renderTable func() error,
) error {
format, _ := cmd.Flags().GetString("output")
if format != "" {
return printOutput(cmd, data, "")
}
return renderTable()
}
// printError writes err to stderr, formatting it as JSON/YAML when the
// --output flag requests machine-readable output. Used exclusively by
// Execute() so that every error surfaces in the format the caller asked for.
func printError(err error, outputFormat string) {
type errOutput struct {
Error string `json:"error"`
}
e := errOutput{Error: err.Error()}
var formatted []byte
switch outputFormat {
case outputFormatJSON:
formatted, _ = json.MarshalIndent(e, "", "\t") //nolint:errchkjson // errOutput contains only a string field
case outputFormatJSONLine:
formatted, _ = json.Marshal(e) //nolint:errchkjson // errOutput contains only a string field
case outputFormatYAML:
formatted, _ = yaml.Marshal(e)
default:
fmt.Fprintf(os.Stderr, "Error: %s\n", err)
return
}
fmt.Fprintf(os.Stderr, "%s\n", formatted)
}
func hasMachineOutputFlag() bool {
for _, arg := range os.Args {
if arg == outputFormatJSON || arg == outputFormatJSONLine || arg == outputFormatYAML {
return true
}
}
return false
}
type tokenAuth struct {
token string
}
// GetRequestMetadata maps the API token into the authorization header of each request.
func (t tokenAuth) GetRequestMetadata(
ctx context.Context,
in ...string,
) (map[string]string, error) {
return map[string]string{
"authorization": "Bearer " + t.token,
}, nil
}
func (tokenAuth) RequireTransportSecurity() bool {
return true
}
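// Returning true means gRPC refuses to attach the bearer token over
// plaintext connections. tokenAuth is only installed on the remote TLS
// path in newHeadscaleCLIWithConfig; the local unix-socket path uses
// insecure credentials and no token.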
================================================
FILE: cmd/headscale/cli/version.go
================================================
package cli
import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(versionCmd)
versionCmd.Flags().StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
}
var versionCmd = &cobra.Command{
Use: "version",
Short: "Print the version.",
Long: "The version of headscale.",
RunE: func(cmd *cobra.Command, args []string) error {
info := types.GetVersionInfo()
return printOutput(cmd, info, info.String())
},
}
================================================
FILE: cmd/headscale/headscale.go
================================================
package main
import (
"os"
"time"
"github.com/jagottsicher/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
)
func main() {
var colors bool
switch l := termcolor.SupportLevel(os.Stderr); l {
case termcolor.Level16M:
colors = true
case termcolor.Level256:
colors = true
case termcolor.LevelBasic:
colors = true
case termcolor.LevelNone:
colors = false
default:
// Unknown support level: disable color.
colors = false
}
// Adhere to no-color.org manifesto of allowing users to
// turn off color in cli/services
if _, noColorIsSet := os.LookupEnv("NO_COLOR"); noColorIsSet {
colors = false
}
zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
log.Logger = log.Output(zerolog.ConsoleWriter{
Out: os.Stderr,
TimeFormat: time.RFC3339,
NoColor: !colors,
})
cli.Execute()
}
================================================
FILE: cmd/headscale/headscale_test.go
================================================
package main
import (
"io/fs"
"os"
"path/filepath"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConfigFileLoading(t *testing.T) {
tmpDir := t.TempDir()
path, err := os.Getwd()
require.NoError(t, err)
cfgFile := filepath.Join(tmpDir, "config.yaml")
// Symlink the example config file
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
cfgFile,
)
require.NoError(t, err)
// Load example config, it should load without validation errors
err = types.LoadConfig(cfgFile, true)
require.NoError(t, err)
// Test that config file was interpreted correctly
assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
assert.Equal(t, "sqlite", viper.GetString("database.type"))
assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
assert.False(t, viper.GetBool("logtail.enabled"))
}
func TestConfigLoading(t *testing.T) {
tmpDir := t.TempDir()
path, err := os.Getwd()
require.NoError(t, err)
// Symlink the example config file
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
filepath.Join(tmpDir, "config.yaml"),
)
require.NoError(t, err)
// Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false)
require.NoError(t, err)
// Test that config file was interpreted correctly
assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
assert.Equal(t, "sqlite", viper.GetString("database.type"))
assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
assert.False(t, viper.GetBool("logtail.enabled"))
assert.False(t, viper.GetBool("randomize_client_port"))
}
================================================
FILE: cmd/hi/README.md
================================================
# hi
hi (headscale integration runner) is an entirely "vibe coded" wrapper around our
[integration test suite](../integration). It essentially runs the docker
commands for you, with the added benefit of extracting resources such as logs
and databases.
================================================
FILE: cmd/hi/cleanup.go
================================================
package main
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"time"
"github.com/cenkalti/backoff/v5"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/client"
"github.com/docker/docker/errdefs"
)
// cleanupBeforeTest performs cleanup operations before running tests.
// Only removes stale (stopped/exited) test containers to avoid interfering with concurrent test runs.
func cleanupBeforeTest(ctx context.Context) error {
err := cleanupStaleTestContainers(ctx)
if err != nil {
return fmt.Errorf("cleaning stale test containers: %w", err)
}
if err := pruneDockerNetworks(ctx); err != nil { //nolint:noinlineerr
return fmt.Errorf("pruning networks: %w", err)
}
return nil
}
// cleanupAfterTest removes the test container and all associated integration test containers for the run.
func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runID string) error {
// Remove the main test container
err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
Force: true,
})
if err != nil {
return fmt.Errorf("removing test container: %w", err)
}
// Clean up integration test containers for this run only
if runID != "" {
err := killTestContainersByRunID(ctx, runID)
if err != nil {
return fmt.Errorf("cleaning up containers for run %s: %w", runID, err)
}
}
return nil
}
// killTestContainers terminates and removes all test containers.
func killTestContainers(ctx context.Context) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
containers, err := cli.ContainerList(ctx, container.ListOptions{
All: true,
})
if err != nil {
return fmt.Errorf("listing containers: %w", err)
}
removed := 0
for _, cont := range containers {
shouldRemove := false
for _, name := range cont.Names {
if strings.Contains(name, "headscale-test-suite") ||
strings.Contains(name, "hs-") ||
strings.Contains(name, "ts-") ||
strings.Contains(name, "derp-") {
shouldRemove = true
break
}
}
if shouldRemove {
// First kill the container if it's running
if cont.State == "running" {
_ = cli.ContainerKill(ctx, cont.ID, "KILL")
}
// Then remove the container with retry logic
if removeContainerWithRetry(ctx, cli, cont.ID) {
removed++
}
}
}
if removed > 0 {
fmt.Printf("Removed %d test containers\n", removed)
} else {
fmt.Println("No test containers found to remove")
}
return nil
}
// killTestContainersByRunID terminates and removes all test containers for a specific run ID.
// This function filters containers by the hi.run-id label to only affect containers
// belonging to the specified test run, leaving other concurrent test runs untouched.
func killTestContainersByRunID(ctx context.Context, runID string) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
// Filter containers by hi.run-id label
containers, err := cli.ContainerList(ctx, container.ListOptions{
All: true,
Filters: filters.NewArgs(
filters.Arg("label", "hi.run-id="+runID),
),
})
if err != nil {
return fmt.Errorf("listing containers for run %s: %w", runID, err)
}
removed := 0
for _, cont := range containers {
// Kill the container if it's running
if cont.State == "running" {
_ = cli.ContainerKill(ctx, cont.ID, "KILL")
}
// Remove the container with retry logic
if removeContainerWithRetry(ctx, cli, cont.ID) {
removed++
}
}
if removed > 0 {
fmt.Printf("Removed %d containers for run ID %s\n", removed, runID)
}
return nil
}
// cleanupStaleTestContainers removes stopped/exited test containers without affecting running tests.
// This is useful for cleaning up leftover containers from previous crashed or interrupted test runs
// without interfering with currently running concurrent tests.
func cleanupStaleTestContainers(ctx context.Context) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
// Only get stopped/exited containers
containers, err := cli.ContainerList(ctx, container.ListOptions{
All: true,
Filters: filters.NewArgs(
filters.Arg("status", "exited"),
filters.Arg("status", "dead"),
),
})
if err != nil {
return fmt.Errorf("listing stopped containers: %w", err)
}
removed := 0
for _, cont := range containers {
// Only remove containers that look like test containers
shouldRemove := false
for _, name := range cont.Names {
if strings.Contains(name, "headscale-test-suite") ||
strings.Contains(name, "hs-") ||
strings.Contains(name, "ts-") ||
strings.Contains(name, "derp-") {
shouldRemove = true
break
}
}
if shouldRemove {
if removeContainerWithRetry(ctx, cli, cont.ID) {
removed++
}
}
}
if removed > 0 {
fmt.Printf("Removed %d stale test containers\n", removed)
}
return nil
}
const (
containerRemoveInitialInterval = 100 * time.Millisecond
containerRemoveMaxElapsedTime = 2 * time.Second
)
// removeContainerWithRetry attempts to remove a container with exponential backoff retry logic.
func removeContainerWithRetry(ctx context.Context, cli *client.Client, containerID string) bool {
expBackoff := backoff.NewExponentialBackOff()
expBackoff.InitialInterval = containerRemoveInitialInterval
_, err := backoff.Retry(ctx, func() (struct{}, error) {
err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
Force: true,
})
if err != nil {
return struct{}{}, err
}
return struct{}{}, nil
}, backoff.WithBackOff(expBackoff), backoff.WithMaxElapsedTime(containerRemoveMaxElapsedTime))
return err == nil
}
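// With a 100ms initial interval and a 2s cap on elapsed time, a removal
// that keeps failing is retried a few times with exponentially growing
// waits before the function gives up and returns false; the exact count
// depends on the backoff library's default multiplier and jitter.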
// pruneDockerNetworks removes unused Docker networks.
func pruneDockerNetworks(ctx context.Context) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
report, err := cli.NetworksPrune(ctx, filters.Args{})
if err != nil {
return fmt.Errorf("pruning networks: %w", err)
}
if len(report.NetworksDeleted) > 0 {
fmt.Printf("Removed %d unused networks\n", len(report.NetworksDeleted))
} else {
fmt.Println("No unused networks found to remove")
}
return nil
}
// cleanOldImages removes test-related and old dangling Docker images.
func cleanOldImages(ctx context.Context) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
images, err := cli.ImageList(ctx, image.ListOptions{
All: true,
})
if err != nil {
return fmt.Errorf("listing images: %w", err)
}
removed := 0
for _, img := range images {
shouldRemove := false
for _, tag := range img.RepoTags {
if strings.Contains(tag, "hs-") ||
strings.Contains(tag, "headscale-integration") ||
strings.Contains(tag, "tailscale") {
shouldRemove = true
break
}
}
if len(img.RepoTags) == 0 && time.Unix(img.Created, 0).Before(time.Now().Add(-7*24*time.Hour)) {
shouldRemove = true
}
if shouldRemove {
_, err := cli.ImageRemove(ctx, img.ID, image.RemoveOptions{
Force: true,
})
if err == nil {
removed++
}
}
}
if removed > 0 {
fmt.Printf("Removed %d test images\n", removed)
} else {
fmt.Println("No test images found to remove")
}
return nil
}
// cleanCacheVolume removes the Docker volume used for Go module cache.
func cleanCacheVolume(ctx context.Context) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
volumeName := "hs-integration-go-cache"
err = cli.VolumeRemove(ctx, volumeName, true)
if err != nil {
if errdefs.IsNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
fmt.Printf("Go module cache volume not found: %s\n", volumeName)
} else if errdefs.IsConflict(err) { //nolint:staticcheck // SA1019: deprecated but functional
fmt.Printf("Go module cache volume is in use and cannot be removed: %s\n", volumeName)
} else {
fmt.Printf("Failed to remove Go module cache volume %s: %v\n", volumeName, err)
}
} else {
fmt.Printf("Removed Go module cache volume: %s\n", volumeName)
}
return nil
}
// cleanupSuccessfulTestArtifacts removes artifacts from successful test runs to save disk space.
// This function removes large artifacts that are mainly useful for debugging failures:
// - Database dumps (.db files)
// - Profile data (pprof directories)
// - MapResponse data (mapresponses directories)
// - Prometheus metrics files
//
// It preserves:
// - Log files (.log) which are small and useful for verification.
func cleanupSuccessfulTestArtifacts(logsDir string, verbose bool) error {
entries, err := os.ReadDir(logsDir)
if err != nil {
return fmt.Errorf("reading logs directory: %w", err)
}
var (
removedFiles, removedDirs int
totalSize int64
)
for _, entry := range entries {
name := entry.Name()
fullPath := filepath.Join(logsDir, name)
if entry.IsDir() {
// Remove pprof and mapresponses directories (typically large)
// These directories contain artifacts from all containers in the test run
if name == "pprof" || name == "mapresponses" {
size, sizeErr := getDirSize(fullPath)
if sizeErr == nil {
totalSize += size
}
err := os.RemoveAll(fullPath)
if err != nil {
if verbose {
log.Printf("Warning: failed to remove directory %s: %v", name, err)
}
} else {
removedDirs++
if verbose {
log.Printf("Removed directory: %s/", name)
}
}
}
} else {
// Only process test-related files (headscale and tailscale)
if !strings.HasPrefix(name, "hs-") && !strings.HasPrefix(name, "ts-") {
continue
}
// Remove database, metrics, and status files, but keep logs
shouldRemove := strings.HasSuffix(name, ".db") ||
strings.HasSuffix(name, "_metrics.txt") ||
strings.HasSuffix(name, "_status.json")
if shouldRemove {
info, infoErr := entry.Info()
if infoErr == nil {
totalSize += info.Size()
}
err := os.Remove(fullPath)
if err != nil {
if verbose {
log.Printf("Warning: failed to remove file %s: %v", name, err)
}
} else {
removedFiles++
if verbose {
log.Printf("Removed file: %s", name)
}
}
}
}
}
if removedFiles > 0 || removedDirs > 0 {
const bytesPerMB = 1024 * 1024
log.Printf("Cleaned up %d files and %d directories (freed ~%.2f MB)",
removedFiles, removedDirs, float64(totalSize)/bytesPerMB)
}
return nil
}
// getDirSize calculates the total size of a directory.
func getDirSize(path string) (int64, error) {
var size int64
err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
size += info.Size()
}
return nil
})
return size, err
}
================================================
FILE: cmd/hi/docker.go
================================================
package main
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/client"
"github.com/docker/docker/pkg/stdcopy"
"github.com/juanfont/headscale/integration/dockertestutil"
)
const defaultDirPerm = 0o755
var (
ErrTestFailed = errors.New("test failed")
ErrUnexpectedContainerWait = errors.New("unexpected end of container wait")
ErrNoDockerContext = errors.New("no docker context found")
ErrMemoryLimitViolations = errors.New("container(s) exceeded memory limits")
)
// runTestContainer executes integration tests in a Docker container.
//
//nolint:gocyclo // complex test orchestration function
func runTestContainer(ctx context.Context, config *RunConfig) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
runID := dockertestutil.GenerateRunID()
containerName := "headscale-test-suite-" + runID
logsDir := filepath.Join(config.LogsDir, runID)
if config.Verbose {
log.Printf("Run ID: %s", runID)
log.Printf("Container name: %s", containerName)
log.Printf("Logs directory: %s", logsDir)
}
absLogsDir, err := filepath.Abs(logsDir)
if err != nil {
return fmt.Errorf("getting absolute path for logs directory: %w", err)
}
const dirPerm = 0o755
if err := os.MkdirAll(absLogsDir, dirPerm); err != nil { //nolint:noinlineerr
return fmt.Errorf("creating logs directory: %w", err)
}
if config.CleanBefore {
if config.Verbose {
log.Printf("Running pre-test cleanup...")
}
err := cleanupBeforeTest(ctx)
if err != nil && config.Verbose {
log.Printf("Warning: pre-test cleanup failed: %v", err)
}
}
goTestCmd := buildGoTestCommand(config)
if config.Verbose {
log.Printf("Command: %s", strings.Join(goTestCmd, " "))
}
imageName := "golang:" + config.GoVersion
if err := ensureImageAvailable(ctx, cli, imageName, config.Verbose); err != nil { //nolint:noinlineerr
return fmt.Errorf("ensuring image availability: %w", err)
}
resp, err := createGoTestContainer(ctx, cli, config, containerName, absLogsDir, goTestCmd)
if err != nil {
return fmt.Errorf("creating container: %w", err)
}
if config.Verbose {
log.Printf("Created container: %s", resp.ID)
}
if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil { //nolint:noinlineerr
return fmt.Errorf("starting container: %w", err)
}
log.Printf("Starting test: %s", config.TestPattern)
log.Printf("Run ID: %s", runID)
log.Printf("Monitor with: docker logs -f %s", containerName)
log.Printf("Logs directory: %s", logsDir)
// Start stats collection for container resource monitoring (if enabled)
var statsCollector *StatsCollector
if config.Stats {
var err error
statsCollector, err = NewStatsCollector(ctx)
if err != nil {
if config.Verbose {
log.Printf("Warning: failed to create stats collector: %v", err)
}
statsCollector = nil
}
if statsCollector != nil {
defer statsCollector.Close()
// Start stats collection immediately - no need for complex retry logic
// The new implementation monitors Docker events and will catch containers as they start
err := statsCollector.StartCollection(ctx, runID, config.Verbose)
if err != nil {
if config.Verbose {
log.Printf("Warning: failed to start stats collection: %v", err)
}
}
defer statsCollector.StopCollection()
}
}
exitCode, err := streamAndWait(ctx, cli, resp.ID)
// Ensure all containers have finished and logs are flushed before extracting artifacts
waitErr := waitForContainerFinalization(ctx, cli, resp.ID, config.Verbose)
if waitErr != nil && config.Verbose {
log.Printf("Warning: failed to wait for container finalization: %v", waitErr)
}
// Extract artifacts from test containers before cleanup
if err := extractArtifactsFromContainers(ctx, resp.ID, logsDir, config.Verbose); err != nil && config.Verbose { //nolint:noinlineerr
log.Printf("Warning: failed to extract artifacts from containers: %v", err)
}
// Always list control files regardless of test outcome
listControlFiles(logsDir)
// Print stats summary and check memory limits if enabled
if config.Stats && statsCollector != nil {
violations := statsCollector.PrintSummaryAndCheckLimits(config.HSMemoryLimit, config.TSMemoryLimit)
if len(violations) > 0 {
log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:")
log.Printf("=================================")
for _, violation := range violations {
log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB)
}
return fmt.Errorf("test failed: %d %w", len(violations), ErrMemoryLimitViolations)
}
}
shouldCleanup := config.CleanAfter && (!config.KeepOnFailure || exitCode == 0)
if shouldCleanup {
if config.Verbose {
log.Printf("Running post-test cleanup for run %s...", runID)
}
cleanErr := cleanupAfterTest(ctx, cli, resp.ID, runID)
if cleanErr != nil && config.Verbose {
log.Printf("Warning: post-test cleanup failed: %v", cleanErr)
}
// Clean up artifacts from successful tests to save disk space in CI
if exitCode == 0 {
if config.Verbose {
log.Printf("Test succeeded, cleaning up artifacts to save disk space...")
}
cleanErr := cleanupSuccessfulTestArtifacts(logsDir, config.Verbose)
if cleanErr != nil && config.Verbose {
log.Printf("Warning: artifact cleanup failed: %v", cleanErr)
}
}
}
if err != nil {
return fmt.Errorf("executing test: %w", err)
}
if exitCode != 0 {
return fmt.Errorf("%w: exit code %d", ErrTestFailed, exitCode)
}
log.Printf("Test completed successfully!")
return nil
}
// buildGoTestCommand constructs the go test command arguments.
func buildGoTestCommand(config *RunConfig) []string {
cmd := []string{"go", "test", "./..."}
if config.TestPattern != "" {
cmd = append(cmd, "-run", config.TestPattern)
}
if config.FailFast {
cmd = append(cmd, "-failfast")
}
cmd = append(cmd, "-timeout", config.Timeout.String())
cmd = append(cmd, "-v")
return cmd
}
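// For illustration, a RunConfig with TestPattern "TestACL", FailFast
// set and a 30-minute timeout yields:
//
//	go test ./... -run TestACL -failfast -timeout 30m0s -v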
// createGoTestContainer creates a Docker container configured for running integration tests.
func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunConfig, containerName, logsDir string, goTestCmd []string) (container.CreateResponse, error) {
pwd, err := os.Getwd()
if err != nil {
return container.CreateResponse{}, fmt.Errorf("getting working directory: %w", err)
}
projectRoot := findProjectRoot(pwd)
runID := dockertestutil.ExtractRunIDFromContainerName(containerName)
env := []string{
fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)),
"HEADSCALE_INTEGRATION_RUN_ID=" + runID,
}
// Pass through CI environment variable for CI detection
if ci := os.Getenv("CI"); ci != "" {
env = append(env, "CI="+ci)
}
// Pass through all HEADSCALE_INTEGRATION_* environment variables
for _, e := range os.Environ() {
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") {
// Skip the ones we already set explicitly
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") ||
strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") {
continue
}
env = append(env, e)
}
}
// Set GOCACHE to a known location (used by both bind mount and volume cases)
env = append(env, "GOCACHE=/cache/go-build")
containerConfig := &container.Config{
Image: "golang:" + config.GoVersion,
Cmd: goTestCmd,
Env: env,
WorkingDir: projectRoot + "/integration",
Tty: true,
Labels: map[string]string{
"hi.run-id": runID,
"hi.test-type": "test-runner",
},
}
// Get the correct Docker socket path from the current context
dockerSocketPath := getDockerSocketPath()
if config.Verbose {
log.Printf("Using Docker socket: %s", dockerSocketPath)
}
binds := []string{
fmt.Sprintf("%s:%s", projectRoot, projectRoot),
dockerSocketPath + ":/var/run/docker.sock",
logsDir + ":/tmp/control",
}
// Use bind mounts for Go cache if provided via environment variables,
// otherwise fall back to Docker volumes for local development
var mounts []mount.Mount
goCache := os.Getenv("HEADSCALE_INTEGRATION_GO_CACHE")
goBuildCache := os.Getenv("HEADSCALE_INTEGRATION_GO_BUILD_CACHE")
if goCache != "" {
binds = append(binds, goCache+":/go")
} else {
mounts = append(mounts, mount.Mount{
Type: mount.TypeVolume,
Source: "hs-integration-go-cache",
Target: "/go",
})
}
if goBuildCache != "" {
binds = append(binds, goBuildCache+":/cache/go-build")
} else {
mounts = append(mounts, mount.Mount{
Type: mount.TypeVolume,
Source: "hs-integration-go-build-cache",
Target: "/cache/go-build",
})
}
hostConfig := &container.HostConfig{
AutoRemove: false, // We'll remove manually for better control
Binds: binds,
Mounts: mounts,
}
return cli.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, containerName)
}
// streamAndWait streams container output and waits for completion.
func streamAndWait(ctx context.Context, cli *client.Client, containerID string) (int, error) {
out, err := cli.ContainerLogs(ctx, containerID, container.LogsOptions{
ShowStdout: true,
ShowStderr: true,
Follow: true,
})
if err != nil {
return -1, fmt.Errorf("getting container logs: %w", err)
}
defer out.Close()
go func() {
_, _ = io.Copy(os.Stdout, out)
}()
statusCh, errCh := cli.ContainerWait(ctx, containerID, container.WaitConditionNotRunning)
select {
case err := <-errCh:
if err != nil {
return -1, fmt.Errorf("waiting for container: %w", err)
}
case status := <-statusCh:
return int(status.StatusCode), nil
}
return -1, ErrUnexpectedContainerWait
}
// waitForContainerFinalization ensures all test containers have properly finished and flushed their output.
func waitForContainerFinalization(ctx context.Context, cli *client.Client, testContainerID string, verbose bool) error {
// First, get all related test containers
containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
if err != nil {
return fmt.Errorf("listing containers: %w", err)
}
testContainers := getCurrentTestContainers(containers, testContainerID, verbose)
// Wait for all test containers to reach a final state
maxWaitTime := 10 * time.Second
checkInterval := 500 * time.Millisecond
timeout := time.After(maxWaitTime)
ticker := time.NewTicker(checkInterval)
defer ticker.Stop()
for {
select {
case <-timeout:
if verbose {
log.Printf("Timeout waiting for container finalization, proceeding with artifact extraction")
}
return nil
case <-ticker.C:
allFinalized := true
for _, testCont := range testContainers {
inspect, err := cli.ContainerInspect(ctx, testCont.ID)
if err != nil {
if verbose {
log.Printf("Warning: failed to inspect container %s: %v", testCont.name, err)
}
continue
}
// Check if container is in a final state
if !isContainerFinalized(inspect.State) {
allFinalized = false
if verbose {
log.Printf("Container %s still finalizing (state: %s)", testCont.name, inspect.State.Status)
}
break
}
}
if allFinalized {
if verbose {
log.Printf("All test containers finalized, ready for artifact extraction")
}
return nil
}
}
}
}
// isContainerFinalized checks if a container has reached a final state where logs are flushed.
func isContainerFinalized(state *container.State) bool {
// Container is finalized if it's not running and has a finish time
return !state.Running && state.FinishedAt != ""
}
// findProjectRoot locates the project root by finding the directory containing go.mod.
func findProjectRoot(startPath string) string {
current := startPath
for {
if _, err := os.Stat(filepath.Join(current, "go.mod")); err == nil { //nolint:noinlineerr
return current
}
parent := filepath.Dir(current)
if parent == current {
return startPath
}
current = parent
}
}
// boolToInt converts a boolean to an integer for environment variables.
func boolToInt(b bool) int {
if b {
return 1
}
return 0
}
// DockerContext represents Docker context information.
type DockerContext struct {
Name string `json:"Name"`
Metadata map[string]any `json:"Metadata"`
Endpoints map[string]any `json:"Endpoints"`
Current bool `json:"Current"`
}
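// Abbreviated, illustrative `docker context inspect` output that unmarshals
// into this struct; createDockerClient reads Endpoints["docker"].Host:
//
//   [{"Name": "colima", "Endpoints": {"docker": {"Host": "unix:///Users/me/.colima/docker.sock"}}}]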
// createDockerClient creates a Docker client with context detection.
func createDockerClient(ctx context.Context) (*client.Client, error) {
contextInfo, err := getCurrentDockerContext(ctx)
if err != nil {
return client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
}
var clientOpts []client.Opt
clientOpts = append(clientOpts, client.WithAPIVersionNegotiation())
if contextInfo != nil {
if endpoints, ok := contextInfo.Endpoints["docker"]; ok {
if endpointMap, ok := endpoints.(map[string]any); ok {
if host, ok := endpointMap["Host"].(string); ok {
if runConfig.Verbose {
log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host)
}
clientOpts = append(clientOpts, client.WithHost(host))
}
}
}
}
if len(clientOpts) == 1 {
clientOpts = append(clientOpts, client.FromEnv)
}
return client.NewClientWithOpts(clientOpts...)
}
// getCurrentDockerContext retrieves the current Docker context information.
func getCurrentDockerContext(ctx context.Context) (*DockerContext, error) {
cmd := exec.CommandContext(ctx, "docker", "context", "inspect")
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("getting docker context: %w", err)
}
var contexts []DockerContext
if err := json.Unmarshal(output, &contexts); err != nil { //nolint:noinlineerr
return nil, fmt.Errorf("parsing docker context: %w", err)
}
if len(contexts) > 0 {
return &contexts[0], nil
}
return nil, ErrNoDockerContext
}
// getDockerSocketPath returns the correct Docker socket path for the current context.
func getDockerSocketPath() string {
// Always use the default socket path for mounting since Docker handles
// the translation to the actual socket (e.g., colima socket) internally
return "/var/run/docker.sock"
}
// checkImageAvailableLocally checks if the specified Docker image is available locally.
func checkImageAvailableLocally(ctx context.Context, cli *client.Client, imageName string) (bool, error) {
_, _, err := cli.ImageInspectWithRaw(ctx, imageName) //nolint:staticcheck // SA1019: deprecated but functional
if err != nil {
if client.IsErrNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
return false, nil
}
return false, fmt.Errorf("inspecting image %s: %w", imageName, err)
}
return true, nil
}
// ensureImageAvailable checks if the image is available locally first, then pulls if needed.
func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName string, verbose bool) error {
// First check if image is available locally
available, err := checkImageAvailableLocally(ctx, cli, imageName)
if err != nil {
return fmt.Errorf("checking local image availability: %w", err)
}
if available {
if verbose {
log.Printf("Image %s is available locally", imageName)
}
return nil
}
// Image not available locally, try to pull it
if verbose {
log.Printf("Image %s not found locally, pulling...", imageName)
}
reader, err := cli.ImagePull(ctx, imageName, image.PullOptions{})
if err != nil {
return fmt.Errorf("pulling image %s: %w", imageName, err)
}
defer reader.Close()
if verbose {
_, err = io.Copy(os.Stdout, reader)
if err != nil {
return fmt.Errorf("reading pull output: %w", err)
}
} else {
_, err = io.Copy(io.Discard, reader)
if err != nil {
return fmt.Errorf("reading pull output: %w", err)
}
log.Printf("Image %s pulled successfully", imageName)
}
return nil
}
// listControlFiles displays the headscale test artifacts created in the control logs directory.
func listControlFiles(logsDir string) {
entries, err := os.ReadDir(logsDir)
if err != nil {
log.Printf("Logs directory: %s", logsDir)
return
}
var (
logFiles []string
dataFiles []string
dataDirs []string
)
for _, entry := range entries {
name := entry.Name()
// Only show headscale (hs-*) files and directories
if !strings.HasPrefix(name, "hs-") {
continue
}
if entry.IsDir() {
// Include directories (pprof, mapresponses)
if strings.Contains(name, "-pprof") || strings.Contains(name, "-mapresponses") {
dataDirs = append(dataDirs, name)
}
} else {
// Include files
switch {
case strings.HasSuffix(name, ".stderr.log") || strings.HasSuffix(name, ".stdout.log"):
logFiles = append(logFiles, name)
case strings.HasSuffix(name, ".db"):
dataFiles = append(dataFiles, name)
}
}
}
log.Printf("Test artifacts saved to: %s", logsDir)
if len(logFiles) > 0 {
log.Printf("Headscale logs:")
for _, file := range logFiles {
log.Printf(" %s", file)
}
}
if len(dataFiles) > 0 || len(dataDirs) > 0 {
log.Printf("Headscale data:")
for _, file := range dataFiles {
log.Printf(" %s", file)
}
for _, dir := range dataDirs {
log.Printf(" %s/", dir)
}
}
}
// extractArtifactsFromContainers collects container logs and files from the specific test run.
func extractArtifactsFromContainers(ctx context.Context, testContainerID, logsDir string, verbose bool) error {
cli, err := createDockerClient(ctx)
if err != nil {
return fmt.Errorf("creating Docker client: %w", err)
}
defer cli.Close()
// List all containers
containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
if err != nil {
return fmt.Errorf("listing containers: %w", err)
}
// Get containers from the specific test run
currentTestContainers := getCurrentTestContainers(containers, testContainerID, verbose)
extractedCount := 0
for _, cont := range currentTestContainers {
// Extract container logs and tar files
err := extractContainerArtifacts(ctx, cli, cont.ID, cont.name, logsDir, verbose)
if err != nil {
if verbose {
log.Printf("Warning: failed to extract artifacts from container %s (%s): %v", cont.name, cont.ID[:12], err)
}
} else {
if verbose {
log.Printf("Extracted artifacts from container %s (%s)", cont.name, cont.ID[:12])
}
extractedCount++
}
}
if verbose && extractedCount > 0 {
log.Printf("Extracted artifacts from %d containers", extractedCount)
}
return nil
}
// testContainer represents a container from the current test run.
type testContainer struct {
ID string
name string
}
// getCurrentTestContainers filters containers to only include those from the current test run.
func getCurrentTestContainers(containers []container.Summary, testContainerID string, verbose bool) []testContainer {
var testRunContainers []testContainer
// Find the test container to get its run ID label
var runID string
for _, cont := range containers {
if cont.ID == testContainerID {
if cont.Labels != nil {
runID = cont.Labels["hi.run-id"]
}
break
}
}
if runID == "" {
log.Printf("Error: test container %s missing required hi.run-id label", testContainerID[:12])
return testRunContainers
}
if verbose {
log.Printf("Looking for containers with run ID: %s", runID)
}
// Find all containers with the same run ID
for _, cont := range containers {
for _, name := range cont.Names {
containerName := strings.TrimPrefix(name, "/")
if strings.HasPrefix(containerName, "hs-") || strings.HasPrefix(containerName, "ts-") {
// Check if container has matching run ID label
if cont.Labels != nil && cont.Labels["hi.run-id"] == runID {
testRunContainers = append(testRunContainers, testContainer{
ID: cont.ID,
name: containerName,
})
if verbose {
log.Printf("Including container %s (run ID: %s)", containerName, runID)
}
}
break
}
}
}
return testRunContainers
}
// extractContainerArtifacts saves logs and tar files from a container.
func extractContainerArtifacts(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
// Ensure the logs directory exists
err := os.MkdirAll(logsDir, defaultDirPerm)
if err != nil {
return fmt.Errorf("creating logs directory: %w", err)
}
// Extract container logs
err = extractContainerLogs(ctx, cli, containerID, containerName, logsDir, verbose)
if err != nil {
return fmt.Errorf("extracting logs: %w", err)
}
// Extract tar files for headscale containers only
if strings.HasPrefix(containerName, "hs-") {
err := extractContainerFiles(ctx, cli, containerID, containerName, logsDir, verbose)
if err != nil {
if verbose {
log.Printf("Warning: failed to extract files from %s: %v", containerName, err)
}
// Don't fail the whole extraction if files are missing
}
}
return nil
}
// extractContainerLogs saves the stdout and stderr logs from a container to files.
func extractContainerLogs(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
// Get container logs
logReader, err := cli.ContainerLogs(ctx, containerID, container.LogsOptions{
ShowStdout: true,
ShowStderr: true,
Timestamps: false,
Follow: false,
Tail: "all",
})
if err != nil {
return fmt.Errorf("getting container logs: %w", err)
}
defer logReader.Close()
// Create log files following the headscale naming convention
stdoutPath := filepath.Join(logsDir, containerName+".stdout.log")
stderrPath := filepath.Join(logsDir, containerName+".stderr.log")
// Create buffers to capture stdout and stderr separately
var stdoutBuf, stderrBuf bytes.Buffer
// Demultiplex the Docker logs stream to separate stdout and stderr
_, err = stdcopy.StdCopy(&stdoutBuf, &stderrBuf, logReader)
if err != nil {
return fmt.Errorf("demultiplexing container logs: %w", err)
}
// Write stdout logs
if err := os.WriteFile(stdoutPath, stdoutBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
return fmt.Errorf("writing stdout log: %w", err)
}
// Write stderr logs
if err := os.WriteFile(stderrPath, stderrBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
return fmt.Errorf("writing stderr log: %w", err)
}
if verbose {
log.Printf("Saved logs for %s: %s, %s", containerName, stdoutPath, stderrPath)
}
return nil
}
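// For a container named "hs-foo" this writes <logsDir>/hs-foo.stdout.log and
// <logsDir>/hs-foo.stderr.log, the naming convention listControlFiles expects.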
// extractContainerFiles extracts database file and directories from headscale containers.
// Note: The actual file extraction is now handled by the integration tests themselves
// via SaveProfile, SaveMapResponses, and SaveDatabase functions in hsic.go.
func extractContainerFiles(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
// Files are now extracted directly by the integration tests
// This function is kept for potential future use or other file types
return nil
}
================================================
FILE: cmd/hi/doctor.go
================================================
package main
import (
"context"
"errors"
"fmt"
"log"
"os/exec"
"strings"
)
var ErrSystemChecksFailed = errors.New("system checks failed")
// DoctorResult represents the result of a single health check.
type DoctorResult struct {
Name string
Status string // "PASS", "FAIL", "WARN"
Message string
Suggestions []string
}
// runDoctorCheck performs comprehensive pre-flight checks for integration testing.
func runDoctorCheck(ctx context.Context) error {
results := []DoctorResult{}
// Check 1: Docker binary availability
results = append(results, checkDockerBinary())
// Check 2: Docker daemon connectivity
dockerResult := checkDockerDaemon(ctx)
results = append(results, dockerResult)
// If Docker is available, run additional checks
if dockerResult.Status == "PASS" {
results = append(results, checkDockerContext(ctx))
results = append(results, checkDockerSocket(ctx))
results = append(results, checkGolangImage(ctx))
}
// Check 3: Go installation
results = append(results, checkGoInstallation(ctx))
// Check 4: Git repository
results = append(results, checkGitRepository(ctx))
// Check 5: Required files
results = append(results, checkRequiredFiles(ctx))
// Display results
displayDoctorResults(results)
// Return error if any critical checks failed
for _, result := range results {
if result.Status == "FAIL" {
return fmt.Errorf("%w - see details above", ErrSystemChecksFailed)
}
}
log.Printf("✅ All system checks passed - ready to run integration tests!")
return nil
}
// checkDockerBinary verifies Docker binary is available.
func checkDockerBinary() DoctorResult {
_, err := exec.LookPath("docker")
if err != nil {
return DoctorResult{
Name: "Docker Binary",
Status: "FAIL",
Message: "Docker binary not found in PATH",
Suggestions: []string{
"Install Docker: https://docs.docker.com/get-docker/",
"For macOS: consider using colima or Docker Desktop",
"Ensure docker is in your PATH",
},
}
}
return DoctorResult{
Name: "Docker Binary",
Status: "PASS",
Message: "Docker binary found",
}
}
// checkDockerDaemon verifies Docker daemon is running and accessible.
func checkDockerDaemon(ctx context.Context) DoctorResult {
cli, err := createDockerClient(ctx)
if err != nil {
return DoctorResult{
Name: "Docker Daemon",
Status: "FAIL",
Message: fmt.Sprintf("Cannot create Docker client: %v", err),
Suggestions: []string{
"Start Docker daemon/service",
"Check Docker Desktop is running (if using Docker Desktop)",
"For colima: run 'colima start'",
"Verify DOCKER_HOST environment variable if set",
},
}
}
defer cli.Close()
_, err = cli.Ping(ctx)
if err != nil {
return DoctorResult{
Name: "Docker Daemon",
Status: "FAIL",
Message: fmt.Sprintf("Cannot ping Docker daemon: %v", err),
Suggestions: []string{
"Ensure Docker daemon is running",
"Check Docker socket permissions",
"Try: docker info",
},
}
}
return DoctorResult{
Name: "Docker Daemon",
Status: "PASS",
Message: "Docker daemon is running and accessible",
}
}
// checkDockerContext verifies Docker context configuration.
func checkDockerContext(ctx context.Context) DoctorResult {
contextInfo, err := getCurrentDockerContext(ctx)
if err != nil {
return DoctorResult{
Name: "Docker Context",
Status: "WARN",
Message: "Could not detect Docker context, using default settings",
Suggestions: []string{
"Check: docker context ls",
"Consider setting up a specific context if needed",
},
}
}
if contextInfo == nil {
return DoctorResult{
Name: "Docker Context",
Status: "PASS",
Message: "Using default Docker context",
}
}
return DoctorResult{
Name: "Docker Context",
Status: "PASS",
Message: "Using Docker context: " + contextInfo.Name,
}
}
// checkDockerSocket verifies Docker socket accessibility.
func checkDockerSocket(ctx context.Context) DoctorResult {
cli, err := createDockerClient(ctx)
if err != nil {
return DoctorResult{
Name: "Docker Socket",
Status: "FAIL",
Message: fmt.Sprintf("Cannot access Docker socket: %v", err),
Suggestions: []string{
"Check Docker socket permissions",
"Add user to docker group: sudo usermod -aG docker $USER",
"For colima: ensure socket is accessible",
},
}
}
defer cli.Close()
info, err := cli.Info(ctx)
if err != nil {
return DoctorResult{
Name: "Docker Socket",
Status: "FAIL",
Message: fmt.Sprintf("Cannot get Docker info: %v", err),
Suggestions: []string{
"Check Docker daemon status",
"Verify socket permissions",
},
}
}
return DoctorResult{
Name: "Docker Socket",
Status: "PASS",
Message: fmt.Sprintf("Docker socket accessible (Server: %s)", info.ServerVersion),
}
}
// checkGolangImage verifies the golang Docker image is available locally or can be pulled.
func checkGolangImage(ctx context.Context) DoctorResult {
cli, err := createDockerClient(ctx)
if err != nil {
return DoctorResult{
Name: "Golang Image",
Status: "FAIL",
Message: "Cannot create Docker client for image check",
}
}
defer cli.Close()
goVersion := detectGoVersion()
imageName := "golang:" + goVersion
// First check if image is available locally
available, err := checkImageAvailableLocally(ctx, cli, imageName)
if err != nil {
return DoctorResult{
Name: "Golang Image",
Status: "FAIL",
Message: fmt.Sprintf("Cannot check golang image %s: %v", imageName, err),
Suggestions: []string{
"Check Docker daemon status",
"Try: docker images | grep golang",
},
}
}
if available {
return DoctorResult{
Name: "Golang Image",
Status: "PASS",
Message: fmt.Sprintf("Golang image %s is available locally", imageName),
}
}
// Image not available locally, try to pull it
err = ensureImageAvailable(ctx, cli, imageName, false)
if err != nil {
return DoctorResult{
Name: "Golang Image",
Status: "FAIL",
Message: fmt.Sprintf("Golang image %s not available locally and cannot pull: %v", imageName, err),
Suggestions: []string{
"Check internet connectivity",
"Verify Docker Hub access",
"Try: docker pull " + imageName,
"Or run tests offline if image was pulled previously",
},
}
}
return DoctorResult{
Name: "Golang Image",
Status: "PASS",
Message: fmt.Sprintf("Golang image %s is now available", imageName),
}
}
// checkGoInstallation verifies Go is installed and working.
func checkGoInstallation(ctx context.Context) DoctorResult {
_, err := exec.LookPath("go")
if err != nil {
return DoctorResult{
Name: "Go Installation",
Status: "FAIL",
Message: "Go binary not found in PATH",
Suggestions: []string{
"Install Go: https://golang.org/dl/",
"Ensure go is in your PATH",
},
}
}
cmd := exec.CommandContext(ctx, "go", "version")
output, err := cmd.Output()
if err != nil {
return DoctorResult{
Name: "Go Installation",
Status: "FAIL",
Message: fmt.Sprintf("Cannot get Go version: %v", err),
}
}
version := strings.TrimSpace(string(output))
return DoctorResult{
Name: "Go Installation",
Status: "PASS",
Message: version,
}
}
// checkGitRepository verifies we're in a git repository.
func checkGitRepository(ctx context.Context) DoctorResult {
cmd := exec.CommandContext(ctx, "git", "rev-parse", "--git-dir")
err := cmd.Run()
if err != nil {
return DoctorResult{
Name: "Git Repository",
Status: "FAIL",
Message: "Not in a Git repository",
Suggestions: []string{
"Run from within the headscale git repository",
"Clone the repository: git clone https://github.com/juanfont/headscale.git",
},
}
}
return DoctorResult{
Name: "Git Repository",
Status: "PASS",
Message: "Running in Git repository",
}
}
// checkRequiredFiles verifies required files exist.
func checkRequiredFiles(ctx context.Context) DoctorResult {
requiredFiles := []string{
"go.mod",
"integration/",
"cmd/hi/",
}
var missingFiles []string
for _, file := range requiredFiles {
cmd := exec.CommandContext(ctx, "test", "-e", file)
err := cmd.Run()
if err != nil {
missingFiles = append(missingFiles, file)
}
}
if len(missingFiles) > 0 {
return DoctorResult{
Name: "Required Files",
Status: "FAIL",
Message: "Missing required files: " + strings.Join(missingFiles, ", "),
Suggestions: []string{
"Ensure you're in the headscale project root directory",
"Check that integration/ directory exists",
"Verify this is a complete headscale repository",
},
}
}
return DoctorResult{
Name: "Required Files",
Status: "PASS",
Message: "All required files found",
}
}
// displayDoctorResults shows the results in a formatted way.
func displayDoctorResults(results []DoctorResult) {
log.Printf("🔍 System Health Check Results")
log.Printf("================================")
for _, result := range results {
var icon string
switch result.Status {
case "PASS":
icon = "✅"
case "WARN":
icon = "⚠️"
case "FAIL":
icon = "❌"
default:
icon = "❓"
}
log.Printf("%s %s: %s", icon, result.Name, result.Message)
if len(result.Suggestions) > 0 {
for _, suggestion := range result.Suggestions {
log.Printf(" 💡 %s", suggestion)
}
}
}
log.Printf("================================")
}
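// Illustrative output for a healthy setup (abbreviated):
//
//   🔍 System Health Check Results
//   ================================
//   ✅ Docker Binary: Docker binary found
//   ✅ Docker Daemon: Docker daemon is running and accessible
//   ✅ Go Installation: go version go1.24.0 darwin/arm64
//   ================================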
================================================
FILE: cmd/hi/main.go
================================================
package main
import (
"context"
"os"
"github.com/creachadair/command"
"github.com/creachadair/flax"
)
var runConfig RunConfig
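// Typical invocations (the test pattern is illustrative):
//
//   hi run TestPingAllByIP --verbose
//   hi doctor
//   hi clean all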
func main() {
root := command.C{
Name: "hi",
Help: "Headscale Integration test runner",
Commands: []*command.C{
{
Name: "run",
Help: "Run integration tests",
Usage: "run [test-pattern] [flags]",
SetFlags: command.Flags(flax.MustBind, &runConfig),
Run: runIntegrationTest,
},
{
Name: "doctor",
Help: "Check system requirements for running integration tests",
Run: func(env *command.Env) error {
return runDoctorCheck(env.Context())
},
},
{
Name: "clean",
Help: "Clean Docker resources",
Commands: []*command.C{
{
Name: "networks",
Help: "Prune unused Docker networks",
Run: func(env *command.Env) error {
return pruneDockerNetworks(env.Context())
},
},
{
Name: "images",
Help: "Clean old test images",
Run: func(env *command.Env) error {
return cleanOldImages(env.Context())
},
},
{
Name: "containers",
Help: "Kill all test containers",
Run: func(env *command.Env) error {
return killTestContainers(env.Context())
},
},
{
Name: "cache",
Help: "Clean Go module cache volume",
Run: func(env *command.Env) error {
return cleanCacheVolume(env.Context())
},
},
{
Name: "all",
Help: "Run all cleanup operations",
Run: func(env *command.Env) error {
return cleanAll(env.Context())
},
},
},
},
command.HelpCommand(nil),
},
}
env := root.NewEnv(nil).MergeFlags(true)
command.RunOrFail(env, os.Args[1:])
}
func cleanAll(ctx context.Context) error {
err := killTestContainers(ctx)
if err != nil {
return err
}
err = pruneDockerNetworks(ctx)
if err != nil {
return err
}
err = cleanOldImages(ctx)
if err != nil {
return err
}
return cleanCacheVolume(ctx)
}
================================================
FILE: cmd/hi/run.go
================================================
package main
import (
"errors"
"fmt"
"log"
"os"
"path/filepath"
"time"
"github.com/creachadair/command"
)
var ErrTestPatternRequired = errors.New("test pattern is required as first argument or use --test flag")
type RunConfig struct {
TestPattern string `flag:"test,Test pattern to run"`
Timeout time.Duration `flag:"timeout,default=120m,Test timeout"`
FailFast bool `flag:"failfast,default=true,Stop on first test failure"`
UsePostgres bool `flag:"postgres,default=false,Use PostgreSQL instead of SQLite"`
GoVersion string `flag:"go-version,Go version to use (auto-detected from go.mod)"`
CleanBefore bool `flag:"clean-before,default=true,Clean stale resources before test"`
CleanAfter bool `flag:"clean-after,default=true,Clean resources after test"`
KeepOnFailure bool `flag:"keep-on-failure,default=false,Keep containers on test failure"`
LogsDir string `flag:"logs-dir,default=control_logs,Control logs directory"`
Verbose bool `flag:"verbose,default=false,Verbose output"`
Stats bool `flag:"stats,default=false,Collect and display container resource usage statistics"`
HSMemoryLimit float64 `flag:"hs-memory-limit,default=0,Fail test if any Headscale container exceeds this memory limit in MB (0 = disabled)"`
TSMemoryLimit float64 `flag:"ts-memory-limit,default=0,Fail test if any Tailscale container exceeds this memory limit in MB (0 = disabled)"`
}
// runIntegrationTest executes the integration test workflow.
func runIntegrationTest(env *command.Env) error {
args := env.Args
if len(args) > 0 && runConfig.TestPattern == "" {
runConfig.TestPattern = args[0]
}
if runConfig.TestPattern == "" {
return ErrTestPatternRequired
}
if runConfig.GoVersion == "" {
runConfig.GoVersion = detectGoVersion()
}
// Run pre-flight checks
if runConfig.Verbose {
log.Printf("Running pre-flight system checks...")
}
err := runDoctorCheck(env.Context())
if err != nil {
return fmt.Errorf("pre-flight checks failed: %w", err)
}
if runConfig.Verbose {
log.Printf("Running test: %s", runConfig.TestPattern)
log.Printf("Go version: %s", runConfig.GoVersion)
log.Printf("Timeout: %s", runConfig.Timeout)
log.Printf("Use PostgreSQL: %t", runConfig.UsePostgres)
}
return runTestContainer(env.Context(), &runConfig)
}
// detectGoVersion reads the Go version from go.mod file.
func detectGoVersion() string {
goModPath := filepath.Join("..", "..", "go.mod")
if _, err := os.Stat("go.mod"); err == nil { //nolint:noinlineerr
goModPath = "go.mod"
} else if _, err := os.Stat("../../go.mod"); err == nil { //nolint:noinlineerr
goModPath = "../../go.mod"
}
content, err := os.ReadFile(goModPath)
if err != nil {
return "1.26.1"
}
lines := splitLines(string(content))
for _, line := range lines {
if len(line) > 3 && line[:3] == "go " {
version := line[3:]
if idx := indexOf(version, " "); idx != -1 {
version = version[:idx]
}
return version
}
}
return "1.26.1"
}
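// For example, a go.mod line "go 1.24.0" yields "1.24.0"; anything after a
// further space in the directive is trimmed off.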
// splitLines splits a string into lines without using strings.Split.
func splitLines(s string) []string {
var (
lines []string
current string
)
for _, char := range s {
if char == '\n' {
lines = append(lines, current)
current = ""
} else {
current += string(char)
}
}
if current != "" {
lines = append(lines, current)
}
return lines
}
// indexOf finds the first occurrence of substr in s.
func indexOf(s, substr string) int {
for i := 0; i <= len(s)-len(substr); i++ {
if s[i:i+len(substr)] == substr {
return i
}
}
return -1
}
================================================
FILE: cmd/hi/stats.go
================================================
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"sort"
"strings"
"sync"
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/client"
)
// ErrStatsCollectionAlreadyStarted is returned when trying to start stats collection that is already running.
var ErrStatsCollectionAlreadyStarted = errors.New("stats collection already started")
// ContainerStats represents statistics for a single container.
type ContainerStats struct {
ContainerID string
ContainerName string
Stats []StatsSample
mutex sync.RWMutex
}
// StatsSample represents a single stats measurement.
type StatsSample struct {
Timestamp time.Time
CPUUsage float64 // CPU usage percentage
MemoryMB float64 // Memory usage in MB
}
// StatsCollector manages collection of container statistics.
type StatsCollector struct {
client *client.Client
containers map[string]*ContainerStats
stopChan chan struct{}
wg sync.WaitGroup
mutex sync.RWMutex
collectionStarted bool
}
// NewStatsCollector creates a new stats collector instance.
func NewStatsCollector(ctx context.Context) (*StatsCollector, error) {
cli, err := createDockerClient(ctx)
if err != nil {
return nil, fmt.Errorf("creating Docker client: %w", err)
}
return &StatsCollector{
client: cli,
containers: make(map[string]*ContainerStats),
stopChan: make(chan struct{}),
}, nil
}
// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID.
func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, verbose bool) error {
sc.mutex.Lock()
defer sc.mutex.Unlock()
if sc.collectionStarted {
return ErrStatsCollectionAlreadyStarted
}
sc.collectionStarted = true
// Start monitoring existing containers
sc.wg.Add(1)
go sc.monitorExistingContainers(ctx, runID, verbose)
// Start Docker events monitoring for new containers
sc.wg.Add(1)
go sc.monitorDockerEvents(ctx, runID, verbose)
if verbose {
log.Printf("Started container monitoring for run ID %s", runID)
}
return nil
}
// StopCollection stops all stats collection.
func (sc *StatsCollector) StopCollection() {
// Check if already stopped without holding lock
sc.mutex.RLock()
if !sc.collectionStarted {
sc.mutex.RUnlock()
return
}
sc.mutex.RUnlock()
// Signal stop to all goroutines
close(sc.stopChan)
// Wait for all goroutines to finish
sc.wg.Wait()
// Mark as stopped
sc.mutex.Lock()
sc.collectionStarted = false
sc.mutex.Unlock()
}
// monitorExistingContainers checks for existing containers that match our criteria.
func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID string, verbose bool) {
defer sc.wg.Done()
containers, err := sc.client.ContainerList(ctx, container.ListOptions{})
if err != nil {
if verbose {
log.Printf("Failed to list existing containers: %v", err)
}
return
}
for _, cont := range containers {
if sc.shouldMonitorContainer(cont, runID) {
sc.startStatsForContainer(ctx, cont.ID, cont.Names[0], verbose)
}
}
}
// monitorDockerEvents listens for container start events and begins monitoring relevant containers.
func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string, verbose bool) {
defer sc.wg.Done()
filter := filters.NewArgs()
filter.Add("type", "container")
filter.Add("event", "start")
eventOptions := events.ListOptions{
Filters: filter,
}
eventCh, errCh := sc.client.Events(ctx, eventOptions)
for {
select {
case <-sc.stopChan:
return
case <-ctx.Done():
return
case event := <-eventCh:
if event.Type == "container" && event.Action == "start" {
// Get container details
containerInfo, err := sc.client.ContainerInspect(ctx, event.ID) //nolint:staticcheck // SA1019: use Actor.ID
if err != nil {
continue
}
// Convert to types.Container format for consistency
cont := types.Container{ //nolint:staticcheck // SA1019: use container.Summary
ID: containerInfo.ID,
Names: []string{containerInfo.Name},
Labels: containerInfo.Config.Labels,
}
if sc.shouldMonitorContainer(cont, runID) {
sc.startStatsForContainer(ctx, cont.ID, cont.Names[0], verbose)
}
}
case err := <-errCh:
if verbose {
log.Printf("Error in Docker events stream: %v", err)
}
return
}
}
}
// shouldMonitorContainer determines if a container should be monitored.
func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool { //nolint:staticcheck // SA1019: use container.Summary
// Check if it has the correct run ID label
if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
return false
}
// Check if it's an hs- or ts- container
for _, name := range cont.Names {
containerName := strings.TrimPrefix(name, "/")
if strings.HasPrefix(containerName, "hs-") || strings.HasPrefix(containerName, "ts-") {
return true
}
}
return false
}
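// For example, with runID "abc123", a container named "/hs-head-abc123"
// labelled hi.run-id=abc123 is monitored, while a helper container whose name
// lacks the hs-/ts- prefix is not.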
// startStatsForContainer begins stats collection for a specific container.
func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerID, containerName string, verbose bool) {
containerName = strings.TrimPrefix(containerName, "/")
sc.mutex.Lock()
// Check if we're already monitoring this container
if _, exists := sc.containers[containerID]; exists {
sc.mutex.Unlock()
return
}
sc.containers[containerID] = &ContainerStats{
ContainerID: containerID,
ContainerName: containerName,
Stats: make([]StatsSample, 0),
}
sc.mutex.Unlock()
if verbose {
log.Printf("Starting stats collection for container %s (%s)", containerName, containerID[:12])
}
sc.wg.Add(1)
go sc.collectStatsForContainer(ctx, containerID, verbose)
}
// collectStatsForContainer collects stats for a specific container using Docker API streaming.
func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containerID string, verbose bool) {
defer sc.wg.Done()
// Use Docker API streaming stats - much more efficient than CLI
statsResponse, err := sc.client.ContainerStats(ctx, containerID, true)
if err != nil {
if verbose {
log.Printf("Failed to get stats stream for container %s: %v", containerID[:12], err)
}
return
}
defer statsResponse.Body.Close()
decoder := json.NewDecoder(statsResponse.Body)
var prevStats *container.Stats //nolint:staticcheck // SA1019: use StatsResponse
for {
select {
case <-sc.stopChan:
return
case <-ctx.Done():
return
default:
var stats container.Stats //nolint:staticcheck // SA1019: use StatsResponse
err := decoder.Decode(&stats)
if err != nil {
// io.EOF is expected when the container stops or the stream ends.
if !errors.Is(err, io.EOF) && verbose {
log.Printf("Failed to decode stats for container %s: %v", containerID[:12], err)
}
return
}
// Calculate CPU percentage (only if we have previous stats)
var cpuPercent float64
if prevStats != nil {
cpuPercent = calculateCPUPercent(prevStats, &stats)
}
// Calculate memory usage in MB
memoryMB := float64(stats.MemoryStats.Usage) / (1024 * 1024)
// Store the sample (skip first sample since CPU calculation needs previous stats)
if prevStats != nil {
// Get container stats reference without holding the main mutex
var (
containerStats *ContainerStats
exists bool
)
sc.mutex.RLock()
containerStats, exists = sc.containers[containerID]
sc.mutex.RUnlock()
if exists && containerStats != nil {
containerStats.mutex.Lock()
containerStats.Stats = append(containerStats.Stats, StatsSample{
Timestamp: time.Now(),
CPUUsage: cpuPercent,
MemoryMB: memoryMB,
})
containerStats.mutex.Unlock()
}
}
// Save current stats for next iteration
prevStats = &stats
}
}
}
// calculateCPUPercent calculates CPU usage percentage from Docker stats.
func calculateCPUPercent(prevStats, stats *container.Stats) float64 { //nolint:staticcheck // SA1019: use StatsResponse
// CPU calculation based on Docker's implementation
cpuDelta := float64(stats.CPUStats.CPUUsage.TotalUsage) - float64(prevStats.CPUStats.CPUUsage.TotalUsage)
systemDelta := float64(stats.CPUStats.SystemUsage) - float64(prevStats.CPUStats.SystemUsage)
if systemDelta > 0 && cpuDelta >= 0 {
// Calculate CPU percentage: (container CPU delta / system CPU delta) * number of CPUs * 100
numCPUs := float64(len(stats.CPUStats.CPUUsage.PercpuUsage))
if numCPUs == 0 {
// Fallback: if PercpuUsage is not available, assume 1 CPU
numCPUs = 1.0
}
return (cpuDelta / systemDelta) * numCPUs * 100.0
}
return 0.0
}
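// For example, with cpuDelta = 2e8 ns, systemDelta = 1e9 ns and 4 per-CPU
// entries, the reported usage is (2e8 / 1e9) * 4 * 100 = 80%.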
// ContainerStatsSummary represents summary statistics for a container.
type ContainerStatsSummary struct {
ContainerName string
SampleCount int
CPU StatsSummary
Memory StatsSummary
}
// MemoryViolation represents a container that exceeded the memory limit.
type MemoryViolation struct {
ContainerName string
MaxMemoryMB float64
LimitMB float64
}
// StatsSummary represents min, max, and average for a metric.
type StatsSummary struct {
Min float64
Max float64
Average float64
}
// GetSummary returns a summary of collected statistics.
func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
// Take snapshot of container references without holding main lock long
sc.mutex.RLock()
containerRefs := make([]*ContainerStats, 0, len(sc.containers))
for _, containerStats := range sc.containers {
containerRefs = append(containerRefs, containerStats)
}
sc.mutex.RUnlock()
summaries := make([]ContainerStatsSummary, 0, len(containerRefs))
for _, containerStats := range containerRefs {
containerStats.mutex.RLock()
stats := make([]StatsSample, len(containerStats.Stats))
copy(stats, containerStats.Stats)
containerName := containerStats.ContainerName
containerStats.mutex.RUnlock()
if len(stats) == 0 {
continue
}
summary := ContainerStatsSummary{
ContainerName: containerName,
SampleCount: len(stats),
}
// Calculate CPU stats
cpuValues := make([]float64, len(stats))
memoryValues := make([]float64, len(stats))
for i, sample := range stats {
cpuValues[i] = sample.CPUUsage
memoryValues[i] = sample.MemoryMB
}
summary.CPU = calculateStatsSummary(cpuValues)
summary.Memory = calculateStatsSummary(memoryValues)
summaries = append(summaries, summary)
}
// Sort by container name for consistent output
sort.Slice(summaries, func(i, j int) bool {
return summaries[i].ContainerName < summaries[j].ContainerName
})
return summaries
}
// calculateStatsSummary calculates min, max, and average for a slice of values.
func calculateStatsSummary(values []float64) StatsSummary {
if len(values) == 0 {
return StatsSummary{}
}
minVal := values[0]
maxVal := values[0]
sum := 0.0
for _, value := range values {
if value < minVal {
minVal = value
}
if value > maxVal {
maxVal = value
}
sum += value
}
return StatsSummary{
Min: minVal,
Max: maxVal,
Average: sum / float64(len(values)),
}
}
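// For the samples [10, 20, 60] this returns Min: 10, Max: 60, Average: 30.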
// PrintSummary prints the statistics summary to the console.
func (sc *StatsCollector) PrintSummary() {
summaries := sc.GetSummary()
if len(summaries) == 0 {
log.Printf("No container statistics collected")
return
}
log.Printf("Container Resource Usage Summary:")
log.Printf("================================")
for _, summary := range summaries {
log.Printf("Container: %s (%d samples)", summary.ContainerName, summary.SampleCount)
log.Printf(" CPU Usage: Min: %6.2f%% Max: %6.2f%% Avg: %6.2f%%",
summary.CPU.Min, summary.CPU.Max, summary.CPU.Average)
log.Printf(" Memory Usage: Min: %6.1f MB Max: %6.1f MB Avg: %6.1f MB",
summary.Memory.Min, summary.Memory.Max, summary.Memory.Average)
log.Printf("")
}
}
// CheckMemoryLimits checks if any containers exceeded their memory limits.
func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
if hsLimitMB <= 0 && tsLimitMB <= 0 {
return nil
}
summaries := sc.GetSummary()
var violations []MemoryViolation
for _, summary := range summaries {
var limitMB float64
if strings.HasPrefix(summary.ContainerName, "hs-") {
limitMB = hsLimitMB
} else if strings.HasPrefix(summary.ContainerName, "ts-") {
limitMB = tsLimitMB
} else {
continue // Skip containers that don't match our patterns
}
if limitMB > 0 && summary.Memory.Max > limitMB {
violations = append(violations, MemoryViolation{
ContainerName: summary.ContainerName,
MaxMemoryMB: summary.Memory.Max,
LimitMB: limitMB,
})
}
}
return violations
}
// PrintSummaryAndCheckLimits prints the statistics summary and returns memory violations if any.
func (sc *StatsCollector) PrintSummaryAndCheckLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
sc.PrintSummary()
return sc.CheckMemoryLimits(hsLimitMB, tsLimitMB)
}
// Close closes the stats collector and cleans up resources.
func (sc *StatsCollector) Close() error {
sc.StopCollection()
return sc.client.Close()
}
================================================
FILE: cmd/mapresponses/main.go
================================================
package main
import (
"encoding/json"
"errors"
"fmt"
"os"
"github.com/creachadair/command"
"github.com/creachadair/flax"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/integration/integrationutil"
)
type MapConfig struct {
Directory string `flag:"directory,Directory to read map responses from"`
}
var (
mapConfig MapConfig
errDirectoryRequired = errors.New("directory is required")
)
func main() {
root := command.C{
Name: "mapresponses",
Help: "MapResponses is a tool to map and compare map responses from a directory",
Commands: []*command.C{
{
Name: "online",
Help: "",
Usage: "run [test-pattern] [flags]",
SetFlags: command.Flags(flax.MustBind, &mapConfig),
Run: runOnline,
},
command.HelpCommand(nil),
},
}
env := root.NewEnv(nil).MergeFlags(true)
command.RunOrFail(env, os.Args[1:])
}
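// Example invocation (the directory name is illustrative; see the
// *-mapresponses directories produced by the integration tests):
//
//   mapresponses online --directory control_logs/hs-head-abc123-mapresponses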
// runOnline reads map responses from a directory and prints the expected online map as JSON.
func runOnline(env *command.Env) error {
if mapConfig.Directory == "" {
return errDirectoryRequired
}
resps, err := mapper.ReadMapResponsesFromDirectory(mapConfig.Directory)
if err != nil {
return fmt.Errorf("reading map responses from directory: %w", err)
}
expected := integrationutil.BuildExpectedOnlineMap(resps)
out, err := json.MarshalIndent(expected, "", " ")
if err != nil {
return fmt.Errorf("marshaling expected online map: %w", err)
}
os.Stderr.Write(out)
os.Stderr.Write([]byte("\n"))
return nil
}
================================================
FILE: config-example.yaml
================================================
---
# headscale will look for a configuration file named `config.yaml` (or `config.json`) in the following order:
#
# - `/etc/headscale`
# - `~/.headscale`
# - current working directory
# The URL clients will connect to.
# Typically this will be a domain like:
#
# https://myheadscale.example.com:443
#
server_url: http://127.0.0.1:8080
# Address to listen to / bind to on the server
#
# For production:
# listen_addr: 0.0.0.0:8080
listen_addr: 127.0.0.1:8080
# Address to listen on for /metrics and /debug; you may want
# to keep this endpoint private to your internal network.
# Use an empty value to disable the metrics listener.
metrics_listen_addr: 127.0.0.1:9090
# Address to listen for gRPC.
# gRPC is used for controlling a headscale server
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
#
# For production:
# grpc_listen_addr: 0.0.0.0:50443
grpc_listen_addr: 127.0.0.1:50443
# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
# be unencrypted. Only enable if you know what you
# are doing.
grpc_allow_insecure: false
# The Noise section includes specific configuration for the
# TS2021 Noise protocol
noise:
# The Noise private key is used to encrypt the traffic between headscale and
# Tailscale clients when using the new Noise-based protocol. A missing key
# will be automatically generated.
private_key_path: /var/lib/headscale/noise_private.key
# List of IP prefixes to allocate tailnet addresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
#
# WARNING: These prefixes MUST be subsets of the standard Tailscale ranges:
# - IPv4: 100.64.0.0/10 (CGNAT range)
# - IPv6: fd7a:115c:a1e0::/48 (Tailscale ULA range)
#
# Using a SUBSET of these ranges is supported and useful if you want to
# limit IP allocation to a smaller block (e.g., 100.64.0.0/24).
#
# Using ranges OUTSIDE of CGNAT/ULA is NOT supported and will cause
# undefined behaviour. The Tailscale client has hard-coded assumptions
# about these ranges and will break in subtle, hard-to-debug ways.
#
# See:
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
prefixes:
v4: 100.64.0.0/10
v6: fd7a:115c:a1e0::/48
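# For example, to limit IPv4 allocation to a /24 inside the CGNAT range:
#
# prefixes:
#   v4: 100.64.0.0/24
#   v6: fd7a:115c:a1e0::/48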
# Strategy used for allocation of IPs to nodes, available options:
# - sequential (default): assigns the next free IP from the previous given
# IP. A best-effort approach is used and Headscale might leave holes in the
# IP range or fill up existing holes in the IP range.
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
allocation: sequential
# DERP is a relay system that Tailscale uses when a direct
# connection cannot be established.
# https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp
#
# headscale needs a list of DERP servers that can be presented
# to the clients.
derp:
server:
# If enabled, runs the embedded DERP server and merges it into the rest of the DERP config.
# The Headscale server_url defined above MUST use https, as DERP requires TLS to be in place.
enabled: false
# Region ID to use for the embedded DERP server.
# The local DERP prevails if the region ID collides with other region ID coming from
# the regular DERP config.
region_id: 999
# Region code and name are displayed in the Tailscale UI to identify a DERP region
region_code: "headscale"
region_name: "Headscale Embedded DERP"
# Only allow clients associated with this server access
verify_clients: true
# Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
# When the embedded DERP server is enabled stun_listen_addr MUST be defined.
#
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
stun_listen_addr: "0.0.0.0:3478"
# Private key used to encrypt the traffic between headscale DERP and
# Tailscale clients. A missing key will be automatically generated.
private_key_path: /var/lib/headscale/derp_server_private.key
# This flag controls whether the DERP map entry for the embedded DERP server is written automatically.
# Setting it to false lets you create your own DERP map entry using a locally available file via the DERP.paths parameter.
# If you enable the DERP server and set this to false, you must add the DERP server to the DERP map using DERP.paths.
automatically_add_embedded_derp_region: true
# For better connection stability (especially when using an exit node while DNS is not working),
# you can optionally add the server's public IPv4 and IPv6 addresses to the DERP map:
ipv4: 198.51.100.1
ipv6: 2001:db8::1
# List of externally available DERP maps encoded in JSON
urls:
- https://controlplane.tailscale.com/derpmap/default
# Locally available DERP map files encoded in YAML
#
# This option is mostly interesting for people hosting
# their own DERP servers:
# https://tailscale.com/kb/1118/custom-derp-servers/
#
# paths:
# - /etc/headscale/derp-example.yaml
paths: []
# If enabled, a worker will be set up to periodically
# refresh the given sources and update the derpmap.
auto_update_enabled: true
# How often should we check for DERP updates?
update_frequency: 3h
# Disables the automatic check for headscale updates on startup
disable_check_updates: false
# Time before an inactive ephemeral node is deleted.
ephemeral_node_inactivity_timeout: 30m
database:
# Database type. Available options: sqlite, postgres
# Please note that using Postgres is highly discouraged as it is only supported for legacy reasons.
# All new development, testing and optimisations are done with SQLite in mind.
type: sqlite
# Enable debug mode. This setting requires the log.level to be set to "debug" or "trace".
debug: false
# GORM configuration settings.
gorm:
# Enable prepared statements.
prepare_stmt: true
# Enable parameterized queries.
parameterized_queries: true
# Skip logging "record not found" errors.
skip_err_record_not_found: true
# Threshold for slow queries in milliseconds.
slow_threshold: 1000
# SQLite config
sqlite:
path: /var/lib/headscale/db.sqlite
# Enable WAL mode for SQLite. This is recommended for production environments.
# https://www.sqlite.org/wal.html
write_ahead_log: true
# Maximum number of WAL file frames before the WAL file is automatically checkpointed.
# https://www.sqlite.org/c3ref/wal_autocheckpoint.html
# Set to 0 to disable automatic checkpointing.
wal_autocheckpoint: 1000
# # Postgres config
# Please note that using Postgres is highly discouraged as it is only supported for legacy reasons.
# See database.type for more information.
# postgres:
# # If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
# host: localhost
# port: 5432
# name: headscale
# user: foo
# pass: bar
# max_open_conns: 10
# max_idle_conns: 10
# conn_max_idle_time_secs: 3600
# # If an 'sslmode' other than 'require' (true) or 'disable' (false) is needed, set the 'sslmode' you need
# # in the 'ssl' field. Refer to https://www.postgresql.org/docs/current/libpq-ssl.html, Table 34.1.
# ssl: false
### TLS configuration
#
## Let's encrypt / ACME
#
# headscale supports automatically requesting and setting up
# TLS for a domain with Let's Encrypt.
#
# URL to ACME directory
acme_url: https://acme-v02.api.letsencrypt.org/directory
# Email to register with ACME provider
acme_email: ""
# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""
# Path to store certificates and metadata needed by
# letsencrypt
# For production:
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
# See: docs/ref/tls.md for more information
tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on:
# :http = port 80
tls_letsencrypt_listen: ":http"
## Use already defined certificates:
tls_cert_path: ""
tls_key_path: ""
log:
# Valid log levels: panic, fatal, error, warn, info, debug, trace
level: info
# Output formatting for logs: text or json
format: text
## Policy
# headscale supports Tailscale's ACL policies.
# Please have a look to their KB to better
# understand the concepts: https://tailscale.com/kb/1018/acls/
policy:
# The mode can be "file" or "database" that defines
# where the ACL policies are stored and read from.
mode: file
# If the mode is set to "file", the path to a
# HuJSON file containing ACL policies.
path: ""
## DNS
#
# headscale supports Tailscale's DNS configuration and MagicDNS.
# Please have a look to their KB to better understand the concepts:
#
# - https://tailscale.com/kb/1054/dns/
# - https://tailscale.com/kb/1081/magicdns/
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
#
# Please note that for the DNS configuration to have any effect,
# clients must have the `--accept-dns=true` option enabled, which is the
# default for the Tailscale client.
#
# Setting _any_ of the DNS configuration options below together with
# `--accept-dns=true` on the clients will integrate with the client's DNS
# manager or overwrite /etc/resolv.conf.
# https://tailscale.com/kb/1235/resolv-conf
#
# If you want to stop Headscale from managing the DNS configuration,
# all the fields under `dns` should be set to empty values.
dns:
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
magic_dns: true
# Defines the base domain to create the hostnames for MagicDNS.
# This domain _must_ be different from the server_url domain.
# `base_domain` must be a FQDN, without the trailing dot.
# The FQDN of the hosts will be
# `hostname.base_domain` (e.g., _myhost.example.com_).
base_domain: example.com
# Whether to use the local DNS settings of a node or override the local DNS
# settings (default) and force the use of Headscale's DNS configuration.
override_local_dns: true
# List of DNS servers to expose to clients.
nameservers:
global:
- 1.1.1.1
- 1.0.0.1
- 2606:4700:4700::1111
- 2606:4700:4700::1001
# NextDNS (see https://tailscale.com/kb/1218/nextdns/).
# "abc123" is example NextDNS ID, replace with yours.
# - https://dns.nextdns.io/abc123
# Split DNS (see https://tailscale.com/kb/1054/dns/),
# a map of domains and which DNS server to use for each.
split: {}
# foo.bar.com:
# - 1.1.1.1
# darp.headscale.net:
# - 1.1.1.1
# - 8.8.8.8
# Set custom DNS search domains. With MagicDNS enabled,
# your tailnet base_domain is always the first search domain.
search_domains: []
# Extra DNS records
# so far only A and AAAA records are supported (on the tailscale side)
# See: docs/ref/dns.md
extra_records: []
# - name: "grafana.myvpn.example.com"
# type: "A"
# value: "100.64.0.3"
#
# # you can also put it in one line
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
#
# Alternatively, extra DNS records can be loaded from a JSON file.
# Headscale processes this file on each change.
# extra_records_path: /var/lib/headscale/extra-records.json
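# Such a file contains the same record shape as a JSON array, e.g.:
# [{"name": "grafana.myvpn.example.com", "type": "A", "value": "100.64.0.3"}]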
# Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like:
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"
# OpenID Connect
# oidc:
#   # Block startup until the identity provider is available and healthy.
#   only_start_if_oidc_is_available: true
#
#   # OpenID Connect Issuer URL from the identity provider
#   issuer: "https://your-oidc.issuer.com/path"
#
#   # Client ID from the identity provider
#   client_id: "your-oidc-client-id"
#
#   # Client secret generated by the identity provider
#   # Note: client_secret and client_secret_path are mutually exclusive.
#   client_secret: "your-oidc-client-secret"
#   # Alternatively, set `client_secret_path` to read the secret from the file.
#   # It resolves environment variables, making integration to systemd's
#   # `LoadCredential` straightforward:
#   client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
#
#   # The amount of time a node is authenticated with OpenID until it expires
#   # and needs to reauthenticate.
#   # Setting the value to "0" will mean no expiry.
#   expiry: 180d
#
#   # Use the expiry from the token received from OpenID when the user logged
#   # in. This will typically lead to frequent need to reauthenticate and should
#   # only be enabled if you know what you are doing.
#   # Note: enabling this will cause `oidc.expiry` to be ignored.
#   use_expiry_from_token: false
#
#   # The OIDC scopes to use, defaults to "openid", "profile" and "email".
#   # Custom scopes can be configured as needed, be sure to always include the
#   # required "openid" scope.
#   scope: ["openid", "profile", "email"]
#
#   # Only verified email addresses are synchronized to the user profile by
#   # default. Unverified emails may be allowed in case an identity provider
#   # does not send the "email_verified: true" claim or email verification is
#   # not required.
#   email_verified_required: true
#
#   # Provide custom key/value pairs which get sent to the identity provider's
#   # authorization endpoint.
#   extra_params:
#     domain_hint: example.com
#
#   # Only accept users whose email domain is part of the allowed_domains list.
#   allowed_domains:
#     - example.com
#
#   # Only accept users whose email address is part of the allowed_users list.
#   allowed_users:
#     - alice@example.com
#
#   # Only accept users which are members of at least one group in the
#   # allowed_groups list.
#   allowed_groups:
#     - /headscale
#
#   # Optional: PKCE (Proof Key for Code Exchange) configuration
#   # PKCE adds an additional layer of security to the OAuth 2.0 authorization code flow
#   # by preventing authorization code interception attacks.
#   # See https://datatracker.ietf.org/doc/html/rfc7636
#   pkce:
#     # Enable or disable PKCE support (default: false)
#     enabled: false
#
#     # PKCE method to use:
#     # - plain: Use plain code verifier
#     # - S256: Use SHA256 hashed code verifier (default, recommended)
#     method: S256
# Logtail configuration
# Logtail is Tailscale's logging and auditing infrastructure. It allows the
# control panel to instruct tailscale nodes to log their activity to a remote
# server. To disable logging on the client side, please refer to:
# https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging
logtail:
  # Enable logtail for tailscale nodes of this Headscale instance.
  # As there is currently no support for overriding the log server in Headscale, this is
  # disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
  enabled: false

# Enabling this option makes devices prefer a random port for WireGuard traffic over the
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false

# Taildrop configuration
# Taildrop is the file sharing feature of Tailscale, allowing nodes to send files to each other.
# https://tailscale.com/kb/1106/taildrop/
taildrop:
  # Enable or disable Taildrop for all nodes.
  # When enabled, nodes can send files to other nodes owned by the same user.
  # Tagged devices and cross-user transfers are not permitted by Tailscale clients.
  enabled: true

# Advanced performance tuning parameters.
# The defaults are carefully chosen and should rarely need adjustment.
# Only modify these if you have identified a specific performance issue.
#
# tuning:
#   # NodeStore write batching configuration.
#   # The NodeStore batches write operations before rebuilding peer relationships,
#   # which is computationally expensive. Batching reduces rebuild frequency.
#   #
#   node_store_batch_size: 100
#   node_store_batch_timeout: 500ms
================================================
FILE: derp-example.yaml
================================================
# If you plan to use headscale, please deploy your own DERP infrastructure: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
  1: null # Disable DERP region with ID 1
  900:
    regionid: 900
    regioncode: custom
    regionname: My Region
    nodes:
      - name: 900a
        regionid: 900
        hostname: myderp.example.com
        ipv4: 198.51.100.1
        ipv6: 2001:db8::1
        stunport: 0
        stunonly: false
        derpport: 0
================================================
FILE: docs/about/clients.md
================================================
# Client and operating system support
We aim to support the [**last 10 releases** of the Tailscale client](https://tailscale.com/changelog#client) on all
provided operating systems and platforms. Some platforms might require additional configuration to connect with
headscale.
| OS | Supports headscale |
| ------- | ----------------------------------------------------------------------------------------------------- |
| Linux | Yes |
| OpenBSD | Yes |
| FreeBSD | Yes |
| Windows | Yes (see [docs](../usage/connect/windows.md) and `/windows` on your headscale for more information) |
| Android | Yes (see [docs](../usage/connect/android.md) for more information) |
| macOS | Yes (see [docs](../usage/connect/apple.md#macos) and `/apple` on your headscale for more information) |
| iOS | Yes (see [docs](../usage/connect/apple.md#ios) and `/apple` on your headscale for more information) |
| tvOS | Yes (see [docs](../usage/connect/apple.md#tvos) and `/apple` on your headscale for more information) |
================================================
FILE: docs/about/contributing.md
================================================
{%
include-markdown "../../CONTRIBUTING.md"
%}
================================================
FILE: docs/about/faq.md
================================================
# Frequently Asked Questions
## What is the design goal of headscale?
Headscale aims to implement a self-hosted, open source alternative to the
[Tailscale](https://tailscale.com/) control server. Headscale's goal is to
provide self-hosters and hobbyists with an open-source server they can use for
their projects and labs. It implements a narrow scope, a _single_ Tailscale
network (tailnet), suitable for personal use or a small open-source
organisation.
## How can I contribute?
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
Please see [Contributing](contributing.md) for more information.
## Why is 'acknowledged contribution' the chosen model?
Both maintainers have full-time jobs and families, and we want to avoid burnout. We also want to avoid frustration from contributors when their PRs are not accepted.
We are more than happy to exchange emails, or to have dedicated calls before a PR is submitted.
## When/Why is Feature X going to be implemented?
We use [GitHub Milestones to plan for upcoming Headscale releases](https://github.com/juanfont/headscale/milestones).
Have a look at [our current plan](https://github.com/juanfont/headscale/milestones) to get an idea when a specific
feature is about to be implemented. The release plan is subject to change at any time.
If you're interested in contributing, please post a feature request about it. Please be aware that there are a number of
reasons why we might not accept specific contributions:
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves.
- You are not sending unit and integration tests with it.
## Do you support Y method of deploying headscale?
We currently support deploying headscale using our binaries and the DEB packages. Visit our [installation guide using
official releases](../setup/install/official.md) for more information.
In addition to that, you may use packages provided by the community or from distributions. Learn more in the
[installation guide using community packages](../setup/install/community.md).
For convenience, we also [build container images with headscale](../setup/install/container.md). But **please be aware that
we don't officially support deploying headscale using Docker**. On our [Discord server](https://discord.gg/c84AZQhmpx)
we have a "docker-issues" channel where you can ask for Docker-specific help to the community.
## What is the recommended update path? Can I skip multiple versions while updating?
Please follow the steps outlined in the [upgrade guide](../setup/upgrade.md) to update your existing Headscale
installation. It's required to update from one stable version to the next (e.g. 0.26.0 → 0.27.1 → 0.28.0) without
skipping minor versions in between. You should always pick the latest available patch release.
Be sure to check the [changelog](https://github.com/juanfont/headscale/blob/main/CHANGELOG.md) for version specific
upgrade instructions and breaking changes.
## Scaling / How many clients does Headscale support?
It depends. As often stated, Headscale is not enterprise software and our focus
is homelabbers and self-hosters. Of course, we do not prevent people from using
it in a commercial/professional setting and often get questions about scaling.
Please note that performance is not part of the consideration when Headscale is
developed, as the main audience is assumed to be users with a modest number
of devices. We focus on correctness and feature parity with Tailscale
SaaS over time.
To understand if you might be able to use Headscale for your use case, I will
describe two scenarios in an effort to explain the central bottleneck of
Headscale:
1. An environment with 1000 servers
- they rarely "move" (change their endpoints)
- new nodes are added rarely
1. An environment with 80 laptops/phones (end user devices)
- nodes move often, e.g. switching from home to office
Headscale calculates a map of all nodes that need to talk to each other;
creating this "world map" requires a lot of CPU time. When an event that
requires changes to this map happens, the whole "world" is recalculated, and a
new "world map" is created for every node in the network.
This means that under certain conditions, Headscale can likely handle 100s
of devices (maybe more), if there is _little to no change_ happening in the
network. For example, in Scenario 1, the process of computing the world map is
extremely demanding due to the size of the network, but when the map has been
created and the nodes are not changing, the Headscale instance will likely
return to a very low resource usage until the next time there is an event
requiring the new map.
In the case of Scenario 2, the process of computing the world map is less
demanding due to the smaller size of the network; however, these nodes will
likely change frequently, which leads to constant resource usage.
Headscale will start to struggle when the two scenarios overlap, e.g. many nodes
with frequent changes will cause the resource usage to remain constantly high.
In the worst case scenario, the queue of nodes waiting for their map will grow
to a point where Headscale never will be able to catch up, and nodes will never
learn about the current state of the world.
We expect that the performance will improve over time as we improve the code
base, but it is not a focus. In general, we will never make the tradeoff of making
things faster at the cost of less maintainable or readable code. We are a small
team and have to optimise for maintainability.
## Which database should I use?
We recommend SQLite as the database for headscale:
- SQLite is simple to set up and easy to use
- It scales well for all of headscale's use cases
- Development and testing happens primarily on SQLite
- PostgreSQL is still supported, but is considered to be in "maintenance mode"
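For reference, a minimal sketch of the corresponding `database` section (the SQLite path is illustrative and should match your setup):

```yaml title="config.yaml"
database:
  type: sqlite
  sqlite:
    # Illustrative path; choose a location that is covered by your backups.
    path: /var/lib/headscale/db.sqlite
```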
The headscale project itself does not provide a tool to migrate from PostgreSQL to SQLite. Please have a look at [the
related tools documentation](../ref/integration/tools.md) for migration tooling provided by the community.
The choice of database has little to no impact on the performance of the server;
see [Scaling / How many clients does Headscale support?](#scaling-how-many-clients-does-headscale-support) to understand how Headscale spends its resources.
## Why is my reverse proxy not working with headscale?
We don't know. We don't use reverse proxies with headscale ourselves, so we don't have any experience with them. We have
[community documentation](../ref/integration/reverse-proxy.md) on how to configure various reverse proxies, and a
dedicated "reverse-proxy-issues" channel on our [Discord server](https://discord.gg/c84AZQhmpx) where you can ask for
help to the community.
## Can I use headscale and tailscale on the same machine?
Running headscale on a machine that is also in the tailnet can cause problems with subnet routers, traffic relay nodes, and MagicDNS. It might work, but it is not supported.
## Why do two nodes see each other in their status, even if an ACL allows traffic only in one direction?
A frequent use case is to allow traffic only from one node to another, but not the other way around. For example, the
workstation of an administrator should be able to connect to all nodes but the nodes themselves shouldn't be able to
connect back to the administrator's node. Why do all nodes see the administrator's workstation in the output of
`tailscale status`?
This is essentially how Tailscale works. If traffic is allowed to flow in one direction, then both nodes see each other
in their output of `tailscale status`. Traffic is still filtered according to the ACL, with the exception of
`tailscale ping` which is always allowed in either direction.
## My policy is stored in the database and Headscale refuses to start due to an invalid policy. How can I recover?
Headscale checks if the policy is valid during startup and refuses to start if it detects an error. The error message
indicates which part of the policy is invalid. Follow these steps to fix your policy:
- Dump the policy to a file: `headscale policy get --bypass-grpc-and-access-database-directly > policy.json`
- Edit and fixup `policy.json`. Use the command `headscale policy check --file policy.json` to validate the policy.
- Load the modified policy: `headscale policy set --bypass-grpc-and-access-database-directly --file policy.json`
- Start Headscale as usual.
!!! warning "Full server configuration required"
The above commands to get/set the policy require a complete server configuration file including database settings. A
minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use
`headscale -c /path/to/config.yaml` to specify the path to an alternative configuration file.
## How can I migrate back to the recommended IP prefixes?
Tailscale only supports the IP prefixes `100.64.0.0/10` and `fd7a:115c:a1e0::/48` or smaller subnets thereof. The
following steps can be used to migrate from unsupported IP prefixes back to the supported and recommended ones.
!!! warning "Backup and test in a demo environment required"
The commands below update the IP addresses of all nodes in your tailnet and this might have a severe impact on your
specific environment. At a minimum:
- [Create a backup of your database](../setup/upgrade.md#backup)
- Test the commands below in a representative demo environment. This allows you to catch subsequent connectivity errors
early and see how the tailnet behaves in your specific environment.
- Stop Headscale
- Restore the default prefixes in the [configuration file](../ref/configuration.md):
```yaml
prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48
```
- Update the `nodes.ipv4` and `nodes.ipv6` columns in the database and assign each node a unique IPv4 and IPv6 address.
The following SQL statement assigns IP addresses based on the node ID:
```sql
UPDATE nodes
SET ipv4=concat('100.64.', id/256, '.', id%256),
ipv6=concat('fd7a:115c:a1e0::', format('%x', id));
```
- Update the [policy](../ref/acls.md) to reflect the IP address changes (if any)
- Start Headscale
Nodes should reconnect within a few seconds and pick up their newly assigned IP addresses.
## How can I avoid sending logs to Tailscale Inc?
A Tailscale client [collects logs about its operation and connection attempts with other
clients](https://tailscale.com/kb/1011/log-mesh-traffic#client-logs) and sends them to a central log service operated by
Tailscale Inc.
Headscale, by default, instructs clients to disable log submission to the central log service. This configuration is
applied by a client once it has successfully connected to Headscale. See the configuration option `logtail.enabled` in the
[configuration file](../ref/configuration.md) for details.
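For reference, this is the relevant server-side setting, shown with its default value:

```yaml title="config.yaml"
logtail:
  # Keep disabled to avoid instructing clients to send logs to Tailscale Inc.
  enabled: false
```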
Alternatively, logging can also be disabled on the client side. This is independent of Headscale and opting out of
client logging disables log submission early during client startup. The configuration is operating system specific and
is usually achieved by setting the environment variable `TS_NO_LOGS_NO_SUPPORT=true` or by passing the flag
`--no-logs-no-support` to `tailscaled`.
================================================
FILE: docs/about/features.md
================================================
# Features
Headscale aims to implement a self-hosted, open source alternative to the Tailscale control server. Headscale's goal is
to provide self-hosters and hobbyists with an open-source server they can use for their projects and labs. This page
provides an overview of Headscale's features and compatibility with the Tailscale control server:
- [x] Full "base" support of Tailscale's features
- [x] [Node registration](../ref/registration.md)
- [x] [Web authentication](../ref/registration.md#web-authentication)
- [x] [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
- [x] [DNS](../ref/dns.md)
- [x] [MagicDNS](https://tailscale.com/kb/1081/magicdns)
- [x] [Global and restricted nameservers (split DNS)](https://tailscale.com/kb/1054/dns#nameservers)
- [x] [search domains](https://tailscale.com/kb/1054/dns#search-domains)
- [x] [Extra DNS records (Headscale only)](../ref/dns.md#setting-extra-dns-records)
- [x] [Taildrop (File Sharing)](https://tailscale.com/kb/1106/taildrop)
- [x] [Tags](../ref/tags.md)
- [x] [Routes](../ref/routes.md)
- [x] [Subnet routers](../ref/routes.md#subnet-router)
- [x] [Exit nodes](../ref/routes.md#exit-node)
- [x] Dual stack (IPv4 and IPv6)
- [x] Ephemeral nodes
- [x] Embedded [DERP server](../ref/derp.md)
- [x] Access control lists ([GitHub label "policy"](https://github.com/juanfont/headscale/labels/policy%20%F0%9F%93%9D))
- [x] ACL management via API
- [x] Some [Autogroups](https://tailscale.com/kb/1396/targets#autogroups), currently: `autogroup:internet`,
`autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`, `autogroup:self`
- [x] [Auto approvers](https://tailscale.com/kb/1337/acl-syntax#auto-approvers) for [subnet
routers](../ref/routes.md#automatically-approve-routes-of-a-subnet-router) and [exit
nodes](../ref/routes.md#automatically-approve-an-exit-node-with-auto-approvers)
- [x] [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh)
- [x] [Node registration using Single-Sign-On (OpenID Connect)](../ref/oidc.md) ([GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC))
- [x] Basic registration
- [x] Update user profile from identity provider
- [ ] OIDC groups cannot be used in ACLs
- [ ] [Funnel](https://tailscale.com/kb/1223/funnel) ([#1040](https://github.com/juanfont/headscale/issues/1040))
- [ ] [Serve](https://tailscale.com/kb/1312/serve) ([#1921](https://github.com/juanfont/headscale/issues/1921))
- [ ] [Network flow logs](https://tailscale.com/kb/1219/network-flow-logs) ([#1687](https://github.com/juanfont/headscale/issues/1687))
================================================
FILE: docs/about/help.md
================================================
# Getting help
Join our [Discord server](https://discord.gg/c84AZQhmpx) for announcements and community support.
Please report bugs via [GitHub issues](https://github.com/juanfont/headscale/issues).
================================================
FILE: docs/about/releases.md
================================================
# Releases
All headscale releases are available on the [GitHub release page](https://github.com/juanfont/headscale/releases). Those
releases are available as binaries for various platforms and architectures, packages for Debian based systems and source
code archives. Container images are available on [Docker Hub](https://hub.docker.com/r/headscale/headscale) and
[GitHub Container Registry](https://github.com/juanfont/headscale/pkgs/container/headscale).
An Atom/RSS feed of headscale releases is available [here](https://github.com/juanfont/headscale/releases.atom).
See the "announcements" channel on our [Discord server](https://discord.gg/c84AZQhmpx) for news about headscale.
================================================
FILE: docs/about/sponsor.md
================================================
# Sponsor
If you like to support the development of headscale, please consider a donation via
[ko-fi.com/headscale](https://ko-fi.com/headscale). Thank you!
================================================
FILE: docs/index.md
================================================
---
hide:
- navigation
- toc
---
# Welcome to headscale
Headscale is an open source, self-hosted implementation of the Tailscale control server.
This page contains the documentation for the latest version of headscale. Please also check our [FAQ](./about/faq.md).
Join our [Discord server](https://discord.gg/c84AZQhmpx) for a chat and community support.
## Design goal
Headscale aims to implement a self-hosted, open source alternative to the
[Tailscale](https://tailscale.com/) control server. Headscale's goal is to
provide self-hosters and hobbyists with an open-source server they can use for
their projects and labs. It implements a narrow scope, a _single_ Tailscale
network (tailnet), suitable for personal use or a small open-source
organisation.
## Supporting headscale
Please see [Sponsor](about/sponsor.md) for more information.
## Contributing
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
Please see [Contributing](about/contributing.md) for more information.
## About
Headscale is maintained by [Kristoffer Dalby](https://kradalby.no/) and [Juan Font](https://font.eu).
================================================
FILE: docs/ref/acls.md
================================================
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, groups are defined in terms of headscale users (which are the
equivalent of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/ for further information.
When ACLs are in use, user boundaries no longer apply. All machines, whichever
the user, have the ability to communicate with other hosts as long as the ACLs
permit this exchange.
## ACL Setup
To enable and configure ACLs in Headscale, you need to specify the path to your ACL policy file in the `policy.path` key in `config.yaml`.
Your ACL policy file must be formatted using [huJSON](https://github.com/tailscale/hujson).
Info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).
Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.
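As a sketch, the relevant section of the configuration file might look like this (the policy file path is illustrative):

```yaml title="config.yaml"
policy:
  # Absolute path to the ACL policy file in huJSON format.
  path: /etc/headscale/acl.json
```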
## Simple Examples
- [**Allow All**](https://tailscale.com/kb/1192/acl-samples#allow-all-default-acl): If you define an ACL file but completely omit the `"acls"` field from its content, Headscale will default to an "allow all" policy. This means all devices connected to your tailnet will be able to communicate freely with each other.
```json
{}
```
- [**Deny All**](https://tailscale.com/kb/1192/acl-samples#deny-all): To prevent all communication within your tailnet, you can include an empty array for the `"acls"` field in your policy file.
```json
{
"acls": []
}
```
## Complex Example
Let's build a more complex example use case for a small business (it may be the place where
ACLs are the most useful).
We have a small company with a boss, an admin, two developers and an intern.
The boss should have access to all servers but not to the users' hosts. The admin
should also have access to all hosts, except that their permissions should be
limited to maintaining the hosts (for example purposes). The developers can do
anything they want on dev hosts but can only view production hosts. The intern
can only interact with the development servers.
There's an additional server that acts as a router, connecting the VPN users
to an internal network `10.20.0.0/16`. Developers must have access to those
internal resources.
Each user has at least one device connected to the network, and we have some
servers:
- database.prod
- database.dev
- app-server1.prod
- app-server1.dev
- billing.internal
- router.internal

When [registering the servers](../usage/getting-started.md#register-a-node) we
will need to add the flag `--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user
that is registering the server should be allowed to do it. Since anyone can add
tags to a server they can register, the check of the tags is done on the headscale
server and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.
Here are the ACLs to implement the same permissions as above:
```json title="acl.json"
{
  // groups are collections of users having a common scope. A user can be in multiple groups
  // groups cannot be composed of groups
  "groups": {
    "group:boss": ["boss@"],
    "group:dev": ["dev1@", "dev2@"],
    "group:admin": ["admin1@"],
    "group:intern": ["intern1@"]
  },
  // tagOwners in tailscale is an association between a TAG and the people allowed to set this TAG on a server.
  // This is documented [here](https://tailscale.com/kb/1068/acl-tags#defining-a-tag)
  // and explained [here](https://tailscale.com/blog/rbac-like-it-was-meant-to-be/)
  "tagOwners": {
    // the administrators can add servers in production
    "tag:prod-databases": ["group:admin"],
    "tag:prod-app-servers": ["group:admin"],
    // the boss can tag any server as internal
    "tag:internal": ["group:boss"],
    // devs, as well as admins, can add servers for dev purposes
    "tag:dev-databases": ["group:admin", "group:dev"],
    "tag:dev-app-servers": ["group:admin", "group:dev"]
    // interns cannot add servers
  },
  // hosts should be defined using their IP addresses and a subnet mask.
  // to define a single host, use a /32 mask. You cannot use DNS entries here,
  // as they're prone to be hijacked by replacing their IP addresses.
  // see https://github.com/tailscale/tailscale/issues/3800 for more information.
  "hosts": {
    "postgresql.internal": "10.20.0.2/32",
    "webservers.internal": "10.20.10.1/29"
  },
  "acls": [
    // the boss has access to all servers
    {
      "action": "accept",
      "src": ["group:boss"],
      "dst": [
        "tag:prod-databases:*",
        "tag:prod-app-servers:*",
        "tag:internal:*",
        "tag:dev-databases:*",
        "tag:dev-app-servers:*"
      ]
    },
    // admins only have access to the administrative ports of the servers, on tcp/22
    {
      "action": "accept",
      "src": ["group:admin"],
      "proto": "tcp",
      "dst": [
        "tag:prod-databases:22",
        "tag:prod-app-servers:22",
        "tag:internal:22",
        "tag:dev-databases:22",
        "tag:dev-app-servers:22"
      ]
    },
    // we also allow admins to ping the servers
    {
      "action": "accept",
      "src": ["group:admin"],
      "proto": "icmp",
      "dst": [
        "tag:prod-databases:*",
        "tag:prod-app-servers:*",
        "tag:internal:*",
        "tag:dev-databases:*",
        "tag:dev-app-servers:*"
      ]
    },
    // developers have access to database servers and application servers on all ports
    // they can only view the application servers in prod and have no access to database servers in production
    {
      "action": "accept",
      "src": ["group:dev"],
      "dst": [
        "tag:dev-databases:*",
        "tag:dev-app-servers:*",
        "tag:prod-app-servers:80,443"
      ]
    },
    // developers have access to the internal network through the router.
    // the internal network is composed of HTTPS endpoints and Postgresql
    // database servers.
    {
      "action": "accept",
      "src": ["group:dev"],
      "dst": ["10.20.0.0/16:443,5432"]
    },
    // servers should be able to talk to the database on tcp/5432. The database should not be able to initiate connections to
    // application servers
    {
      "action": "accept",
      "src": ["tag:dev-app-servers"],
      "proto": "tcp",
      "dst": ["tag:dev-databases:5432"]
    },
    {
      "action": "accept",
      "src": ["tag:prod-app-servers"],
      "dst": ["tag:prod-databases:5432"]
    },
    // interns have access to dev-app-servers only in reading mode
    {
      "action": "accept",
      "src": ["group:intern"],
      "dst": ["tag:dev-app-servers:80,443"]
    },
    // Allow users to access their own devices using autogroup:self (see below for more details about performance impact)
    {
      "action": "accept",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self:*"]
    }
  ]
}
```
## Autogroups
Headscale supports several autogroups that automatically include users, destinations, or devices with specific properties. Autogroups provide a convenient way to write ACL rules without manually listing individual users or devices.
### `autogroup:internet`
Allows access to the internet through [exit nodes](routes.md#exit-node). Can only be used in ACL destinations.
```json
{
"action": "accept",
"src": ["group:users"],
"dst": ["autogroup:internet:*"]
}
```
### `autogroup:member`
Includes all [personal (untagged) devices](registration.md#identity-model).
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["tag:prod-app-servers:80,443"]
}
```
### `autogroup:tagged`
Includes all devices that [have at least one tag](registration.md#identity-model).
```json
{
"action": "accept",
"src": ["autogroup:tagged"],
"dst": ["tag:monitoring:9090"]
}
```
### `autogroup:self`
!!! warning "The current implementation of `autogroup:self` is inefficient"
Includes devices where the same user is authenticated on both the source and destination. Does not include tagged devices. Can only be used in ACL destinations.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
```
*Using `autogroup:self` may cause performance degradation on the Headscale coordinator server in large deployments, as filter rules must be compiled per-node rather than globally and the current implementation is not very efficient.*
If you experience performance issues, consider using more specific ACL rules or limiting the use of `autogroup:self`.
```json
[
  // The following rules allow internal users to communicate with their
  // own nodes in case autogroup:self is causing performance issues.
  { "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
  { "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
  { "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
  { "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
  { "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
]
```
### `autogroup:nonroot`
Used in Tailscale SSH rules to allow access to any user except root. Can only be used in the `users` field of SSH rules.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self"],
"users": ["autogroup:nonroot"]
}
```
================================================
FILE: docs/ref/api.md
================================================
# API
Headscale provides a [HTTP REST API](#rest-api) and a [gRPC interface](#grpc) which may be used to integrate a [web
interface](integration/web-ui.md), [remote control Headscale](#setup-remote-control) or provide a base for custom
integration and tooling.
Both interfaces require a valid API key before use. To create an API key, log into your Headscale server and generate
one with the default expiration of 90 days:
```shell
headscale apikeys create
```
Copy the output of the command and save it for later. Please note that you cannot retrieve an API key again. If the API
key is lost, expire the old one and create a new one.
To list the API keys currently associated with the server:
```shell
headscale apikeys list
```
and to expire an API key:
```shell
headscale apikeys expire --prefix <PREFIX>
```
## REST API
- API endpoint: `/api/v1`, e.g. `https://headscale.example.com/api/v1`
- Documentation: `/swagger`, e.g. `https://headscale.example.com/swagger`
- Headscale Version: `/version`, e.g. `https://headscale.example.com/version`
- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer <API_KEY>` header.
Start by [creating an API key](#api) and test it with the examples below. Read the API documentation provided by your
Headscale server at `/swagger` for details.
=== "Get details for all users"
```console
curl -H "Authorization: Bearer " \
https://headscale.example.com/api/v1/user
```
=== "Get details for user 'bob'"
```console
curl -H "Authorization: Bearer " \
https://headscale.example.com/api/v1/user?name=bob
```
=== "Register a node"
```console
curl -H "Authorization: Bearer " \
--json '{"user": "", "authId": "AUTH_ID>"}' \
https://headscale.example.com/api/v1/auth/register
```
## gRPC
The gRPC interface can be used to control a Headscale instance from a remote machine with the `headscale` binary.
### Prerequisite
- A workstation to run `headscale` (any supported platform, e.g. Linux).
- A Headscale server with gRPC enabled.
- Connections to the gRPC port (default: `50443`) are allowed.
- Remote access requires an encrypted connection via TLS.
- An [API key](#api) to authenticate with the Headscale server.
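On the server side, the gRPC listener is configured via the `grpc_listen_addr` and `grpc_allow_insecure` options in the [configuration file](configuration.md). A sketch that opens the port for remote access:

```yaml title="config.yaml"
# Listen on all interfaces so that remote workstations can connect.
grpc_listen_addr: 0.0.0.0:50443

# Keep TLS enabled for remote access; insecure connections are not recommended.
grpc_allow_insecure: false
```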
### Setup remote control
1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
sure to use the same version as on the server.
1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`
1. Make `headscale` executable: `chmod +x /usr/local/bin/headscale`
1. [Create an API key](#api) on the Headscale server.
1. Provide the connection parameters for the remote Headscale server either via a minimal YAML configuration file or
via environment variables:
=== "Minimal YAML configuration file"
```yaml title="config.yaml"
cli:
  address: <HEADSCALE_ADDRESS>:<PORT>
  api_key: <API_KEY>
```
=== "Environment variables"
```shell
export HEADSCALE_CLI_ADDRESS="<HEADSCALE_ADDRESS>:<PORT>"
export HEADSCALE_CLI_API_KEY="<API_KEY>"
```
This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
connecting to the local instance.
1. Test the connection by listing all nodes:
```shell
headscale nodes list
```
You should now be able to see a list of your nodes and control the Headscale
server from your workstation.
### Behind a proxy
It's possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as Headscale.
While this is _not a supported_ feature, an example on how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).
### Troubleshooting
- Make sure you have the _same_ Headscale version on your server and workstation.
- Ensure that connections to the gRPC port are allowed.
- Verify that your TLS certificate is valid and trusted.
- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either:
- Add your self-signed certificate to the trust store of your OS _or_
- Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting
`HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend disabling certificate validation.
================================================
FILE: docs/ref/configuration.md
================================================
# Configuration
- Headscale loads its configuration from a YAML file
- It searches for `config.yaml` in the following paths:
- `/etc/headscale`
- `$HOME/.headscale`
- the current working directory
- To load the configuration from a different path, use:
- the command line flag `-c`, `--config`
- the environment variable `HEADSCALE_CONFIG`
- Validate the configuration file with: `headscale configtest`
!!! example "Get the [example configuration from the GitHub repository](https://github.com/juanfont/headscale/blob/main/config-example.yaml)"
Always select the [same GitHub tag](https://github.com/juanfont/headscale/tags) as the released version you use to
ensure you have the correct example configuration. The `main` branch might contain unreleased changes.
=== "View on GitHub"
- Development version: https://github.com/juanfont/headscale/blob/main/config-example.yaml
- Version {{ headscale.version }}: https://github.com/juanfont/headscale/blob/v{{ headscale.version }}/config-example.yaml
=== "Download with `wget`"
```shell
# Development version
wget -O config.yaml https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml
# Version {{ headscale.version }}
wget -O config.yaml https://raw.githubusercontent.com/juanfont/headscale/v{{ headscale.version }}/config-example.yaml
```
=== "Download with `curl`"
```shell
# Development version
curl -o config.yaml https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml
# Version {{ headscale.version }}
curl -o config.yaml https://raw.githubusercontent.com/juanfont/headscale/v{{ headscale.version }}/config-example.yaml
```
================================================
FILE: docs/ref/debug.md
================================================
# Debugging and troubleshooting
Headscale and Tailscale provide debug and introspection capabilities that can be helpful when things don't work as
expected. This page explains some debugging techniques to help pinpoint problems.
Please also have a look at [Tailscale's Troubleshooting guide](https://tailscale.com/kb/1023/troubleshooting). It offers
many tips and suggestions to troubleshoot common issues.
## Tailscale
The Tailscale client itself offers many commands to introspect its state as well as the state of the network:
- [Check local network conditions](https://tailscale.com/kb/1080/cli#netcheck): `tailscale netcheck`
- [Get the client status](https://tailscale.com/kb/1080/cli#status): `tailscale status --json`
- [Get DNS status](https://tailscale.com/kb/1080/cli#dns): `tailscale dns status --all`
- Client logs: `tailscale debug daemon-logs`
- Client netmap: `tailscale debug netmap`
- Test DERP connection: `tailscale debug derp headscale`
- And many more, see: `tailscale debug --help`
Many of the commands are helpful when trying to understand differences between Headscale and Tailscale SaaS.
## Headscale
### Application logging
The log levels `debug` and `trace` can be useful to get more information from Headscale.
```yaml hl_lines="3"
log:
  # Valid log levels: panic, fatal, error, warn, info, debug, trace
  level: debug
```
### Database logging
The database debug mode logs all database queries. Enable it to see how Headscale interacts with its database. This also
requires the application log level to be set to either `debug` or `trace`.
```yaml hl_lines="3 7"
database:
  # Enable debug mode. This setting requires the log.level to be set to "debug" or "trace".
  debug: true

log:
  # Valid log levels: panic, fatal, error, warn, info, debug, trace
  level: debug
```
### Metrics and debug endpoint
Headscale provides a metrics and debug endpoint. It allows you to introspect different aspects such as:
- Information about the Go runtime, memory usage and statistics
- Connected nodes and pending registrations
- Active ACLs, filters and SSH policy
- Current DERPMap
- Prometheus metrics
!!! warning "Keep the metrics and debug endpoint private"
The listen address and port can be configured with the `metrics_listen_addr` variable in the [configuration
file](./configuration.md). By default it listens on localhost, port 9090.
Keep the metrics and debug endpoint private to your internal network and don't expose it to the Internet.
The metrics and debug interface can be disabled completely by setting `metrics_listen_addr: null` in the
[configuration file](./configuration.md).
Query metrics via <http://localhost:9090/metrics> and get an overview of available debug information via
<http://localhost:9090/debug/>. Metrics may be queried from outside localhost but the debug interface is subject to
additional protection despite listening on all interfaces.
=== "Direct access"
Access the debug interface directly on the server where Headscale is installed.
```console
curl http://localhost:9090/debug/
```
=== "SSH port forwarding"
Use SSH port forwarding to forward Headscale's metrics and debug port to your device.
```console
ssh -L 9090:localhost:9090 <USER>@<HEADSCALE_SERVER>
```
Access the debug interface on your device by opening <http://localhost:9090/debug/> in your web browser.
=== "Via debug key"
The access control of the debug interface supports the use of a debug key. Traffic is accepted if the path to a
debug key is set via the environment variable `TS_DEBUG_KEY_PATH` and the debug key is sent as the value of the
`debugkey` parameter with each request.
```console
openssl rand -hex 32 | tee debugkey.txt
export TS_DEBUG_KEY_PATH=debugkey.txt
headscale serve
```
Access the debug interface on your device by opening `http://<HEADSCALE_SERVER>:9090/debug/?debugkey=<DEBUG_KEY>` in
your web browser. The `debugkey` parameter must be sent with every request.
=== "Via debug IP address"
The debug endpoint expects traffic from localhost. A different debug IP address may be configured by setting the
`TS_ALLOW_DEBUG_IP` environment variable before starting Headscale. The debug IP address is ignored when the HTTP
header `X-Forwarded-For` is present.
```console
export TS_ALLOW_DEBUG_IP=192.168.0.10 # IP address of your device
headscale serve
```
Access the debug interface on your device by opening `http://<HEADSCALE_SERVER>:9090/debug/` in your web browser.
================================================
FILE: docs/ref/derp.md
================================================
# DERP
A [DERP (Designated Encrypted Relay for Packets) server](https://tailscale.com/kb/1232/derp-servers) is mainly used to
relay traffic between two nodes in case a direct connection can't be established. Headscale provides an embedded DERP
server to ensure seamless connectivity between nodes.
## Configuration
DERP related settings are configured within the `derp` section of the [configuration file](./configuration.md). The
following sections only use a few of the available settings, check the [example configuration](./configuration.md) for
all available configuration options.
### Enable embedded DERP
Headscale ships with an embedded DERP server which makes it easy to run your own self-hosted DERP server. The embedded
DERP server is disabled by default and needs to be enabled. In addition, you should configure the public IPv4 and public
IPv6 address of your Headscale server for improved connection stability:
```yaml title="config.yaml" hl_lines="3-5"
derp:
  server:
    enabled: true
    ipv4: 198.51.100.1
    ipv6: 2001:db8::1
```
Keep in mind that [additional ports are needed to run a DERP server](../setup/requirements.md#ports-in-use). Besides
relaying traffic, it also uses STUN (udp/3478) to help clients discover their public IP addresses and perform NAT
traversal. [Check DERP server connectivity](#check-derp-server-connectivity) to see if everything works.
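The STUN listener of the embedded DERP server can be adjusted via the `derp.server.stun_listen_addr` option; a sketch with the default port:

```yaml title="config.yaml"
derp:
  server:
    enabled: true
    # Address and port for the STUN listener (udp/3478 by default).
    stun_listen_addr: "0.0.0.0:3478"
```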
### Remove Tailscale's DERP servers
Once enabled, Headscale's embedded DERP is added to the list of free-to-use [DERP
servers](https://tailscale.com/kb/1232/derp-servers) offered by Tailscale Inc. To only use Headscale's embedded DERP
server, disable the loading of the default DERP map:
```yaml title="config.yaml" hl_lines="6"
derp:
  server:
    enabled: true
    ipv4: 198.51.100.1
    ipv6: 2001:db8::1
  urls: []
```
!!! warning "Single point of failure"
Removing Tailscale's DERP servers means that there is now just a single DERP server available for clients. This is a
single point of failure and could hamper connectivity.
[Check DERP server connectivity](#check-derp-server-connectivity) with your embedded DERP server before removing
Tailscale's DERP servers.
### Customize DERP map
The DERP map offered to clients can be customized with a [dedicated YAML-configuration
file](https://github.com/juanfont/headscale/blob/main/derp-example.yaml). This allows you to modify previously loaded DERP
maps fetched via URL or to offer your own, custom DERP servers to nodes.
=== "Remove specific DERP regions"
The free-to-use [DERP servers](https://tailscale.com/kb/1232/derp-servers) are organized into regions via a region
ID. You can explicitly disable a specific region by setting its region ID to `null`. The following sample
`derp.yaml` disables the New York DERP region (which has the region ID 1):
```yaml title="derp.yaml"
regions:
  1: null
```
Use the following configuration to serve the default DERP map (excluding New York) to nodes:
```yaml title="config.yaml" hl_lines="6 7"
derp:
  server:
    enabled: false
  urls:
    - https://controlplane.tailscale.com/derpmap/default
  paths:
    - /etc/headscale/derp.yaml
```
=== "Provide custom DERP servers"
The following sample `derp.yaml` references two custom regions (`custom-east` with ID 900 and `custom-west` with ID 901)
with one custom DERP server in each region. Each DERP server offers DERP relay via HTTPS on tcp/443, support for captive
portal checks via HTTP on tcp/80 and STUN on udp/3478. See the definitions of
[DERPMap](https://pkg.go.dev/tailscale.com/tailcfg#DERPMap),
[DERPRegion](https://pkg.go.dev/tailscale.com/tailcfg#DERPRegion) and
[DERPNode](https://pkg.go.dev/tailscale.com/tailcfg#DERPNode) for all available options.
```yaml title="derp.yaml"
regions:
  900:
    regionid: 900
    regioncode: custom-east
    regionname: My region (east)
    nodes:
      - name: 900a
        regionid: 900
        hostname: derp900a.example.com
        ipv4: 198.51.100.1
        ipv6: 2001:db8::1
        canport80: true
  901:
    regionid: 901
    regioncode: custom-west
    regionname: My Region (west)
    nodes:
      - name: 901a
        regionid: 901
        hostname: derp901a.example.com
        ipv4: 198.51.100.2
        ipv6: 2001:db8::2
        canport80: true
```
Use the following configuration to only serve the two DERP servers from the above `derp.yaml`:
```yaml title="config.yaml" hl_lines="5 6"
derp:
  server:
    enabled: false
  urls: []
  paths:
    - /etc/headscale/derp.yaml
```
Independent of the custom DERP map, you may choose to [enable the embedded DERP server and have it automatically added
to the custom DERP map](#enable-embedded-derp).
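As a sketch, such a combined setup (embedded DERP server enabled, Tailscale's DERP map removed and a custom DERP map loaded from a file) could look like this:

```yaml title="config.yaml"
derp:
  server:
    enabled: true
    ipv4: 198.51.100.1
    ipv6: 2001:db8::1
  urls: []
  paths:
    - /etc/headscale/derp.yaml
```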
### Verify clients
Access to DERP servers can be restricted to nodes that are members of your tailnet. Relay access is denied for unknown
clients.
=== "Embedded DERP"
Client verification is enabled by default.
```yaml title="config.yaml" hl_lines="3"
derp:
  server:
    verify_clients: true
```
=== "3rd-party DERP"
Tailscale's `derper` provides two parameters to configure client verification:
- Use the `-verify-client-url` parameter of the `derper` and point it towards the `/verify` endpoint of your
Headscale server (e.g. `https://headscale.example.com/verify`). The DERP server will query your Headscale instance
as soon as a client connects with it to ask whether access should be allowed or denied. Access is allowed if
Headscale knows about the connecting client and denied otherwise.
- The parameter `-verify-client-url-fail-open` controls what should happen when the DERP server can't reach the
Headscale instance. By default, it will allow access if Headscale is unreachable.
## Check DERP server connectivity
Any Tailscale client may be used to introspect the DERP map and to check for connectivity issues with DERP servers.
- Display DERP map: `tailscale debug derp-map`
- Check connectivity with the embedded DERP[^1]: `tailscale debug derp headscale`
Additional DERP-related metrics and information are available via the [metrics and debug
endpoint](./debug.md#metrics-and-debug-endpoint).
## Limitations
- The embedded DERP server can't be used for Tailscale's captive portal checks as it doesn't support the `/generate_204`
endpoint via HTTP on port tcp/80.
- There are no speed or throughput optimisations, the main purpose is to assist in node connectivity.
[^1]: This assumes that the default region code of the [configuration file](./configuration.md) is used.
================================================
FILE: docs/ref/dns.md
================================================
# DNS
Headscale supports [most DNS features](../about/features.md) from Tailscale. DNS related settings can be configured
within the `dns` section of the [configuration file](./configuration.md).
## Setting extra DNS records
Headscale allows you to set extra DNS records which are made available via
[MagicDNS](https://tailscale.com/kb/1081/magicdns). Extra DNS records can be configured either via static entries in the
[configuration file](./configuration.md) or from a JSON file that Headscale continuously watches for changes:
- Use the `dns.extra_records` option in the [configuration file](./configuration.md) for entries that are static and
don't change while Headscale is running. Those entries are processed when Headscale is starting up and changes to the
configuration require a restart of Headscale.
- For dynamic DNS records that may be added, updated or removed while Headscale is running or DNS records that are
generated by scripts, the option `dns.extra_records_path` in the [configuration file](./configuration.md) is useful.
Set it to the absolute path of the JSON file containing DNS records and Headscale processes this file as it detects
changes.
An example use case is to serve multiple apps on the same host via a reverse proxy like NGINX, in this case a Prometheus
monitoring stack. This allows you to access the service nicely via "http://grafana.myvpn.example.com" instead of the
hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:3000".
!!! warning "Limitations"
Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.86.5/ipn/ipnlocal/node_backend.go#L662).
1. Configure extra DNS records using one of the available configuration options:
=== "Static entries, via `dns.extra_records`"
```yaml title="config.yaml"
dns:
  ...
  extra_records:
    - name: "grafana.myvpn.example.com"
      type: "A"
      value: "100.64.0.3"
    - name: "prometheus.myvpn.example.com"
      type: "A"
      value: "100.64.0.3"
  ...
```
Restart your headscale instance.
=== "Dynamic entries, via `dns.extra_records_path`"
```json title="extra-records.json"
[
  {
    "name": "grafana.myvpn.example.com",
    "type": "A",
    "value": "100.64.0.3"
  },
  {
    "name": "prometheus.myvpn.example.com",
    "type": "A",
    "value": "100.64.0.3"
  }
]
```
Headscale picks up changes to the above JSON file automatically.
!!! tip "Good to know"
- The `dns.extra_records_path` option in the [configuration file](./configuration.md) needs to reference the
JSON file containing extra DNS records.
- Be sure to "sort keys" and produce a stable output in case you generate the JSON file with a script.
Headscale uses a checksum to detect changes to the file and a stable output avoids unnecessary processing.
1. Verify that DNS records are properly set using the DNS querying tool of your choice:
=== "Query with dig"
```console
dig +short grafana.myvpn.example.com
100.64.0.3
```
=== "Query with drill"
```console
drill -Q grafana.myvpn.example.com
100.64.0.3
```
1. Optional: Set up the reverse proxy
The motivating example here was to be able to access internal monitoring services on the same host without
specifying a port, shown here as an NGINX configuration snippet:
```nginx title="nginx.conf"
server {
    listen 80;
    listen [::]:80;
    server_name grafana.myvpn.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
================================================
FILE: docs/ref/integration/reverse-proxy.md
================================================
# Running headscale behind a reverse proxy
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by headscale developers.
**It might be outdated and it might miss necessary steps**.
Running headscale behind a reverse proxy is useful when running multiple applications on the same server, and you want to reuse the same external IP and port - usually tcp/443 for HTTPS.
### WebSockets
The reverse proxy MUST be configured to support WebSockets to communicate with Tailscale clients.
WebSockets support is also required when using the Headscale [embedded DERP server](../derp.md). In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
### Cloudflare
Running headscale behind a Cloudflare proxy or Cloudflare tunnel is not supported and will not work, as Cloudflare does not support WebSocket POSTs as required by the Tailscale protocol. See [this issue](https://github.com/juanfont/headscale/issues/1468).
### TLS
Headscale can be configured not to use TLS, leaving it to the reverse proxy to handle. Add the following configuration values to your headscale config file.
```yaml title="config.yaml"
server_url: https://<YOUR_SERVER_NAME> # This should be the FQDN at which headscale will be served
listen_addr: 0.0.0.0:8080
metrics_listen_addr: 0.0.0.0:9090
tls_cert_path: ""
tls_key_path: ""
```
## nginx
The following example configuration can be used in your nginx setup, substituting values as necessary. `<IP:PORT>` should be the IP address and port where headscale is running. In most cases, this will be `http://localhost:8080`.
```nginx title="nginx.conf"
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name <YOUR_SERVER_NAME>;

    ssl_certificate <PATH_TO_CERT>;
    ssl_certificate_key <PATH_TO_KEY>;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://<IP:PORT>;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $server_name;
        proxy_redirect http:// https://;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
    }
}
```
## istio/envoy
If you are using [Istio](https://istio.io/) ingressgateway or [Envoy](https://www.envoyproxy.io/) as a reverse proxy, the following tips may help. If the upgrade type is not configured, you may see debug logs in the proxy like the following:
```log
Sending local reply with details upgrade_failed
```
### Envoy
You need to add a new upgrade_type named `tailscale-control-protocol` ([see details](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-upgradeconfig)).
### Istio
As with Envoy, we can use an `EnvoyFilter` to add the upgrade type.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: headscale-behind-istio-ingress
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
      patch:
        operation: MERGE
        value:
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            upgrade_configs:
              - upgrade_type: tailscale-control-protocol
```
## Caddy
The following Caddyfile is all that is necessary to use Caddy as a reverse proxy for headscale, in combination with the `config.yaml` specifications above to disable headscale's built-in TLS. Replace values as necessary - `<YOUR_SERVER_NAME>` should be the FQDN at which headscale will be served, and `<IP:PORT>` should be the IP address and port where headscale is running. In most cases, this will be `localhost:8080`.
```none title="Caddyfile"
<YOUR_SERVER_NAME> {
    reverse_proxy <IP:PORT>
}
```
Caddy v2 will [automatically](https://caddyserver.com/docs/automatic-https) provision a certificate for your domain/subdomain, force HTTPS, and proxy websockets - no further configuration is necessary.
For a slightly more complex configuration which utilizes Docker containers to manage Caddy, headscale, and Headscale-UI, [Guru Computing's guide](https://blog.gurucomputing.com.au/smart-vpns-with-headscale/) is an excellent reference.
## Apache
The following minimal Apache config will proxy traffic to the headscale instance on `<IP:PORT>`. Note that `upgrade=any` is required as a parameter for `ProxyPass` so that WebSockets traffic whose `Upgrade` header value is not equal to `WebSocket` (i.e. Tailscale Control Protocol) is forwarded correctly. See the [Apache docs](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html) for more information on this.
```apache title="apache.conf"
<VirtualHost *:443>
    ServerName <YOUR_SERVER_NAME>

    ProxyPreserveHost On
    ProxyPass / http://<IP:PORT>/ upgrade=any

    SSLEngine On
    SSLCertificateFile <PATH_TO_CERT>
    SSLCertificateKeyFile <PATH_TO_KEY>
</VirtualHost>
```
================================================
FILE: docs/ref/integration/tools.md
================================================
# Tools related to headscale
!!! warning "Community contributions"
This page contains community contributions. The projects listed here are not
maintained by the headscale authors and are written by community members.
This page collects third-party tools, client libraries, and scripts related to headscale.
- [headscale-operator](https://github.com/infradohq/headscale-operator) - Headscale Kubernetes Operator
- [tailscale-manager](https://github.com/singlestore-labs/tailscale-manager) - Dynamically manage Tailscale route
advertisements
- [headscalebacktosqlite](https://github.com/bigbozza/headscalebacktosqlite) - Migrate headscale from PostgreSQL back to
SQLite
- [headscale-pf](https://github.com/YouSysAdmin/headscale-pf) - Populates user groups based on user groups in Jumpcloud
or Authentik
- [headscale-client-go](https://github.com/hibare/headscale-client-go) - A Go client implementation for the Headscale
HTTP API.
- [headscale-zabbix](https://github.com/dblanque/headscale-zabbix) - A Zabbix Monitoring Template for the Headscale
Service.
- [tailscale-exporter](https://github.com/adinhodovic/tailscale-exporter) - A Prometheus exporter for Headscale that
provides network-level metrics using the Headscale API.
================================================
FILE: docs/ref/integration/web-ui.md
================================================
# Web interfaces for headscale
!!! warning "Community contributions"
This page contains community contributions. The projects listed here are not
maintained by the headscale authors and are written by community members.
Headscale doesn't provide a built-in web interface but users may pick one from the available options.
- [headscale-ui](https://github.com/gurucomputing/headscale-ui) - A web frontend for the headscale Tailscale-compatible
coordination server
- [HeadscaleUi](https://github.com/simcu/headscale-ui) - A static headscale admin ui, no backend environment required
- [Headplane](https://github.com/tale/headplane) - An advanced Tailscale inspired frontend for headscale
- [headscale-admin](https://github.com/GoodiesHQ/headscale-admin) - Headscale-Admin is meant to be a simple, modern web
interface for headscale
- [ouroboros](https://github.com/yellowsink/ouroboros) - Ouroboros is designed for users to manage their own devices,
rather than for admins
- [unraid-headscale-admin](https://github.com/ich777/unraid-headscale-admin) - A simple headscale admin UI for Unraid,
it offers Local (`docker exec`) and API Mode
- [headscale-console](https://github.com/rickli-cloud/headscale-console) - WebAssembly-based client supporting SSH, VNC
and RDP with optional self-service capabilities
- [headscale-piying](https://github.com/wszgrcy/headscale-piying) - A headscale web UI with support for visual ACL configuration
- [HeadControl](https://github.com/ahmadzip/HeadControl) - Minimal Headscale admin dashboard, built with Go and HTMX
- [Headscale Manager](https://github.com/hkdone/headscalemanager) - Headscale UI for Android
You can ask for support on our [Discord server](https://discord.gg/c84AZQhmpx) in the "web-interfaces" channel.
================================================
FILE: docs/ref/oidc.md
================================================
# OpenID Connect
Headscale supports authentication via external identity providers using OpenID Connect (OIDC). It features:
- Auto configuration via OpenID Connect Discovery Protocol
- [Proof Key for Code Exchange (PKCE) code verification](#enable-pkce-recommended)
- [Authorization based on a user's domain, email address or group membership](#authorize-users-with-filters)
- Synchronization of [standard OIDC claims](#supported-oidc-claims)
Please see [limitations](#limitations) for known issues and limitations.
## Configuration
OpenID requires configuration in Headscale and your identity provider:
- Headscale: The `oidc` section of the Headscale [configuration](configuration.md) contains all available configuration
options along with a description and their default values.
- Identity provider: Please refer to the official documentation of your identity provider for specific instructions.
Additionally, there might be some useful hints in the [Identity provider specific
configuration](#identity-provider-specific-configuration) section below.
### Basic configuration
A basic configuration connects Headscale to an identity provider and typically requires:
- OpenID Connect Issuer URL from the identity provider. Headscale uses the OpenID Connect Discovery Protocol 1.0 to
automatically obtain OpenID configuration parameters (example: `https://sso.example.com`).
- Client ID from the identity provider (example: `headscale`).
- Client secret generated by the identity provider (example: `generated-secret`).
- Redirect URI for your identity provider (example: `https://headscale.example.com/oidc/callback`).
=== "Headscale"
```yaml
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
```
=== "Identity provider"
- Create a new confidential client (`Client ID`, `Client secret`)
- Add Headscale's OIDC callback URL as valid redirect URL: `https://headscale.example.com/oidc/callback`
- Configure additional parameters to improve user experience such as: name, description, logo, …
### Enable PKCE (recommended)
Proof Key for Code Exchange (PKCE) adds an additional layer of security to the OAuth 2.0 authorization code flow by
preventing authorization code interception attacks, see [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636). PKCE is
recommended and needs to be configured for Headscale and the identity provider alike:
=== "Headscale"
```yaml hl_lines="5-6"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
pkce:
enabled: true
```
=== "Identity provider"
- Enable PKCE for the headscale client
- Set the PKCE challenge method to "S256"
### Authorize users with filters
Headscale can filter allowed users based on their domain, email address or group membership. These filters can
be helpful to apply additional restrictions and control which users are allowed to join. Filters are disabled by
default and users are allowed to join once the authentication with the identity provider succeeds. In case multiple filters
are configured, a user needs to pass all of them.
=== "Allowed domains"
- Check the email domain of each authenticating user against the list of allowed domains and only authorize users
whose email domain matches `example.com`.
- A verified email address is required [unless email verification is disabled](#control-email-verification).
- Access allowed: `alice@example.com`
- Access denied: `bob@example.net`
```yaml hl_lines="5-6"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
allowed_domains:
- "example.com"
```
=== "Allowed users/emails"
- Check the email address of each authenticating user against the list of allowed email addresses and only authorize
users whose email is part of the `allowed_users` list.
- A verified email address is required [unless email verification is disabled](#control-email-verification).
- Access allowed: `alice@example.com`, `bob@example.net`
- Access denied: `mallory@example.net`
```yaml hl_lines="5-7"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
allowed_users:
- "alice@example.com"
- "bob@example.net"
```
=== "Allowed groups"
- Use the OIDC `groups` claim of each authenticating user to get their group membership and only authorize users
which are members in at least one of the referenced groups.
- Access allowed: users in the `headscale_users` group
- Access denied: users without groups, users with other groups
```yaml hl_lines="5-7"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
scope: ["openid", "profile", "email", "groups"]
allowed_groups:
- "headscale_users"
```
### Control email verification
Headscale uses the `email` claim from the identity provider to synchronize the email address to its user profile. By
default, a user's email address is only synchronized when the identity provider reports the email address as verified
via the `email_verified: true` claim.
Unverified emails may be allowed in case an identity provider does not send the `email_verified` claim or email
verification is not required. In that case, a user's email address is always synchronized to the user profile.
```yaml hl_lines="5"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
email_verified_required: false
```
### Customize node expiration
The node expiration is the amount of time a node is authenticated with OpenID Connect until it expires and needs to
reauthenticate. The default node expiration is 180 days. This can either be customized or set to the expiration from the
Access Token.
=== "Customize node expiration"
```yaml hl_lines="5"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
expiry: 30d # Use 0 to disable node expiration
```
=== "Use expiration from Access Token"
Please keep in mind that the Access Token is typically a short-lived token that expires within a few minutes. You
will have to configure token expiration in your identity provider to avoid frequent re-authentication.
```yaml hl_lines="5"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
use_expiry_from_token: true
```
!!! tip "Expire a node and force re-authentication"
A node can be expired immediately via:
```console
headscale nodes expire -i <NODE_ID>
```
### Reference a user in the policy
You may refer to users in the Headscale policy via:
- Email address
- Username
- Provider identifier (this value is currently only available from the [API](api.md), database or directly from your
identity provider)
!!! note "A user identifier in the policy must contain a single `@`"
The Headscale policy requires a single `@` to reference a user. If the username or provider identifier doesn't
already contain a single `@`, it needs to be appended at the end. For example: the username `ssmith` has to be
written as `ssmith@` to be correctly identified as a user within the policy.
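For example, a minimal policy snippet (a sketch; the username `ssmith` is an assumption) that grants the user access to their own devices could look like this:
```json
{
  "acls": [
    {
      "action": "accept",
      "src": ["ssmith@"],
      "dst": ["ssmith@:*"]
    }
  ]
}
```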
!!! warning "Email address or username might be updated by users"
Many identity providers allow users to update their own profile. Depending on the identity provider and its
configuration, the values for username or email address might change over time. This might have unexpected
consequences for Headscale where a policy might no longer work or a user might obtain more access by hijacking an
existing username or email address.
!!! tip "Howto use the provider identifier in the policy"
The provider identifier uniquely identifies an OIDC user and a well-behaving identity provider guarantees that this
value never changes for a particular user. It is usually an opaque and long string and its value is currently only
available from the [API](api.md), database or directly from your identity provider).
Use the [API](api.md) with the `/api/v1/user` endpoint to fetch the provider identifier (`providerId`). The value
(be sure to append an `@` in case the provider identifier doesn't already contain an `@` somewhere) can be used
directly to reference a user in the policy. To improve readability of the policy, one may use the `groups` section
as an alias:
```json
{
"groups": {
"group:alice": [
"https://soo.example.com/oauth2/openid/59ac9125-c31b-46c5-814e-06242908cf57@"
]
},
"acls": [
{
"action": "accept",
"src": ["group:alice"],
"dst": ["*:*"]
}
]
}
```
## Supported OIDC claims
Headscale uses [the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) to
populate and update its local user profile on each login. OIDC claims are read from the ID Token and from the UserInfo
endpoint.
| Headscale profile | OIDC claim | Notes / examples |
| ------------------- | -------------------- | ------------------------------------------------------------------------------------------------- |
| email address | `email` | Only verified emails are synchronized, unless `email_verified_required: false` is configured |
| display name | `name` | e.g. `Sam Smith` |
| username | `preferred_username` | Depends on identity provider, e.g. `ssmith`, `ssmith@idp.example.com`, `\\example.com\ssmith` |
| profile picture | `picture` | URL to a profile picture or avatar |
| provider identifier | `iss`, `sub` | A stable and unique identifier for a user, typically a combination of `iss` and `sub` OIDC claims |
| | `groups` | [Only used to filter for allowed groups](#authorize-users-with-filters) |
## Limitations
- Support for OpenID Connect aims to be generic and vendor independent. It offers only limited support for quirks of
specific identity providers.
- OIDC groups cannot be used in ACLs.
- The username provided by the identity provider needs to adhere to this pattern:
- The username must be at least two characters long.
- It must only contain letters, digits, hyphens, dots, underscores, and up to a single `@`.
- The username must start with a letter.
Please see the [GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC) for OIDC related issues.
## Identity provider specific configuration
!!! warning "Third-party software and services"
This section of the documentation is specific for third-party software and services. We recommend users read the
third-party documentation on how to configure and integrate an OIDC client. Please see the [Configuration
section](#configuration) for a description of Headscale's OIDC related configuration settings.
Any identity provider with OpenID Connect support should "just work" with Headscale. The following identity providers
are known to work:
- [Authelia](#authelia)
- [Authentik](#authentik)
- [Kanidm](#kanidm)
- [Keycloak](#keycloak)
### Authelia
Authelia is fully supported by Headscale.
### Authentik
- Authentik is fully supported by Headscale.
- [Headscale does not support JSON Web Encryption](https://github.com/juanfont/headscale/issues/2446). Leave the field
`Encryption Key` in the providers section unset.
- See Authentik's [Integrate with Headscale](https://integrations.goauthentik.io/networking/headscale/)
### Google OAuth
!!! warning "No username due to missing preferred_username"
Google OAuth does not send the `preferred_username` claim when the scope `profile` is requested. The username in
Headscale will be blank/not set.
In order to integrate Headscale with Google, you'll need to have a [Google Cloud
Console](https://console.cloud.google.com) account.
Google OAuth has a [verification process](https://support.google.com/cloud/answer/9110914?hl=en) if you need to have
users authenticate who are outside of your domain. If you only need to authenticate users from your domain name (e.g.
`@example.com`), you don't need to go through the verification process.
However if you don't have a domain, or need to add users outside of your domain, you can manually add emails via Google
Console.
#### Steps
1. Go to [Google Console](https://console.cloud.google.com) and login or create an account if you don't have one.
1. Create a project (if you don't already have one).
1. On the left hand menu, go to `APIs and services` -> `Credentials`
1. Click `Create Credentials` -> `OAuth client ID`
1. Under `Application Type`, choose `Web Application`
1. For `Name`, enter whatever you like
1. Under `Authorised redirect URIs`, add Headscale's OIDC callback URL: `https://headscale.example.com/oidc/callback`
1. Click `Save` at the bottom of the form
1. Take note of the `Client ID` and `Client secret`, you can also download it for reference if you need it.
1. [Configure Headscale following the "Basic configuration" steps](#basic-configuration). The issuer URL for Google
OAuth is: `https://accounts.google.com`.
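Putting it together, the Headscale side might look like the following sketch, where `<CLIENT_ID>` and `<CLIENT_SECRET>` are the values noted in the steps above:
```yaml
oidc:
  issuer: "https://accounts.google.com"
  client_id: "<CLIENT_ID>"
  client_secret: "<CLIENT_SECRET>"
```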
### Kanidm
- Kanidm is fully supported by Headscale.
- Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their full SPN, for
example: `headscale_users@sso.example.com`.
- Kanidm sends the full SPN (`alice@sso.example.com`) as `preferred_username` by default. Headscale stores this value as
username which might be confusing as the username and email fields now contain values that look like an email address.
[Kanidm can be configured to send the short username as `preferred_username` attribute
instead](https://kanidm.github.io/kanidm/stable/integrations/oauth2.html#short-names):
```console
kanidm system oauth2 prefer-short-username
```
Once configured, the short username in Headscale will be `alice` and can be referred to as `alice@` in the policy.
### Keycloak
Keycloak is fully supported by Headscale.
#### Additional configuration to use the allowed groups filter
Keycloak has no built-in client scope for the OIDC `groups` claim. This extra configuration step is **only** needed if
you need to [authorize access based on group membership](#authorize-users-with-filters).
- Create a new client scope `groups` for OpenID Connect:
- Configure a `Group Membership` mapper with name `groups` and the token claim name `groups`.
- Add the mapper to at least the UserInfo endpoint.
- Configure the new client scope for your Headscale client:
- Edit the Headscale client.
- Search for the client scope `groups`.
- Add it with assigned type `Default`.
- [Configure the allowed groups in Headscale](#authorize-users-with-filters). How groups need to be specified depends on
Keycloak's `Full group path` option, as shown in the sketch after this list:
- `Full group path` is enabled: groups contain their full path, e.g. `/top/group1`
- `Full group path` is disabled: only the name of the group is used, e.g. `group1`
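For example, with `Full group path` enabled, the corresponding Headscale configuration might look like the following sketch (the group `/top/group1` is an assumption, adjust it to your realm):
```yaml
oidc:
  scope: ["openid", "profile", "email", "groups"]
  allowed_groups:
    - "/top/group1"
```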
### Microsoft Entra ID
In order to integrate Headscale with Microsoft Entra ID, you'll need to provision an App Registration with the correct
scopes and redirect URI.
[Configure Headscale following the "Basic configuration" steps](#basic-configuration). The issuer URL for Microsoft
Entra ID is: `https://login.microsoftonline.com/<TENANT_ID>/v2.0`. The following `extra_params` might be useful (see the sketch after this list):
- `domain_hint: example.com` to use your own domain
- `prompt: select_account` to force an account picker during login
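Put together, a sketch of the Headscale configuration for Microsoft Entra ID might look like this, where `<TENANT_ID>`, `<CLIENT_ID>` and `<CLIENT_SECRET>` are placeholders for your App Registration values:
```yaml
oidc:
  issuer: "https://login.microsoftonline.com/<TENANT_ID>/v2.0"
  client_id: "<CLIENT_ID>"
  client_secret: "<CLIENT_SECRET>"
  extra_params:
    domain_hint: example.com
    prompt: select_account
```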
When using Microsoft Entra ID together with the [allowed groups filter](#authorize-users-with-filters), configure the
Headscale OIDC scope without the `groups` claim, for example:
```yaml
oidc:
scope: ["openid", "profile", "email"]
```
Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID (UUID)
instead of the group name.
## Switching OIDC providers
Headscale only supports a single OIDC provider in its configuration, but it does store the provider identifier of each user. When switching providers, this might lead to issues with existing users: all user details (name, email, groups) might be identical with the new provider, but the identifier will differ. Headscale will be unable to create a new user as the name and email will already be in use for the existing users.
At this time, you will need to manually update the `provider_identifier` column in the `users` table for each user with the appropriate value for the new provider. The identifier is built from the `iss` and `sub` claims of the OIDC ID token, for example `https://id.example.com/12340987`.
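With SQLite at its default location, such an update might look like the following sketch (the user name and identifier are placeholders; stop Headscale and [create a backup](../setup/upgrade.md#backup) before touching the database):
```console
$ sqlite3 /var/lib/headscale/db.sqlite \
    "UPDATE users SET provider_identifier = 'https://new-sso.example.com/<SUB>' WHERE name = 'alice';"
```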
================================================
FILE: docs/ref/registration.md
================================================
# Registration methods
Headscale supports multiple ways to register a node. The preferred registration method depends on the identity of a node
and your use case.
## Identity model
Tailscale's identity model distinguishes between personal and tagged nodes:
- A personal node (or user-owned node) is owned by a human and typically refers to end-user devices such as laptops,
workstations or mobile phones. End-user devices are managed by a single user.
- A tagged node (or service-based node or non-human node) provides services to the network. Common examples include web-
and database servers. Those nodes are typically managed by a team of users. Some additional restrictions apply for
tagged nodes, e.g. a tagged node is not allowed to [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh) into a
personal node.
Headscale implements Tailscale's identity model and distinguishes between personal and tagged nodes where a personal
node is owned by a Headscale user and a tagged node is owned by a tag. Tagged devices are grouped under the special user
`tagged-devices`.
## Registration methods
There are two main ways to register new nodes, [web authentication](#web-authentication) and [registration with a pre
authenticated key](#pre-authenticated-key). Both methods can be used to register personal and tagged nodes.
### Web authentication
Web authentication is the default method to register a new node. It's interactive, where the client initiates the
registration and the Headscale administrator needs to approve the new node before it is allowed to join the network. A
node can be approved with:
- Headscale CLI (described in this documentation)
- [Headscale API](api.md)
- Or delegated to an identity provider via [OpenID Connect](oidc.md)
Web authentication relies on the presence of a Headscale user. Use the `headscale users` command to create a new user:
```console
headscale users create <USER>
```
=== "Personal devices"
Run `tailscale up` to login your personal device:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration
on your Headscale server and it also prints the Auth ID required to approve the node:
```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```
Congratulations, the registration of your personal node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.
=== "Tagged devices"
Your Headscale user needs to be authorized to register tagged devices. This authorization is specified in the
[`tagOwners`](https://tailscale.com/kb/1337/policy-syntax#tag-owners) section of the [ACL](acls.md). A simple
example looks like this:
```json title="The user alice can register nodes tagged with tag:server"
{
"tagOwners": {
"tag:server": ["alice@"]
},
// more rules
}
```
Run `tailscale up` and provide at least one tag to login a tagged device:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags tag:<TAG>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration
on your Headscale server and it also prints the Auth ID required to approve the node:
```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```
Headscale checks that `<USER>` is allowed to register a node with the specified tag(s) and then transfers ownership
of the new node to the special user `tagged-devices`. The registration of a tagged node is complete and it should be
listed as "online" in the output of `headscale nodes list`. The "User" column displays `tagged-devices` as the owner
of the node. See the "Tags" column for the list of assigned tags.
### Pre authenticated key
Registration with a pre authenticated key (or auth key) is a non-interactive way to register a new node. The Headscale
administrator creates a preauthkey upfront and this preauthkey can then be used to register a node non-interactively.
It's best suited for automation.
=== "Personal devices"
A personal node is always assigned to a Headscale user. Use the `headscale users` command to create a new user:
```console
headscale users create <USER>
```
Use the `headscale users list` command to learn its `<USER_ID>` and create a new pre authenticated key for your user:
```console
headscale preauthkeys create --user <USER_ID>
```
The above prints a pre authenticated key with the default settings (can be used once and is valid for one hour). Use
this auth key to register a node non-interactively:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <AUTH_KEY>
```
Congratulations, the registration of your personal node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.
=== "Tagged devices"
Create a new pre authenticated key and provide at least one tag:
```console
headscale preauthkeys create --tags tag:<TAG>
```
The above prints a pre authenticated key with the default settings (can be used once and is valid for one hour). Use
this auth key to register a node non-interactively. You don't need to provide the `--advertise-tags` parameter as
the tags are automatically read from the pre authenticated key:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <AUTH_KEY>
```
The registration of a tagged node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `tagged-devices` as the owner of the node. See the "Tags" column for the list of
assigned tags.
================================================
FILE: docs/ref/routes.md
================================================
# Routes
Headscale supports route advertising and can be used to manage [subnet routers](https://tailscale.com/kb/1019/subnets)
and [exit nodes](https://tailscale.com/kb/1103/exit-nodes) for a tailnet.
- [Subnet routers](#subnet-router) may be used to connect an existing network such as a virtual
private cloud or an on-premise network with your tailnet. Use a subnet router to access devices where Tailscale can't
be installed or to gradually rollout Tailscale.
- [Exit nodes](#exit-node) can be used to route all Internet traffic for another Tailscale
node. Use it to securely access the Internet on an untrusted Wi-Fi or to access online services that expect traffic
from a specific IP address.
## Subnet router
The setup of a subnet router requires double opt-in, once from a subnet router and once on the control server to allow
its use within the tailnet. Optionally, use [`autoApprovers` to automatically approve routes from a subnet
router](#automatically-approve-routes-of-a-subnet-router).
### Setup a subnet router
#### Configure a node as subnet router
Register a node and advertise the routes it should handle as comma separated list:
```console
$ sudo tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-routes=10.0.0.0/8,192.168.0.0/24
```
If the node is already registered, it can advertise new routes or update previously announced routes with:
```console
$ sudo tailscale set --advertise-routes=10.0.0.0/8,192.168.0.0/24
```
Finally, [enable IP forwarding](#enable-ip-forwarding) to route traffic.
#### Enable the subnet router on the control server
The routes of a tailnet can be displayed with the `headscale nodes list-routes` command. A subnet router with the
hostname `myrouter` announced the IPv4 networks `10.0.0.0/8` and `192.168.0.0/24`. Those need to be approved before they
can be used.
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myrouter | | 10.0.0.0/8 |
| | | 192.168.0.0/24 |
```
Approve all desired routes of a subnet router by specifying them as comma separated list:
```console
$ headscale nodes approve-routes --identifier 1 --routes 10.0.0.0/8,192.168.0.0/24
Node updated
```
The node `myrouter` can now route the IPv4 networks `10.0.0.0/8` and `192.168.0.0/24` for the tailnet.
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myrouter | 10.0.0.0/8 | 10.0.0.0/8 | 10.0.0.0/8
| | 192.168.0.0/24 | 192.168.0.0/24 | 192.168.0.0/24
```
#### Use the subnet router
To accept routes advertised by a subnet router on a node:
```console
$ sudo tailscale set --accept-routes
```
Please refer to the official [Tailscale
documentation](https://tailscale.com/kb/1019/subnets#use-your-subnet-routes-from-other-devices) for how to use a subnet
router on different operating systems.
### Restrict the use of a subnet router with ACL
The routes announced by subnet routers are available to the nodes in a tailnet. By default, without an ACL enabled, all
nodes can accept and use such routes. Configure an ACL to explicitly manage who can use routes.
The ACL snippet below defines three hosts: a subnet router `router`, a regular node `node`, and `service.example.net`, an
internal service that can be reached via a route on the subnet router `router`. It allows the node `node` to access
`service.example.net` on ports 80 and 443, which are reachable via the subnet router. Access to the subnet router itself
is denied.
```json title="Access the routes of a subnet router without the subnet router itself"
{
"hosts": {
// the router is not referenced but announces 192.168.0.0/24
"router": "100.64.0.1/32",
"node": "100.64.0.2/32",
"service.example.net": "192.168.0.1/32"
},
"acls": [
{
"action": "accept",
"src": ["node"],
"dst": ["service.example.net:80,443"]
}
]
}
```
### Automatically approve routes of a subnet router
The initial setup of a subnet router usually requires manual approval of its announced routes on the control server
before they can be used by a node in a tailnet. Headscale supports the `autoApprovers` section of an ACL to automate the
approval of routes served with a subnet router.
The ACL snippet below defines the tag `tag:router` owned by the user `alice`. This tag is used for `routes` in the
`autoApprovers` section. The IPv4 route `192.168.0.0/24` is automatically approved once announced by a subnet router
that advertises the tag `tag:router`.
```json title="Subnet routers tagged with tag:router are automatically approved"
{
"tagOwners": {
"tag:router": ["alice@"]
},
"autoApprovers": {
"routes": {
"192.168.0.0/24": ["tag:router"]
}
},
"acls": [
// more rules
]
}
```
Advertise the route `192.168.0.0/24` from a subnet router that also advertises the tag `tag:router` when joining the tailnet:
```console
$ sudo tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags tag:router --advertise-routes 192.168.0.0/24
```
Please see the [official Tailscale documentation](https://tailscale.com/kb/1337/acl-syntax#autoapprovers) for more
information on auto approvers.
## Exit node
The setup of an exit node requires double opt-in, once from an exit node and once on the control server to allow its use
within the tailnet. Optionally, use [`autoApprovers` to automatically approve an exit
node](#automatically-approve-an-exit-node-with-auto-approvers).
### Setup an exit node
#### Configure a node as exit node
Register a node and make it advertise itself as an exit node:
```console
$ sudo tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-exit-node
```
If the node is already registered, it can advertise exit capabilities like this:
```console
$ sudo tailscale set --advertise-exit-node
```
Finally, [enable IP forwarding](#enable-ip-forwarding) to route traffic.
#### Enable the exit node on the control server
The routes of a tailnet can be displayed with the `headscale nodes list-routes` command. An exit node can be recognized
by its announced routes: `0.0.0.0/0` for IPv4 and `::/0` for IPv6. The exit node with the hostname `myexit` is already
available, but needs to be approved:
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myexit | | 0.0.0.0/0 |
| | | ::/0 |
```
For exit nodes, it is sufficient to approve either the IPv4 or IPv6 route. The other will be approved automatically.
```console
$ headscale nodes approve-routes --identifier 1 --routes 0.0.0.0/0
Node updated
```
The node `myexit` is now approved as exit node for the tailnet:
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myexit | 0.0.0.0/0 | 0.0.0.0/0 | 0.0.0.0/0
| | ::/0 | ::/0 | ::/0
```
#### Use the exit node
The exit node can now be used on a node with:
```console
$ sudo tailscale set --exit-node myexit
```
Please refer to the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes#use-the-exit-node) for
how to use an exit node on different operating systems.
### Restrict the use of an exit node with ACL
An exit node is offered to all nodes in a tailnet. By default, without an ACL enabled, all nodes in a tailnet can select
and use an exit node. Configure `autogroup:internet` in an ACL rule to restrict who can use _any_ of the available exit
nodes.
```json title="Example use of autogroup:internet"
{
"acls": [
{
"action": "accept",
"src": ["..."],
"dst": ["autogroup:internet:*"]
}
]
}
```
### Restrict access to exit nodes per user or group
A user can use _any_ of the available exit nodes with `autogroup:internet`. Alternatively, the ACL snippet below assigns
each user a specific exit node while hiding all other exit nodes. The user `alice` can only use exit node `exit1` while
user `bob` can only use exit node `exit2`.
```json title="Assign each user a dedicated exit node"
{
"hosts": {
"exit1": "100.64.0.1/32",
"exit2": "100.64.0.2/32"
},
"acls": [
{
"action": "accept",
"src": ["alice@"],
"dst": ["exit1:*"]
},
{
"action": "accept",
"src": ["bob@"],
"dst": ["exit2:*"]
}
]
}
```
!!! warning
- The above implementation is Headscale specific and will likely be removed once [support for
`via`](https://github.com/juanfont/headscale/issues/2409) is available.
- Beware that a user can also connect to any port of the exit node itself.
### Automatically approve an exit node with auto approvers
The initial setup of an exit node usually requires manual approval on the control server before it can be used by a node
in a tailnet. Headscale supports the `autoApprovers` section of an ACL to automate the approval of a new exit node as
soon as it joins the tailnet.
The ACL snippet below defines the tag `tag:exit` owned by the user `alice`. This tag is used for `exitNode` in the
`autoApprovers` section. A new exit node that advertises the tag `tag:exit` is automatically approved:
```json title="Exit nodes tagged with tag:exit are automatically approved"
{
"tagOwners": {
"tag:exit": ["alice@"]
},
"autoApprovers": {
"exitNode": ["tag:exit"]
},
"acls": [
// more rules
]
}
```
Advertise a node as exit node and also advertise the tag `tag:exit` when joining the tailnet:
```console
$ sudo tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags tag:exit --advertise-exit-node
```
Please see the [official Tailscale documentation](https://tailscale.com/kb/1337/acl-syntax#autoapprovers) for more
information on auto approvers.
## High availability
Headscale has limited support for high availability routing. Multiple subnet routers with overlapping routes or multiple
exit nodes can be used to provide high availability for users. If one router node goes offline, another one can serve
the same routes to clients. Please see the official [Tailscale documentation on high
availability](https://tailscale.com/kb/1115/high-availability#subnet-router-high-availability) for details.
!!! bug
In certain situations it might take up to 16 minutes for Headscale to detect a node as offline. If such a node is used
as a subnet router or exit node, a failover node might not be selected fast enough, causing service interruptions for
clients. See [issue 2129](https://github.com/juanfont/headscale/issues/2129) for more information.
## Troubleshooting
### Enable IP forwarding
A subnet router or exit node is routing traffic on behalf of other nodes and thus requires IP forwarding. Check the
official [Tailscale documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to
enable IP forwarding.
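On a typical Linux system, this usually amounts to enabling the following sysctls (a sketch; the file name is an arbitrary choice):
```console
$ echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
$ echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
$ sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```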
================================================
FILE: docs/ref/tags.md
================================================
# Tags
Headscale supports Tailscale tags. Please read [Tailscale's tag documentation](https://tailscale.com/kb/1068/tags) to
learn how tags work and how to use them.
Tags can be applied during [node registration](registration.md):
- using the `--advertise-tags` flag, see [web authentication for tagged devices](registration.md#__tabbed_1_2)
- using a tagged pre authenticated key, see [how to create and use it](registration.md#__tabbed_2_2)
Administrators can manage tags with:
- Headscale CLI
- [Headscale API](api.md)
## Common operations
### Manage tags for a node
Run `headscale nodes list` to list the tags for a node.
Use the `headscale nodes tag` command to modify the tags for a node. At least one tag is required and multiple tags can
be provided as comma separated list. The following command sets the tags `tag:server` and `tag:prod` on node with ID 1:
```console
headscale nodes tag -i 1 -t tag:server,tag:prod
```
### Convert from personal to tagged node
Use the `headscale nodes tag` command to convert a personal (user-owned) node to a tagged node:
```console
headscale nodes tag -i <NODE_ID> -t <TAGS>
```
The node is now owned by the special user `tagged-devices` and has the specified tags assigned to it.
### Convert from tagged to personal node
Tagged nodes can return to personal (user-owned) nodes by re-authenticating with:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags= --force-reauth
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
your Headscale server and it also prints the Auth ID required to approve the node:
```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```
All previously assigned tags get removed and the node is now owned by the user specified in the above command.
================================================
FILE: docs/ref/tls.md
================================================
# Running the service via TLS (optional)
## Bring your own certificate
Headscale can be configured to expose its web service via TLS. To configure the certificate and key file manually, set the `tls_cert_path` and `tls_key_path` configuration parameters. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from.
```yaml title="config.yaml"
tls_cert_path: ""
tls_key_path: ""
```
The certificate should contain the full chain, else some clients, like the Tailscale Android client, will reject it.
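To quickly check whether a certificate file contains more than one certificate, counting the PEM blocks is a simple sketch (the path is a placeholder); a full chain should yield a count greater than one:
```console
$ grep -c 'BEGIN CERTIFICATE' <PATH_TO_CERT>
```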
## Let's Encrypt / ACME
To get a certificate automatically via [Let's Encrypt](https://letsencrypt.org/), set `tls_letsencrypt_hostname` to the desired certificate hostname. This name must resolve to the IP address(es) headscale is reachable on (i.e., it must correspond to the `server_url` configuration parameter). The certificate and Let's Encrypt account credentials will be stored in the directory configured in `tls_letsencrypt_cache_dir`. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from.
```yaml title="config.yaml"
tls_letsencrypt_hostname: ""
tls_letsencrypt_listen: ":http"
tls_letsencrypt_cache_dir: ".cache"
tls_letsencrypt_challenge_type: HTTP-01
```
### Challenge types
Headscale only supports two values for `tls_letsencrypt_challenge_type`: `HTTP-01` (default) and `TLS-ALPN-01`.
#### HTTP-01
For `HTTP-01`, headscale must be reachable on port 80 for the Let's Encrypt automated validation, in addition to whatever port is configured in `listen_addr`. By default, headscale listens on port 80 on all local IPs for Let's Encrypt automated validation.
If you need to change the ip and/or port used by headscale for the Let's Encrypt validation process, set `tls_letsencrypt_listen` to the appropriate value. This can be handy if you are running headscale as a non-root user (or can't run `setcap`). Keep in mind, however, that Let's Encrypt will _only_ connect to port 80 for the validation callback, so if you change `tls_letsencrypt_listen` you will also need to configure something else (e.g. a firewall rule) to forward the traffic from port 80 to the ip:port combination specified in `tls_letsencrypt_listen`.
#### TLS-ALPN-01
For `TLS-ALPN-01`, headscale listens on the ip:port combination defined in `listen_addr`. Let's Encrypt will _only_ connect to port 443 for the validation callback, so if `listen_addr` is not set to port 443, something else (e.g. a firewall rule) will be required to forward the traffic from port 443 to the ip:port combination specified in `listen_addr`.
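As an illustration only, assuming headscale is configured with the non-default ports 8080 (`tls_letsencrypt_listen`) and 8443 (`listen_addr`), an iptables-based redirect might look like this sketch:
```console
# Redirect validation traffic from the privileged ports to headscale
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
```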
### Technical description
Headscale uses [autocert](https://pkg.go.dev/golang.org/x/crypto/acme/autocert), a Golang library providing [ACME protocol](https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment) verification, to facilitate certificate renewals via [Let's Encrypt](https://letsencrypt.org/about/). Certificates will be renewed automatically, and the following can be expected:
- Certificates provided from Let's Encrypt have a validity of 3 months from date issued.
- Renewals are only attempted by headscale when 30 days or less remains until certificate expiry.
- Renewal attempts by autocert are triggered at a random interval of 30-60 minutes.
- No log output is generated when renewals are skipped, or successful.
#### Checking certificate expiry
If you want to validate that certificate renewal completed successfully, this can be done either manually, or through external monitoring software. Two examples of doing this manually:
1. Open the URL for your headscale server in your browser of choice, and manually inspect the expiry date of the certificate you receive.
1. Or, check remotely from CLI using `openssl`:
```console
$ openssl s_client -servername [hostname] -connect [hostname]:443 | openssl x509 -noout -dates
(...)
notBefore=Feb 8 09:48:26 2024 GMT
notAfter=May 8 09:48:25 2024 GMT
```
#### Log output from the autocert library
As these log lines are from the autocert library, they are not strictly generated by headscale itself.
```plaintext
acme/autocert: missing server name
```
Likely caused by an incoming connection that does not specify a hostname, for example a `curl` request directly against the IP of the server, or an unexpected hostname.
```plaintext
acme/autocert: host "[foo]" not configured in HostWhitelist
```
Similarly to the above, this likely indicates an invalid incoming request for an incorrect hostname, commonly just the IP itself.
The source code for autocert can be found [here](https://cs.opensource.google/go/x/crypto/+/refs/tags/v0.19.0:acme/autocert/autocert.go).
================================================
FILE: docs/requirements.txt
================================================
mike~=2.1
mkdocs-include-markdown-plugin~=7.1
mkdocs-macros-plugin~=1.3
mkdocs-material[imaging]~=9.5
mkdocs-minify-plugin~=0.7
mkdocs-redirects~=1.2
================================================
FILE: docs/setup/install/community.md
================================================
# Community packages
Several Linux distributions and community members provide packages for headscale. Those packages may be used instead of
the [official releases](./official.md) provided by the headscale maintainers. Such packages offer improved integration
for their targeted operating system and usually:
- set up a dedicated local user account to run headscale
- provide a default configuration
- install headscale as a system service
!!! warning "Community packages might be outdated"
The packages mentioned on this page might be outdated or unmaintained. Use the [official releases](./official.md) to
get the current stable version or to test pre-releases.
[Packaging status on Repology](https://repology.org/project/headscale/versions)
## Arch Linux
Arch Linux offers a package for headscale, install via:
```shell
pacman -S headscale
```
The [AUR package `headscale-git`](https://aur.archlinux.org/packages/headscale-git) can be used to build the current
development version.
## Fedora, RHEL, CentOS
A third-party repository for various RPM based distributions is available and provides detailed setup and installation
instructions.
## Nix, NixOS
A Nix package is available as: `headscale`. See the [NixOS package site for installation
details](https://search.nixos.org/packages?show=headscale).
## Gentoo
```shell
emerge --ask net-vpn/headscale
```
Gentoo specific documentation is available [here](https://wiki.gentoo.org/wiki/User:Maffblaster/Drafts/Headscale).
## OpenBSD
Headscale is available in ports. The port installs headscale as a system service with `rc.d` and provides usage
instructions upon installation.
```shell
pkg_add headscale
```
================================================
FILE: docs/setup/install/container.md
================================================
# Running headscale in a container
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by headscale developers.
**It might be outdated and it might miss necessary steps**.
This documentation has the goal of showing a user how to set up and run headscale in a container. A container runtime
such as [Docker](https://www.docker.com) or [Podman](https://podman.io) is required. The container image can be found on
[Docker Hub](https://hub.docker.com/r/headscale/headscale) and [GitHub Container
Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The container image URLs are:
- [Docker Hub](https://hub.docker.com/r/headscale/headscale): `docker.io/headscale/headscale:<VERSION>`
- [GitHub Container Registry](https://github.com/juanfont/headscale/pkgs/container/headscale):
`ghcr.io/juanfont/headscale:<VERSION>`
## Configure and run headscale
1. Create a directory on the container host to store headscale's [configuration](../../ref/configuration.md) and the [SQLite](https://www.sqlite.org/) database:
```shell
mkdir -p ./headscale/{config,lib}
cd ./headscale
```
1. Download the example configuration for your chosen version and save it as: `$(pwd)/config/config.yaml`. Adjust the
configuration to suit your local environment. See [Configuration](../../ref/configuration.md) for details.
1. Start headscale from within the previously created `./headscale` directory:
```shell
docker run \
--name headscale \
--detach \
--read-only \
--tmpfs /var/run/headscale \
--volume "$(pwd)/config:/etc/headscale:ro" \
--volume "$(pwd)/lib:/var/lib/headscale" \
--publish 127.0.0.1:8080:8080 \
--publish 127.0.0.1:9090:9090 \
--health-cmd "headscale health" \
docker.io/headscale/headscale:<VERSION> \
serve
```
Note: use `0.0.0.0:8080:8080` instead of `127.0.0.1:8080:8080` if you want to expose the container externally.
This command mounts the local directories inside the container, forwards ports 8080 and 9090 out of the container so
the headscale instance becomes available and then detaches so headscale runs in the background.
A similar configuration for `docker-compose`:
```yaml title="docker-compose.yaml"
services:
headscale:
image: docker.io/headscale/headscale:<VERSION>
restart: unless-stopped
container_name: headscale
read_only: true
tmpfs:
- /var/run/headscale
ports:
- "127.0.0.1:8080:8080"
- "127.0.0.1:9090:9090"
volumes:
# Please set <HEADSCALE_PATH> to the absolute path
# of the previously created headscale directory.
- <HEADSCALE_PATH>/config:/etc/headscale:ro
- <HEADSCALE_PATH>/lib:/var/lib/headscale
command: serve
healthcheck:
test: ["CMD", "headscale", "health"]
```
1. Verify headscale is running:
Follow the container logs:
```shell
docker logs --follow headscale
```
Verify running containers:
```shell
docker ps
```
Verify headscale is available:
```shell
curl http://127.0.0.1:8080/health
```
Continue on the [getting started page](../../usage/getting-started.md) to register your first machine.
## Debugging headscale running in Docker
The Headscale container image is based on a "distroless" image that does not contain a shell or any other debug tools. If you need to debug headscale running in the Docker container, you can use the `-debug` variant, for example `docker.io/headscale/headscale:x.x.x-debug`.
### Running the debug Docker container
To run the debug Docker container, use the exact same commands as above, but replace `docker.io/headscale/headscale:x.x.x` with `docker.io/headscale/headscale:x.x.x-debug` (`x.x.x` is the version of headscale). The two containers are compatible with each other, so you can alternate between them.
### Executing commands in the debug container
The default command in the debug container is to run `headscale`, which is located at `/ko-app/headscale` inside the container.
Additionally, the debug container includes a minimalist Busybox shell.
To launch a shell in the container, use:
```shell
docker run -it docker.io/headscale/headscale:x.x.x-debug sh
```
You can also execute commands directly, such as `ls /ko-app` in this example:
```shell
docker run docker.io/headscale/headscale:x.x.x-debug ls /ko-app
```
Using `docker exec -it` allows you to run commands in an existing container.
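For example, assuming a running debug container named `headscale` as in the setup above:
```shell
# Open a Busybox shell in the running container
docker exec -it headscale sh
# Or run a single command directly
docker exec headscale /ko-app/headscale version
```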
================================================
FILE: docs/setup/install/official.md
================================================
# Official releases
Official releases for headscale are available as binaries for various platforms and DEB packages for Debian and Ubuntu.
Both are available on the [GitHub releases page](https://github.com/juanfont/headscale/releases).
## Using packages for Debian/Ubuntu (recommended)
It is recommended to use our DEB packages to install headscale on a Debian based system as those packages configure a
local user to run headscale, provide a default configuration and ship with a systemd service file. Supported
distributions are Ubuntu 22.04 or newer, Debian 12 or newer.
1. Download the [latest headscale package](https://github.com/juanfont/headscale/releases/latest) for your platform (`.deb` for Ubuntu and Debian).
```shell
HEADSCALE_VERSION="" # See above URL for latest version, e.g. "X.Y.Z" (NOTE: do not add the "v" prefix!)
HEADSCALE_ARCH="" # Your system architecture, e.g. "amd64"
wget --output-document=headscale.deb \
"https://github.com/juanfont/headscale/releases/download/v${HEADSCALE_VERSION}/headscale_${HEADSCALE_VERSION}_linux_${HEADSCALE_ARCH}.deb"
```
1. Install headscale:
```shell
sudo apt install ./headscale.deb
```
1. [Configure headscale by editing the configuration file](../../ref/configuration.md):
```shell
sudo nano /etc/headscale/config.yaml
```
1. Enable and start the headscale service:
```shell
sudo systemctl enable --now headscale
```
1. Verify that headscale is running as intended:
```shell
sudo systemctl status headscale
```
Continue on the [getting started page](../../usage/getting-started.md) to register your first machine.
## Using standalone binaries (advanced)
!!! warning "Advanced"
This installation method is considered advanced as one needs to take care of the local user and the systemd
service themselves. If possible, use the [DEB packages](#using-packages-for-debianubuntu-recommended) or a
[community package](./community.md) instead.
This section describes the installation of headscale according to the [Requirements and
assumptions](../requirements.md#assumptions). Headscale is run by a dedicated local user and the service itself is
managed by systemd.
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
sudo wget --output-document=/usr/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<VERSION>/headscale_<VERSION>_linux_<ARCH>
```
1. Make `headscale` executable:
```shell
sudo chmod +x /usr/bin/headscale
```
1. Add a dedicated local user to run headscale:
```shell
sudo useradd \
--create-home \
--home-dir /var/lib/headscale/ \
--system \
--user-group \
--shell /usr/sbin/nologin \
headscale
```
1. Download the example configuration for your chosen version and save it as: `/etc/headscale/config.yaml`. Adjust the
configuration to suit your local environment. See [Configuration](../../ref/configuration.md) for details.
```shell
sudo mkdir -p /etc/headscale
sudo nano /etc/headscale/config.yaml
```
1. Copy [headscale's systemd service file](https://github.com/juanfont/headscale/blob/main/packaging/systemd/headscale.service)
to `/etc/systemd/system/headscale.service` and adjust it to suit your local setup. The following parameters likely need
to be modified: `ExecStart`, `WorkingDirectory`, `ReadWritePaths`.
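As a sketch that matches the paths used in this guide, the adjusted values might look like:
```ini title="headscale.service (excerpt)"
[Service]
ExecStart=/usr/bin/headscale serve
WorkingDirectory=/var/lib/headscale
ReadWritePaths=/var/lib/headscale /var/run/headscale
```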
1. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with a path that is writable by the
`headscale` user or group:
```yaml title="config.yaml"
unix_socket: /var/run/headscale/headscale.sock
```
1. Reload systemd to load the new configuration file:
```shell
systemctl daemon-reload
```
1. Enable and start the new headscale service:
```shell
systemctl enable --now headscale
```
1. Verify that headscale is running as intended:
```shell
systemctl status headscale
```
Continue on the [getting started page](../../usage/getting-started.md) to register your first machine.
================================================
FILE: docs/setup/install/source.md
================================================
# Build from source
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by headscale developers.
**It might be outdated and it might miss necessary steps**.
Headscale can be built from source using the latest version of [Go](https://golang.org) and [Buf](https://buf.build)
(Protobuf generator). See the [Contributing section in the GitHub
README](https://github.com/juanfont/headscale#contributing) for more information.
## OpenBSD
### Install from source
```shell
# Install prerequisites
pkg_add go git
git clone https://github.com/juanfont/headscale.git
cd headscale
# optionally checkout a release
# option a. you can find official release at https://github.com/juanfont/headscale/releases/latest
# option b. get latest tag, this may be a beta release
latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
git checkout $latestTag
go build -ldflags="-s -w -X github.com/juanfont/headscale/hscontrol/types.Version=$latestTag -X github.com/juanfont/headscale/hscontrol/types.GitCommitHash=HASH" github.com/juanfont/headscale
# make it executable
chmod a+x headscale
# copy it to /usr/local/sbin
cp headscale /usr/local/sbin
```
### Install from source via cross compile
```shell
# Install prerequisites
# 1. go v1.20+: headscale newer than 0.21 needs go 1.20+ to compile
# 2. gmake: Makefile in the headscale repo is written in GNU make syntax
git clone https://github.com/juanfont/headscale.git
cd headscale
# optionally checkout a release
# option a. you can find official release at https://github.com/juanfont/headscale/releases/latest
# option b. get latest tag, this may be a beta release
latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
git checkout $latestTag
make build GOOS=openbsd
# copy headscale to openbsd machine and put it in /usr/local/sbin
```
================================================
FILE: docs/setup/requirements.md
================================================
# Requirements
Headscale should just work as long as the following requirements are met:
- A server with a public IP address for headscale. A dual-stack setup with a public IPv4 and a public IPv6 address is
recommended.
- Headscale is served via HTTPS on port 443[^1] and [may use additional ports](#ports-in-use).
- A reasonably modern Linux or BSD based operating system.
- A dedicated local user account to run headscale.
- A little bit of command line knowledge to configure and operate headscale.
## Ports in use
The ports in use vary with the intended scenario and enabled features. Some of the listed ports may be changed via the
[configuration file](../ref/configuration.md) but we recommend sticking with the default values. A firewall sketch
follows the list below.
- tcp/80
- Expose publicly: yes
- HTTP, used by Let's Encrypt to verify ownership via the HTTP-01 challenge.
- Only required if the built-in Let's Encrypt client with the HTTP-01 challenge is used. See [TLS](../ref/tls.md) for
details.
- tcp/443
- Expose publicly: yes
- HTTPS, required to make Headscale available to Tailscale clients[^1]
- Required if the [embedded DERP server](../ref/derp.md) is enabled
- udp/3478
- Expose publicly: yes
- STUN, required if the [embedded DERP server](../ref/derp.md) is enabled
- tcp/50443
- Expose publicly: yes
- Only required if the gRPC interface is used to [remote-control Headscale](../ref/api.md#grpc).
- tcp/9090
- Expose publicly: no
- [Metrics and debug endpoint](../ref/debug.md#metrics-and-debug-endpoint)
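As an illustration, opening the public-facing defaults with `ufw` might look like the following sketch; only open the ports your setup actually uses:
```console
$ sudo ufw allow 80/tcp   # Let's Encrypt HTTP-01
$ sudo ufw allow 443/tcp  # HTTPS
$ sudo ufw allow 3478/udp # STUN for the embedded DERP server
```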
## Assumptions
The headscale documentation and the provided examples are written with a few assumptions in mind:
- Headscale is running as system service via a dedicated local user `headscale`.
- The [configuration](../ref/configuration.md) is loaded from `/etc/headscale/config.yaml`.
- SQLite is used as database.
- The data directory for headscale (used for private keys, ACLs, SQLite database, …) is located in `/var/lib/headscale`.
- URLs and values that need to be replaced by the user are either denoted as `` or use placeholder
values such as `headscale.example.com`.
Please adjust to your local environment accordingly.
[^1]: The Tailscale client assumes HTTPS on port 443 in certain situations. Serving headscale either via HTTP or via
HTTPS on a port other than 443 is possible but sticking with HTTPS on port 443 is strongly recommended for
production setups. See [issue 2164](https://github.com/juanfont/headscale/issues/2164) for more information.
================================================
FILE: docs/setup/upgrade.md
================================================
# Upgrade an existing installation
!!! tip "Required update path"
It's required to update from one stable version to the next (e.g. 0.26.0 → 0.27.1 → 0.28.0) without skipping minor
versions in between. You should always pick the latest available patch release.
Update an existing Headscale installation to a new version:
- Read the announcement on the [GitHub releases](https://github.com/juanfont/headscale/releases) page for the new
version. It lists the changes of the release along with possible breaking changes and version-specific upgrade
instructions.
- Stop Headscale
- **[Create a backup of your installation](#backup)**
- Update Headscale to the new version, preferably by following the same installation method.
- Compare and update the [configuration](../ref/configuration.md) file.
- Start Headscale
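A minimal sketch of these steps on a Debian-based installation managed by systemd (assuming the new release's `.deb` package has already been downloaded; `<VERSION>` is a placeholder):
```console
sudo systemctl stop headscale
# Create a backup first, see below
sudo apt install ./headscale_<VERSION>_amd64.deb
# Compare and update /etc/headscale/config.yaml
sudo systemctl start headscale
```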
## Backup
Headscale applies database migrations during upgrades and we highly recommend creating a backup of your database before
upgrading. A full backup of Headscale depends on your individual setup, but below are some typical setup scenarios.
=== "Standard installation"
An installation that follows our [official releases](install/official.md) setup guide uses the following paths:
- [Configuration file](../ref/configuration.md): `/etc/headscale/config.yaml`
- Data directory: `/var/lib/headscale`
- SQLite as database: `/var/lib/headscale/db.sqlite`
```console
TIMESTAMP=$(date +%Y%m%d%H%M%S)
cp -aR /etc/headscale /etc/headscale.backup-$TIMESTAMP
cp -aR /var/lib/headscale /var/lib/headscale.backup-$TIMESTAMP
```
=== "Container"
An installation that follows our [container](install/container.md) setup guide uses a single volume directory that
contains the configuration file, data directory and the SQLite database.
```console
cp -aR /path/to/headscale /path/to/headscale.backup-$(date +%Y%m%d%H%M%S)
```
=== "PostgreSQL"
Please follow PostgreSQL's [Backup and Restore](https://www.postgresql.org/docs/current/backup.html) documentation
to create a backup of your PostgreSQL database.
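As a sketch, a plain SQL dump with `pg_dump` (assuming the database is named `headscale` and the `postgres` superuser may access it; adjust names and connection options to your setup):
```console
sudo -u postgres pg_dump headscale > headscale-$(date +%Y%m%d%H%M%S).sql
```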
================================================
FILE: docs/usage/connect/android.md
================================================
# Connecting an Android client
This page shows how to use the official Android [Tailscale](https://tailscale.com) client with headscale.
## Installation
Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/).
## Connect via web authentication
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- Tap the kebab menu icon (three dots) in the upper-right corner and select `Use an alternate server`
- Enter your server URL (e.g `https://headscale.example.com`) and follow the instructions
- The client connects automatically as soon as the node registration is complete on headscale. Until then, nothing is
visible in the server logs.
## Connect using a pre authenticated key
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- Tap the kebab menu icon (three dots) in the upper-right corner and select `Use an alternate server`
- Enter your server URL (e.g `https://headscale.example.com`). If a login prompt opens, close it and continue
- Open the settings menu in the upper-right corner
- Tap on `Accounts`
- Tap the kebab menu icon (three dots) in the upper-right corner and select `Use an auth key`
- Enter your [preauthkey generated from headscale](../../ref/registration.md#pre-authenticated-key)
- If needed, tap `Log in` on the main screen. You should now be connected to your headscale.
================================================
FILE: docs/usage/connect/apple.md
================================================
# Connecting an Apple client
This page shows how to use the official iOS and macOS [Tailscale](https://tailscale.com) clients with headscale.
!!! info "Instructions on your headscale instance"
An endpoint with information on how to connect your Apple device
is also available at `/apple` on your running instance.
## iOS
### Installation
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
### Configuring the headscale URL
- Open the Tailscale app
- Tap the account icon in the top-right corner and select `Log in…`.
- Tap the top-right options menu button and select `Use custom coordination server`.
- Enter your instance URL (e.g `https://headscale.example.com`)
- Enter your credentials and log in. Headscale should now be working on your iOS device.
## macOS
### Installation
Choose one of the available [Tailscale clients for macOS](https://tailscale.com/kb/1065/macos-variants) and install it.
### Configuring the headscale URL
#### Command line
Use Tailscale's login command to connect with your headscale instance (e.g `https://headscale.example.com`):
```
tailscale login --login-server
```
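For example, using the placeholder domain from these docs:
```
tailscale login --login-server https://headscale.example.com
```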
#### GUI
- Option + Click the Tailscale icon in the menu and hover over the Debug menu
- Under `Custom Login Server`, select `Add Account...`
- Enter the URL of your headscale instance (e.g `https://headscale.example.com`) and press `Add Account`
- Follow the login procedure in the browser
## tvOS
### Installation
Install the official Tailscale tvOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
!!! danger
**Don't** open the Tailscale App after installation!
### Configuring the headscale URL
- Open Settings (the Apple tvOS settings) > Apps > Tailscale
- Under `ALTERNATE COORDINATION SERVER URL`, select `URL`
- Enter the URL of your headscale instance (e.g `https://headscale.example.com`) and press `OK`
- Return to the tvOS Home screen
- Open Tailscale
- Click the `Install VPN configuration` button and confirm the popup that appears by clicking the `Allow` button
- Scan the QR code and follow the login procedure
================================================
FILE: docs/usage/connect/windows.md
================================================
# Connecting a Windows client
This page shows how to use the official Windows [Tailscale](https://tailscale.com) client with headscale.
!!! info "Instructions on your headscale instance"
An endpoint with information on how to connect your Windows device
is also available at `/windows` on your running instance.
## Installation
Download the [Official Windows Client](https://tailscale.com/download/windows) and install it.
## Configuring the headscale URL
Open a Command Prompt or PowerShell and use Tailscale's login command to connect with your headscale instance (e.g
`https://headscale.example.com`):
```
tailscale login --login-server
```
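For example, using the placeholder domain from above:
```
tailscale login --login-server https://headscale.example.com
```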
Follow the instructions in the opened browser window to finish the configuration.
## Troubleshooting
### Unattended mode
By default, Tailscale's Windows client only runs while the user is logged in. If you want to keep Tailscale running
all the time, enable "Unattended mode":
- Click on the Tailscale tray icon and select `Preferences`
- Enable `Run unattended`
- Confirm the "Unattended mode" message
See also [Keep Tailscale running when I'm not logged in to my computer](https://tailscale.com/kb/1088/run-unattended)
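Recent Windows clients can likely also enable this from the command line; a sketch assuming the `--unattended` flag of `tailscale up` is available in your version:
```
tailscale up --unattended
```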
### Failing node registration
If you are seeing repeated messages like:
```
[GIN] 2022/02/10 - 16:39:34 | 200 | 1.105306ms | 127.0.0.1 | POST "/machine/redacted"
```
in your headscale output, turn on `DEBUG` logging and look for:
```
2022-02-11T00:59:29Z DBG Machine registration has expired. Sending a authurl to register machine=redacted
```
This typically means that the required registry keys were not set appropriately.
To reset and try again, do the following (a PowerShell sketch follows the list):
1. Shut down the Tailscale service (or the client running in the tray)
1. Delete the Tailscale application data folder, located at `C:\Users\\AppData\Local\Tailscale`, and try to connect again.
1. Ensure the Windows node is deleted from headscale (to ensure fresh setup)
1. Start Tailscale on the Windows machine and retry the login.
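A sketch of this reset in PowerShell (assuming the Windows service is named `Tailscale`; the node ID and the server-side delete command should be verified against your headscale version):
```
Stop-Service Tailscale
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Tailscale"
# On the headscale server, delete the node, e.g.: headscale nodes delete --identifier <NODE_ID>
Start-Service Tailscale
tailscale login --login-server https://headscale.example.com
```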
================================================
FILE: docs/usage/getting-started.md
================================================
# Getting started
This page helps you get started with headscale and provides a few usage examples for the headscale command line tool
`headscale`.
!!! note "Prerequisites"
- Headscale is installed and running as system service. Read the [setup section](../setup/requirements.md) for
installation instructions.
- The configuration file exists and is adjusted to suit your environment, see
[Configuration](../ref/configuration.md) for details.
- Headscale is reachable from the Internet. Verify this by visiting the health endpoint (see the curl sketch after
this list): https://headscale.example.com/health
- The Tailscale client is installed, see [Client and operating system support](../about/clients.md) for more
information.
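A quick check from any machine with `curl`:
```shell
curl -sf https://headscale.example.com/health
```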
## Getting help
The `headscale` command line tool provides built-in help. To show available commands along with their arguments and
options, run:
=== "Native"
```shell
# Show help
headscale help
# Show help for a specific command
headscale --help
```
=== "Container"
```shell
# Show help
docker exec -it headscale \
headscale help
# Show help for a specific command
docker exec -it headscale \
headscale --help
```
!!! note "Manage headscale from another local user"
By default only the user `headscale` or `root` will have the necessary permissions to access the unix socket
(`/var/run/headscale/headscale.sock`) that is used to communicate with the service. To communicate with the
headscale service, make sure the unix socket is accessible to the user running the commands. In general, you can
achieve this with any of the following methods:
- using `sudo`
- run the commands as user `headscale`
- add your user to the `headscale` group
To verify you can run the following command using your preferred method:
```shell
headscale users list
```
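A sketch of the three methods (group membership additionally requires the socket to be group-accessible):
```shell
sudo headscale users list                # use sudo
sudo -u headscale headscale users list   # run as user headscale
sudo usermod -aG headscale "$USER"       # join the headscale group, then log in again
```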
## Manage headscale users
In headscale, a node (also known as machine or device) is [typically assigned to a headscale
user](../ref/registration.md#identity-model). Such a headscale user may have many nodes assigned to them and can be
managed with the `headscale users` command. Invoke the built-in help for more information: `headscale users --help`.
### Create a headscale user
=== "Native"
```shell
headscale users create
```
=== "Container"
```shell
docker exec -it headscale \
headscale users create
```
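For example, with the placeholder user name `alice`:
```shell
headscale users create alice
```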
### List existing headscale users
=== "Native"
```shell
headscale users list
```
=== "Container"
```shell
docker exec -it headscale \
headscale users list
```
## Register a node
To use headscale as a coordination server with Tailscale, one first has to [register a node](../ref/registration.md).
The following examples work for the Tailscale client on Linux/BSD operating systems. Alternatively, follow the instructions
to connect [Android](connect/android.md), [Apple](connect/apple.md) or [Windows](connect/windows.md) devices. Read
[registration methods](../ref/registration.md) for an overview of available registration methods.
### [Web authentication](../ref/registration.md#web-authentication)
On a client machine, run the `tailscale up` command and provide the FQDN of your headscale instance as argument:
```shell
tailscale up --login-server
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
your headscale server and prints the Auth ID required to approve the node:
=== "Native"
```shell
headscale auth register --user --auth-id
```
=== "Container"
```shell
docker exec -it headscale \
headscale auth register --user --auth-id
```
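A sketch of the full flow with hypothetical placeholder values (`alice` and `<AUTH_ID>`):
```shell
# On the client
tailscale up --login-server https://headscale.example.com
# On the headscale server, using the Auth ID shown in the browser
headscale auth register --user alice --auth-id <AUTH_ID>
```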
### [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
It is also possible to generate a preauthkey and register a node non-interactively. First, generate a preauthkey on the
headscale instance. By default, the key is valid for one hour and can only be used once (see `headscale preauthkeys --help` for other options):
=== "Native"
```shell
headscale preauthkeys create --user
```
=== "Container"
```shell
docker exec -it headscale \
headscale preauthkeys create --user
```
On success, the command prints the preauthkey, which is used to connect a node to the headscale instance via the
`tailscale up` command:
```shell
tailscale up --login-server --authkey
```
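A sketch of the full flow with hypothetical placeholder values (`alice` and `<GENERATED_KEY>`):
```shell
headscale preauthkeys create --user alice
tailscale up --login-server https://headscale.example.com --authkey <GENERATED_KEY>
```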
================================================
FILE: flake.nix
================================================
{
description = "headscale - Open Source Tailscale Control server";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
outputs =
{ self
, nixpkgs
, flake-utils
, ...
}:
let
headscaleVersion = self.shortRev or self.dirtyShortRev;
commitHash = self.rev or self.dirtyRev;
in
{
# NixOS module
nixosModules = rec {
headscale = import ./nix/module.nix;
default = headscale;
};
overlays.default = _: prev:
let
pkgs = nixpkgs.legacyPackages.${prev.stdenv.hostPlatform.system};
buildGo = pkgs.buildGo126Module;
vendorHash = "sha256-jom1279Lx2Knff93rfoEgGeBBk+EjJO7GAkaQYlchgY=";
in
{
headscale = buildGo {
pname = "headscale";
version = headscaleVersion;
src = pkgs.lib.cleanSource self;
# Only run unit tests when testing a build
checkFlags = [ "-short" ];
# When updating go.mod or go.sum, a new sha will need to be calculated,
# update this if you have a mismatch after doing a change to those files.
inherit vendorHash;
subPackages = [ "cmd/headscale" ];
meta = {
mainProgram = "headscale";
};
};
hi = buildGo {
pname = "hi";
version = headscaleVersion;
src = pkgs.lib.cleanSource self;
checkFlags = [ "-short" ];
inherit vendorHash;
subPackages = [ "cmd/hi" ];
};
protoc-gen-grpc-gateway = buildGo rec {
pname = "grpc-gateway";
version = "2.27.7";
src = pkgs.fetchFromGitHub {
owner = "grpc-ecosystem";
repo = "grpc-gateway";
rev = "v${version}";
sha256 = "sha256-6R0EhNnOBEISJddjkbVTcBvUuU5U3r9Hu2UPfAZDep4=";
};
vendorHash = "sha256-SOAbRrzMf2rbKaG9PGSnPSLY/qZVgbHcNjOLmVonycY=";
nativeBuildInputs = [ pkgs.installShellFiles ];
subPackages = [ "protoc-gen-grpc-gateway" "protoc-gen-openapiv2" ];
};
protobuf-language-server = buildGo rec {
pname = "protobuf-language-server";
version = "1cf777d";
src = pkgs.fetchFromGitHub {
owner = "lasorda";
repo = "protobuf-language-server";
rev = "1cf777de4d35a6e493a689e3ca1a6183ce3206b6";
sha256 = "sha256-9MkBQPxr/TDr/sNz/Sk7eoZwZwzdVbE5u6RugXXk5iY=";
};
vendorHash = "sha256-4nTpKBe7ekJsfQf+P6edT/9Vp2SBYbKz1ITawD3bhkI=";
subPackages = [ "." ];
};
# Build golangci-lint with Go 1.26 (upstream uses hardcoded Go version)
golangci-lint = buildGo rec {
pname = "golangci-lint";
version = "2.9.0";
src = pkgs.fetchFromGitHub {
owner = "golangci";
repo = "golangci-lint";
rev = "v${version}";
hash = "sha256-8LEtm1v0slKwdLBtS41OilKJLXytSxcI9fUlZbj5Gfw=";
};
vendorHash = "sha256-w8JfF6n1ylrU652HEv/cYdsOdDZz9J2uRQDqxObyhkY=";
subPackages = [ "cmd/golangci-lint" ];
nativeBuildInputs = [ pkgs.installShellFiles ];
ldflags = [
"-s"
"-w"
"-X main.version=${version}"
"-X main.commit=v${version}"
"-X main.date=1970-01-01T00:00:00Z"
];
postInstall = ''
for shell in bash zsh fish; do
HOME=$TMPDIR $out/bin/golangci-lint completion $shell > golangci-lint.$shell
installShellCompletion golangci-lint.$shell
done
'';
meta = {
description = "Fast linters runner for Go";
homepage = "https://golangci-lint.run/";
changelog = "https://github.com/golangci/golangci-lint/blob/v${version}/CHANGELOG.md";
mainProgram = "golangci-lint";
};
};
gotestsum = prev.gotestsum.override {
buildGoModule = buildGo;
};
gotests = prev.gotests.override {
buildGoModule = buildGo;
};
gofumpt = prev.gofumpt.override {
buildGoModule = buildGo;
};
gopls = prev.gopls.override {
buildGoLatestModule = buildGo;
};
};
}
// flake-utils.lib.eachDefaultSystem
(system:
let
pkgs = import nixpkgs {
overlays = [ self.overlays.default ];
inherit system;
};
buildDeps = with pkgs; [ git go_1_26 gnumake ];
devDeps = with pkgs;
buildDeps
++ [
golangci-lint
golangci-lint-langserver
golines
nodePackages.prettier
nixpkgs-fmt
goreleaser
nfpm
gotestsum
gotests
gofumpt
gopls
ksh
ko
yq-go
ripgrep
postgresql
python314Packages.mdformat
python314Packages.mdformat-footnote
python314Packages.mdformat-frontmatter
python314Packages.mdformat-mkdocs
prek
# 'dot' is needed for pprof graphs
# go tool pprof -http=:
graphviz
# Protobuf dependencies
protobuf
protoc-gen-go
protoc-gen-go-grpc
protoc-gen-grpc-gateway
buf
clang-tools # clang-format
protobuf-language-server
]
++ lib.optional pkgs.stdenv.isLinux [ traceroute ];
# Add entry to build a docker image with headscale
# caveat: only works on Linux
#
# Usage:
# nix build .#headscale-docker
# docker load < result
headscale-docker = pkgs.dockerTools.buildLayeredImage {
name = "headscale";
tag = headscaleVersion;
contents = [ pkgs.headscale ];
config.Entrypoint = [ (pkgs.headscale + "/bin/headscale") ];
};
in
{
# `nix develop`
devShells.default = pkgs.mkShell {
buildInputs =
devDeps
++ [
(pkgs.writeShellScriptBin
"nix-vendor-sri"
''
set -eu
OUT=$(mktemp -d -t nar-hash-XXXXXX)
rm -rf "$OUT"
go mod vendor -o "$OUT"
go run tailscale.com/cmd/nardump --sri "$OUT"
rm -rf "$OUT"
'')
(pkgs.writeShellScriptBin
"go-mod-update-all"
''
cat go.mod | ${pkgs.silver-searcher}/bin/ag "\t" | ${pkgs.silver-searcher}/bin/ag -v indirect | ${pkgs.gawk}/bin/awk '{print $1}' | ${pkgs.findutils}/bin/xargs go get -u
go mod tidy
'')
];
shellHook = ''
export PATH="$PWD/result/bin:$PATH"
export CGO_ENABLED=0
'';
};
# `nix build`
packages = with pkgs; {
inherit headscale;
inherit headscale-docker;
default = headscale;
};
# `nix run`
apps.headscale = flake-utils.lib.mkApp {
drv = pkgs.headscale;
};
apps.default = flake-utils.lib.mkApp {
drv = pkgs.headscale;
};
checks = {
headscale = pkgs.testers.nixosTest (import ./nix/tests/headscale.nix);
};
});
}
================================================
FILE: gen/go/headscale/v1/apikey.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/apikey.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type ApiKey struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Prefix string `protobuf:"bytes,2,opt,name=prefix,proto3" json:"prefix,omitempty"`
Expiration *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=expiration,proto3" json:"expiration,omitempty"`
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
LastSeen *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=last_seen,json=lastSeen,proto3" json:"last_seen,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ApiKey) Reset() {
*x = ApiKey{}
mi := &file_headscale_v1_apikey_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ApiKey) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ApiKey) ProtoMessage() {}
func (x *ApiKey) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ApiKey.ProtoReflect.Descriptor instead.
func (*ApiKey) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{0}
}
func (x *ApiKey) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *ApiKey) GetPrefix() string {
if x != nil {
return x.Prefix
}
return ""
}
func (x *ApiKey) GetExpiration() *timestamppb.Timestamp {
if x != nil {
return x.Expiration
}
return nil
}
func (x *ApiKey) GetCreatedAt() *timestamppb.Timestamp {
if x != nil {
return x.CreatedAt
}
return nil
}
func (x *ApiKey) GetLastSeen() *timestamppb.Timestamp {
if x != nil {
return x.LastSeen
}
return nil
}
type CreateApiKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Expiration *timestamppb.Timestamp `protobuf:"bytes,1,opt,name=expiration,proto3" json:"expiration,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreateApiKeyRequest) Reset() {
*x = CreateApiKeyRequest{}
mi := &file_headscale_v1_apikey_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreateApiKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateApiKeyRequest) ProtoMessage() {}
func (x *CreateApiKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateApiKeyRequest.ProtoReflect.Descriptor instead.
func (*CreateApiKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{1}
}
func (x *CreateApiKeyRequest) GetExpiration() *timestamppb.Timestamp {
if x != nil {
return x.Expiration
}
return nil
}
type CreateApiKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
ApiKey string `protobuf:"bytes,1,opt,name=api_key,json=apiKey,proto3" json:"api_key,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreateApiKeyResponse) Reset() {
*x = CreateApiKeyResponse{}
mi := &file_headscale_v1_apikey_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreateApiKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateApiKeyResponse) ProtoMessage() {}
func (x *CreateApiKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateApiKeyResponse.ProtoReflect.Descriptor instead.
func (*CreateApiKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{2}
}
func (x *CreateApiKeyResponse) GetApiKey() string {
if x != nil {
return x.ApiKey
}
return ""
}
type ExpireApiKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Prefix string `protobuf:"bytes,1,opt,name=prefix,proto3" json:"prefix,omitempty"`
Id uint64 `protobuf:"varint,2,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ExpireApiKeyRequest) Reset() {
*x = ExpireApiKeyRequest{}
mi := &file_headscale_v1_apikey_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ExpireApiKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpireApiKeyRequest) ProtoMessage() {}
func (x *ExpireApiKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpireApiKeyRequest.ProtoReflect.Descriptor instead.
func (*ExpireApiKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{3}
}
func (x *ExpireApiKeyRequest) GetPrefix() string {
if x != nil {
return x.Prefix
}
return ""
}
func (x *ExpireApiKeyRequest) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
type ExpireApiKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ExpireApiKeyResponse) Reset() {
*x = ExpireApiKeyResponse{}
mi := &file_headscale_v1_apikey_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ExpireApiKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpireApiKeyResponse) ProtoMessage() {}
func (x *ExpireApiKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpireApiKeyResponse.ProtoReflect.Descriptor instead.
func (*ExpireApiKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{4}
}
type ListApiKeysRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListApiKeysRequest) Reset() {
*x = ListApiKeysRequest{}
mi := &file_headscale_v1_apikey_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListApiKeysRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListApiKeysRequest) ProtoMessage() {}
func (x *ListApiKeysRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListApiKeysRequest.ProtoReflect.Descriptor instead.
func (*ListApiKeysRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{5}
}
type ListApiKeysResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
ApiKeys []*ApiKey `protobuf:"bytes,1,rep,name=api_keys,json=apiKeys,proto3" json:"api_keys,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListApiKeysResponse) Reset() {
*x = ListApiKeysResponse{}
mi := &file_headscale_v1_apikey_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListApiKeysResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListApiKeysResponse) ProtoMessage() {}
func (x *ListApiKeysResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListApiKeysResponse.ProtoReflect.Descriptor instead.
func (*ListApiKeysResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{6}
}
func (x *ListApiKeysResponse) GetApiKeys() []*ApiKey {
if x != nil {
return x.ApiKeys
}
return nil
}
type DeleteApiKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Prefix string `protobuf:"bytes,1,opt,name=prefix,proto3" json:"prefix,omitempty"`
Id uint64 `protobuf:"varint,2,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteApiKeyRequest) Reset() {
*x = DeleteApiKeyRequest{}
mi := &file_headscale_v1_apikey_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteApiKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteApiKeyRequest) ProtoMessage() {}
func (x *DeleteApiKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteApiKeyRequest.ProtoReflect.Descriptor instead.
func (*DeleteApiKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{7}
}
func (x *DeleteApiKeyRequest) GetPrefix() string {
if x != nil {
return x.Prefix
}
return ""
}
func (x *DeleteApiKeyRequest) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
type DeleteApiKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteApiKeyResponse) Reset() {
*x = DeleteApiKeyResponse{}
mi := &file_headscale_v1_apikey_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteApiKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteApiKeyResponse) ProtoMessage() {}
func (x *DeleteApiKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteApiKeyResponse.ProtoReflect.Descriptor instead.
func (*DeleteApiKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{8}
}
var File_headscale_v1_apikey_proto protoreflect.FileDescriptor
const file_headscale_v1_apikey_proto_rawDesc = "" +
"\n" +
"\x19headscale/v1/apikey.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\"\xe0\x01\n" +
"\x06ApiKey\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\x12\x16\n" +
"\x06prefix\x18\x02 \x01(\tR\x06prefix\x12:\n" +
"\n" +
"expiration\x18\x03 \x01(\v2\x1a.google.protobuf.TimestampR\n" +
"expiration\x129\n" +
"\n" +
"created_at\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\tcreatedAt\x127\n" +
"\tlast_seen\x18\x05 \x01(\v2\x1a.google.protobuf.TimestampR\blastSeen\"Q\n" +
"\x13CreateApiKeyRequest\x12:\n" +
"\n" +
"expiration\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\n" +
"expiration\"/\n" +
"\x14CreateApiKeyResponse\x12\x17\n" +
"\aapi_key\x18\x01 \x01(\tR\x06apiKey\"=\n" +
"\x13ExpireApiKeyRequest\x12\x16\n" +
"\x06prefix\x18\x01 \x01(\tR\x06prefix\x12\x0e\n" +
"\x02id\x18\x02 \x01(\x04R\x02id\"\x16\n" +
"\x14ExpireApiKeyResponse\"\x14\n" +
"\x12ListApiKeysRequest\"F\n" +
"\x13ListApiKeysResponse\x12/\n" +
"\bapi_keys\x18\x01 \x03(\v2\x14.headscale.v1.ApiKeyR\aapiKeys\"=\n" +
"\x13DeleteApiKeyRequest\x12\x16\n" +
"\x06prefix\x18\x01 \x01(\tR\x06prefix\x12\x0e\n" +
"\x02id\x18\x02 \x01(\x04R\x02id\"\x16\n" +
"\x14DeleteApiKeyResponseB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_apikey_proto_rawDescOnce sync.Once
file_headscale_v1_apikey_proto_rawDescData []byte
)
func file_headscale_v1_apikey_proto_rawDescGZIP() []byte {
file_headscale_v1_apikey_proto_rawDescOnce.Do(func() {
file_headscale_v1_apikey_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_apikey_proto_rawDesc), len(file_headscale_v1_apikey_proto_rawDesc)))
})
return file_headscale_v1_apikey_proto_rawDescData
}
var file_headscale_v1_apikey_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_headscale_v1_apikey_proto_goTypes = []any{
(*ApiKey)(nil), // 0: headscale.v1.ApiKey
(*CreateApiKeyRequest)(nil), // 1: headscale.v1.CreateApiKeyRequest
(*CreateApiKeyResponse)(nil), // 2: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyRequest)(nil), // 3: headscale.v1.ExpireApiKeyRequest
(*ExpireApiKeyResponse)(nil), // 4: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysRequest)(nil), // 5: headscale.v1.ListApiKeysRequest
(*ListApiKeysResponse)(nil), // 6: headscale.v1.ListApiKeysResponse
(*DeleteApiKeyRequest)(nil), // 7: headscale.v1.DeleteApiKeyRequest
(*DeleteApiKeyResponse)(nil), // 8: headscale.v1.DeleteApiKeyResponse
(*timestamppb.Timestamp)(nil), // 9: google.protobuf.Timestamp
}
var file_headscale_v1_apikey_proto_depIdxs = []int32{
9, // 0: headscale.v1.ApiKey.expiration:type_name -> google.protobuf.Timestamp
9, // 1: headscale.v1.ApiKey.created_at:type_name -> google.protobuf.Timestamp
9, // 2: headscale.v1.ApiKey.last_seen:type_name -> google.protobuf.Timestamp
9, // 3: headscale.v1.CreateApiKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.ListApiKeysResponse.api_keys:type_name -> headscale.v1.ApiKey
5, // [5:5] is the sub-list for method output_type
5, // [5:5] is the sub-list for method input_type
5, // [5:5] is the sub-list for extension type_name
5, // [5:5] is the sub-list for extension extendee
0, // [0:5] is the sub-list for field type_name
}
func init() { file_headscale_v1_apikey_proto_init() }
func file_headscale_v1_apikey_proto_init() {
if File_headscale_v1_apikey_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_apikey_proto_rawDesc), len(file_headscale_v1_apikey_proto_rawDesc)),
NumEnums: 0,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_apikey_proto_goTypes,
DependencyIndexes: file_headscale_v1_apikey_proto_depIdxs,
MessageInfos: file_headscale_v1_apikey_proto_msgTypes,
}.Build()
File_headscale_v1_apikey_proto = out.File
file_headscale_v1_apikey_proto_goTypes = nil
file_headscale_v1_apikey_proto_depIdxs = nil
}
================================================
FILE: gen/go/headscale/v1/auth.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/auth.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type AuthRegisterRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
AuthId string `protobuf:"bytes,2,opt,name=auth_id,json=authId,proto3" json:"auth_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRegisterRequest) Reset() {
*x = AuthRegisterRequest{}
mi := &file_headscale_v1_auth_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRegisterRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRegisterRequest) ProtoMessage() {}
func (x *AuthRegisterRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRegisterRequest.ProtoReflect.Descriptor instead.
func (*AuthRegisterRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{0}
}
func (x *AuthRegisterRequest) GetUser() string {
if x != nil {
return x.User
}
return ""
}
func (x *AuthRegisterRequest) GetAuthId() string {
if x != nil {
return x.AuthId
}
return ""
}
type AuthRegisterResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRegisterResponse) Reset() {
*x = AuthRegisterResponse{}
mi := &file_headscale_v1_auth_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRegisterResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRegisterResponse) ProtoMessage() {}
func (x *AuthRegisterResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRegisterResponse.ProtoReflect.Descriptor instead.
func (*AuthRegisterResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{1}
}
func (x *AuthRegisterResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type AuthApproveRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
AuthId string `protobuf:"bytes,1,opt,name=auth_id,json=authId,proto3" json:"auth_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthApproveRequest) Reset() {
*x = AuthApproveRequest{}
mi := &file_headscale_v1_auth_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthApproveRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthApproveRequest) ProtoMessage() {}
func (x *AuthApproveRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthApproveRequest.ProtoReflect.Descriptor instead.
func (*AuthApproveRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{2}
}
func (x *AuthApproveRequest) GetAuthId() string {
if x != nil {
return x.AuthId
}
return ""
}
type AuthApproveResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthApproveResponse) Reset() {
*x = AuthApproveResponse{}
mi := &file_headscale_v1_auth_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthApproveResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthApproveResponse) ProtoMessage() {}
func (x *AuthApproveResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthApproveResponse.ProtoReflect.Descriptor instead.
func (*AuthApproveResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{3}
}
type AuthRejectRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
AuthId string `protobuf:"bytes,1,opt,name=auth_id,json=authId,proto3" json:"auth_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRejectRequest) Reset() {
*x = AuthRejectRequest{}
mi := &file_headscale_v1_auth_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRejectRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRejectRequest) ProtoMessage() {}
func (x *AuthRejectRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRejectRequest.ProtoReflect.Descriptor instead.
func (*AuthRejectRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{4}
}
func (x *AuthRejectRequest) GetAuthId() string {
if x != nil {
return x.AuthId
}
return ""
}
type AuthRejectResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRejectResponse) Reset() {
*x = AuthRejectResponse{}
mi := &file_headscale_v1_auth_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRejectResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRejectResponse) ProtoMessage() {}
func (x *AuthRejectResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRejectResponse.ProtoReflect.Descriptor instead.
func (*AuthRejectResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{5}
}
var File_headscale_v1_auth_proto protoreflect.FileDescriptor
const file_headscale_v1_auth_proto_rawDesc = "" +
"\n" +
"\x17headscale/v1/auth.proto\x12\fheadscale.v1\x1a\x17headscale/v1/node.proto\"B\n" +
"\x13AuthRegisterRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\tR\x04user\x12\x17\n" +
"\aauth_id\x18\x02 \x01(\tR\x06authId\">\n" +
"\x14AuthRegisterResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"-\n" +
"\x12AuthApproveRequest\x12\x17\n" +
"\aauth_id\x18\x01 \x01(\tR\x06authId\"\x15\n" +
"\x13AuthApproveResponse\",\n" +
"\x11AuthRejectRequest\x12\x17\n" +
"\aauth_id\x18\x01 \x01(\tR\x06authId\"\x14\n" +
"\x12AuthRejectResponseB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_auth_proto_rawDescOnce sync.Once
file_headscale_v1_auth_proto_rawDescData []byte
)
func file_headscale_v1_auth_proto_rawDescGZIP() []byte {
file_headscale_v1_auth_proto_rawDescOnce.Do(func() {
file_headscale_v1_auth_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_auth_proto_rawDesc), len(file_headscale_v1_auth_proto_rawDesc)))
})
return file_headscale_v1_auth_proto_rawDescData
}
var file_headscale_v1_auth_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
var file_headscale_v1_auth_proto_goTypes = []any{
(*AuthRegisterRequest)(nil), // 0: headscale.v1.AuthRegisterRequest
(*AuthRegisterResponse)(nil), // 1: headscale.v1.AuthRegisterResponse
(*AuthApproveRequest)(nil), // 2: headscale.v1.AuthApproveRequest
(*AuthApproveResponse)(nil), // 3: headscale.v1.AuthApproveResponse
(*AuthRejectRequest)(nil), // 4: headscale.v1.AuthRejectRequest
(*AuthRejectResponse)(nil), // 5: headscale.v1.AuthRejectResponse
(*Node)(nil), // 6: headscale.v1.Node
}
var file_headscale_v1_auth_proto_depIdxs = []int32{
6, // 0: headscale.v1.AuthRegisterResponse.node:type_name -> headscale.v1.Node
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_headscale_v1_auth_proto_init() }
func file_headscale_v1_auth_proto_init() {
if File_headscale_v1_auth_proto != nil {
return
}
file_headscale_v1_node_proto_init()
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_auth_proto_rawDesc), len(file_headscale_v1_auth_proto_rawDesc)),
NumEnums: 0,
NumMessages: 6,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_auth_proto_goTypes,
DependencyIndexes: file_headscale_v1_auth_proto_depIdxs,
MessageInfos: file_headscale_v1_auth_proto_msgTypes,
}.Build()
File_headscale_v1_auth_proto = out.File
file_headscale_v1_auth_proto_goTypes = nil
file_headscale_v1_auth_proto_depIdxs = nil
}
================================================
FILE: gen/go/headscale/v1/device.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/device.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Latency struct {
state protoimpl.MessageState `protogen:"open.v1"`
LatencyMs float32 `protobuf:"fixed32,1,opt,name=latency_ms,json=latencyMs,proto3" json:"latency_ms,omitempty"`
Preferred bool `protobuf:"varint,2,opt,name=preferred,proto3" json:"preferred,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Latency) Reset() {
*x = Latency{}
mi := &file_headscale_v1_device_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Latency) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Latency) ProtoMessage() {}
func (x *Latency) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Latency.ProtoReflect.Descriptor instead.
func (*Latency) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{0}
}
func (x *Latency) GetLatencyMs() float32 {
if x != nil {
return x.LatencyMs
}
return 0
}
func (x *Latency) GetPreferred() bool {
if x != nil {
return x.Preferred
}
return false
}
type ClientSupports struct {
state protoimpl.MessageState `protogen:"open.v1"`
HairPinning bool `protobuf:"varint,1,opt,name=hair_pinning,json=hairPinning,proto3" json:"hair_pinning,omitempty"`
Ipv6 bool `protobuf:"varint,2,opt,name=ipv6,proto3" json:"ipv6,omitempty"`
Pcp bool `protobuf:"varint,3,opt,name=pcp,proto3" json:"pcp,omitempty"`
Pmp bool `protobuf:"varint,4,opt,name=pmp,proto3" json:"pmp,omitempty"`
Udp bool `protobuf:"varint,5,opt,name=udp,proto3" json:"udp,omitempty"`
Upnp bool `protobuf:"varint,6,opt,name=upnp,proto3" json:"upnp,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ClientSupports) Reset() {
*x = ClientSupports{}
mi := &file_headscale_v1_device_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ClientSupports) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ClientSupports) ProtoMessage() {}
func (x *ClientSupports) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ClientSupports.ProtoReflect.Descriptor instead.
func (*ClientSupports) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{1}
}
func (x *ClientSupports) GetHairPinning() bool {
if x != nil {
return x.HairPinning
}
return false
}
func (x *ClientSupports) GetIpv6() bool {
if x != nil {
return x.Ipv6
}
return false
}
func (x *ClientSupports) GetPcp() bool {
if x != nil {
return x.Pcp
}
return false
}
func (x *ClientSupports) GetPmp() bool {
if x != nil {
return x.Pmp
}
return false
}
func (x *ClientSupports) GetUdp() bool {
if x != nil {
return x.Udp
}
return false
}
func (x *ClientSupports) GetUpnp() bool {
if x != nil {
return x.Upnp
}
return false
}
type ClientConnectivity struct {
state protoimpl.MessageState `protogen:"open.v1"`
Endpoints []string `protobuf:"bytes,1,rep,name=endpoints,proto3" json:"endpoints,omitempty"`
Derp string `protobuf:"bytes,2,opt,name=derp,proto3" json:"derp,omitempty"`
MappingVariesByDestIp bool `protobuf:"varint,3,opt,name=mapping_varies_by_dest_ip,json=mappingVariesByDestIp,proto3" json:"mapping_varies_by_dest_ip,omitempty"`
Latency map[string]*Latency `protobuf:"bytes,4,rep,name=latency,proto3" json:"latency,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
ClientSupports *ClientSupports `protobuf:"bytes,5,opt,name=client_supports,json=clientSupports,proto3" json:"client_supports,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ClientConnectivity) Reset() {
*x = ClientConnectivity{}
mi := &file_headscale_v1_device_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ClientConnectivity) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ClientConnectivity) ProtoMessage() {}
func (x *ClientConnectivity) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ClientConnectivity.ProtoReflect.Descriptor instead.
func (*ClientConnectivity) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{2}
}
func (x *ClientConnectivity) GetEndpoints() []string {
if x != nil {
return x.Endpoints
}
return nil
}
func (x *ClientConnectivity) GetDerp() string {
if x != nil {
return x.Derp
}
return ""
}
func (x *ClientConnectivity) GetMappingVariesByDestIp() bool {
if x != nil {
return x.MappingVariesByDestIp
}
return false
}
func (x *ClientConnectivity) GetLatency() map[string]*Latency {
if x != nil {
return x.Latency
}
return nil
}
func (x *ClientConnectivity) GetClientSupports() *ClientSupports {
if x != nil {
return x.ClientSupports
}
return nil
}
type GetDeviceRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetDeviceRequest) Reset() {
*x = GetDeviceRequest{}
mi := &file_headscale_v1_device_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetDeviceRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetDeviceRequest) ProtoMessage() {}
func (x *GetDeviceRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetDeviceRequest.ProtoReflect.Descriptor instead.
func (*GetDeviceRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{3}
}
func (x *GetDeviceRequest) GetId() string {
if x != nil {
return x.Id
}
return ""
}
type GetDeviceResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Addresses []string `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses,omitempty"`
Id string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
User string `protobuf:"bytes,3,opt,name=user,proto3" json:"user,omitempty"`
Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"`
Hostname string `protobuf:"bytes,5,opt,name=hostname,proto3" json:"hostname,omitempty"`
ClientVersion string `protobuf:"bytes,6,opt,name=client_version,json=clientVersion,proto3" json:"client_version,omitempty"`
UpdateAvailable bool `protobuf:"varint,7,opt,name=update_available,json=updateAvailable,proto3" json:"update_available,omitempty"`
Os string `protobuf:"bytes,8,opt,name=os,proto3" json:"os,omitempty"`
Created *timestamppb.Timestamp `protobuf:"bytes,9,opt,name=created,proto3" json:"created,omitempty"`
LastSeen *timestamppb.Timestamp `protobuf:"bytes,10,opt,name=last_seen,json=lastSeen,proto3" json:"last_seen,omitempty"`
KeyExpiryDisabled bool `protobuf:"varint,11,opt,name=key_expiry_disabled,json=keyExpiryDisabled,proto3" json:"key_expiry_disabled,omitempty"`
Expires *timestamppb.Timestamp `protobuf:"bytes,12,opt,name=expires,proto3" json:"expires,omitempty"`
Authorized bool `protobuf:"varint,13,opt,name=authorized,proto3" json:"authorized,omitempty"`
IsExternal bool `protobuf:"varint,14,opt,name=is_external,json=isExternal,proto3" json:"is_external,omitempty"`
MachineKey string `protobuf:"bytes,15,opt,name=machine_key,json=machineKey,proto3" json:"machine_key,omitempty"`
NodeKey string `protobuf:"bytes,16,opt,name=node_key,json=nodeKey,proto3" json:"node_key,omitempty"`
BlocksIncomingConnections bool `protobuf:"varint,17,opt,name=blocks_incoming_connections,json=blocksIncomingConnections,proto3" json:"blocks_incoming_connections,omitempty"`
EnabledRoutes []string `protobuf:"bytes,18,rep,name=enabled_routes,json=enabledRoutes,proto3" json:"enabled_routes,omitempty"`
AdvertisedRoutes []string `protobuf:"bytes,19,rep,name=advertised_routes,json=advertisedRoutes,proto3" json:"advertised_routes,omitempty"`
ClientConnectivity *ClientConnectivity `protobuf:"bytes,20,opt,name=client_connectivity,json=clientConnectivity,proto3" json:"client_connectivity,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetDeviceResponse) Reset() {
*x = GetDeviceResponse{}
mi := &file_headscale_v1_device_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetDeviceResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetDeviceResponse) ProtoMessage() {}
func (x *GetDeviceResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetDeviceResponse.ProtoReflect.Descriptor instead.
func (*GetDeviceResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{4}
}
func (x *GetDeviceResponse) GetAddresses() []string {
if x != nil {
return x.Addresses
}
return nil
}
func (x *GetDeviceResponse) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *GetDeviceResponse) GetUser() string {
if x != nil {
return x.User
}
return ""
}
func (x *GetDeviceResponse) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *GetDeviceResponse) GetHostname() string {
if x != nil {
return x.Hostname
}
return ""
}
func (x *GetDeviceResponse) GetClientVersion() string {
if x != nil {
return x.ClientVersion
}
return ""
}
func (x *GetDeviceResponse) GetUpdateAvailable() bool {
if x != nil {
return x.UpdateAvailable
}
return false
}
func (x *GetDeviceResponse) GetOs() string {
if x != nil {
return x.Os
}
return ""
}
func (x *GetDeviceResponse) GetCreated() *timestamppb.Timestamp {
if x != nil {
return x.Created
}
return nil
}
func (x *GetDeviceResponse) GetLastSeen() *timestamppb.Timestamp {
if x != nil {
return x.LastSeen
}
return nil
}
func (x *GetDeviceResponse) GetKeyExpiryDisabled() bool {
if x != nil {
return x.KeyExpiryDisabled
}
return false
}
func (x *GetDeviceResponse) GetExpires() *timestamppb.Timestamp {
if x != nil {
return x.Expires
}
return nil
}
func (x *GetDeviceResponse) GetAuthorized() bool {
if x != nil {
return x.Authorized
}
return false
}
func (x *GetDeviceResponse) GetIsExternal() bool {
if x != nil {
return x.IsExternal
}
return false
}
func (x *GetDeviceResponse) GetMachineKey() string {
if x != nil {
return x.MachineKey
}
return ""
}
func (x *GetDeviceResponse) GetNodeKey() string {
if x != nil {
return x.NodeKey
}
return ""
}
func (x *GetDeviceResponse) GetBlocksIncomingConnections() bool {
if x != nil {
return x.BlocksIncomingConnections
}
return false
}
func (x *GetDeviceResponse) GetEnabledRoutes() []string {
if x != nil {
return x.EnabledRoutes
}
return nil
}
func (x *GetDeviceResponse) GetAdvertisedRoutes() []string {
if x != nil {
return x.AdvertisedRoutes
}
return nil
}
func (x *GetDeviceResponse) GetClientConnectivity() *ClientConnectivity {
if x != nil {
return x.ClientConnectivity
}
return nil
}
type DeleteDeviceRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteDeviceRequest) Reset() {
*x = DeleteDeviceRequest{}
mi := &file_headscale_v1_device_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteDeviceRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteDeviceRequest) ProtoMessage() {}
func (x *DeleteDeviceRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteDeviceRequest.ProtoReflect.Descriptor instead.
func (*DeleteDeviceRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{5}
}
func (x *DeleteDeviceRequest) GetId() string {
if x != nil {
return x.Id
}
return ""
}
type DeleteDeviceResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteDeviceResponse) Reset() {
*x = DeleteDeviceResponse{}
mi := &file_headscale_v1_device_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteDeviceResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteDeviceResponse) ProtoMessage() {}
func (x *DeleteDeviceResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteDeviceResponse.ProtoReflect.Descriptor instead.
func (*DeleteDeviceResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{6}
}
type GetDeviceRoutesRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetDeviceRoutesRequest) Reset() {
*x = GetDeviceRoutesRequest{}
mi := &file_headscale_v1_device_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetDeviceRoutesRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetDeviceRoutesRequest) ProtoMessage() {}
func (x *GetDeviceRoutesRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetDeviceRoutesRequest.ProtoReflect.Descriptor instead.
func (*GetDeviceRoutesRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{7}
}
func (x *GetDeviceRoutesRequest) GetId() string {
if x != nil {
return x.Id
}
return ""
}
type GetDeviceRoutesResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
EnabledRoutes []string `protobuf:"bytes,1,rep,name=enabled_routes,json=enabledRoutes,proto3" json:"enabled_routes,omitempty"`
AdvertisedRoutes []string `protobuf:"bytes,2,rep,name=advertised_routes,json=advertisedRoutes,proto3" json:"advertised_routes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetDeviceRoutesResponse) Reset() {
*x = GetDeviceRoutesResponse{}
mi := &file_headscale_v1_device_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetDeviceRoutesResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetDeviceRoutesResponse) ProtoMessage() {}
func (x *GetDeviceRoutesResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetDeviceRoutesResponse.ProtoReflect.Descriptor instead.
func (*GetDeviceRoutesResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{8}
}
func (x *GetDeviceRoutesResponse) GetEnabledRoutes() []string {
if x != nil {
return x.EnabledRoutes
}
return nil
}
func (x *GetDeviceRoutesResponse) GetAdvertisedRoutes() []string {
if x != nil {
return x.AdvertisedRoutes
}
return nil
}
type EnableDeviceRoutesRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Routes []string `protobuf:"bytes,2,rep,name=routes,proto3" json:"routes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *EnableDeviceRoutesRequest) Reset() {
*x = EnableDeviceRoutesRequest{}
mi := &file_headscale_v1_device_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *EnableDeviceRoutesRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*EnableDeviceRoutesRequest) ProtoMessage() {}
func (x *EnableDeviceRoutesRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[9]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use EnableDeviceRoutesRequest.ProtoReflect.Descriptor instead.
func (*EnableDeviceRoutesRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{9}
}
func (x *EnableDeviceRoutesRequest) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *EnableDeviceRoutesRequest) GetRoutes() []string {
if x != nil {
return x.Routes
}
return nil
}
type EnableDeviceRoutesResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
EnabledRoutes []string `protobuf:"bytes,1,rep,name=enabled_routes,json=enabledRoutes,proto3" json:"enabled_routes,omitempty"`
AdvertisedRoutes []string `protobuf:"bytes,2,rep,name=advertised_routes,json=advertisedRoutes,proto3" json:"advertised_routes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *EnableDeviceRoutesResponse) Reset() {
*x = EnableDeviceRoutesResponse{}
mi := &file_headscale_v1_device_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *EnableDeviceRoutesResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*EnableDeviceRoutesResponse) ProtoMessage() {}
func (x *EnableDeviceRoutesResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_device_proto_msgTypes[10]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use EnableDeviceRoutesResponse.ProtoReflect.Descriptor instead.
func (*EnableDeviceRoutesResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_device_proto_rawDescGZIP(), []int{10}
}
func (x *EnableDeviceRoutesResponse) GetEnabledRoutes() []string {
if x != nil {
return x.EnabledRoutes
}
return nil
}
func (x *EnableDeviceRoutesResponse) GetAdvertisedRoutes() []string {
if x != nil {
return x.AdvertisedRoutes
}
return nil
}
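// What follows is the descriptor plumbing for this file:
// file_headscale_v1_device_proto_rawDesc is the wire-format
// FileDescriptorProto for headscale/v1/device.proto, and rawDescGZIP
// compresses it lazily behind a sync.Once for callers that still want the
// gzipped form, such as the deprecated Descriptor() methods above.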
var File_headscale_v1_device_proto protoreflect.FileDescriptor
const file_headscale_v1_device_proto_rawDesc = "" +
"\n" +
"\x19headscale/v1/device.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\"F\n" +
"\aLatency\x12\x1d\n" +
"\n" +
"latency_ms\x18\x01 \x01(\x02R\tlatencyMs\x12\x1c\n" +
"\tpreferred\x18\x02 \x01(\bR\tpreferred\"\x91\x01\n" +
"\x0eClientSupports\x12!\n" +
"\fhair_pinning\x18\x01 \x01(\bR\vhairPinning\x12\x12\n" +
"\x04ipv6\x18\x02 \x01(\bR\x04ipv6\x12\x10\n" +
"\x03pcp\x18\x03 \x01(\bR\x03pcp\x12\x10\n" +
"\x03pmp\x18\x04 \x01(\bR\x03pmp\x12\x10\n" +
"\x03udp\x18\x05 \x01(\bR\x03udp\x12\x12\n" +
"\x04upnp\x18\x06 \x01(\bR\x04upnp\"\xe3\x02\n" +
"\x12ClientConnectivity\x12\x1c\n" +
"\tendpoints\x18\x01 \x03(\tR\tendpoints\x12\x12\n" +
"\x04derp\x18\x02 \x01(\tR\x04derp\x128\n" +
"\x19mapping_varies_by_dest_ip\x18\x03 \x01(\bR\x15mappingVariesByDestIp\x12G\n" +
"\alatency\x18\x04 \x03(\v2-.headscale.v1.ClientConnectivity.LatencyEntryR\alatency\x12E\n" +
"\x0fclient_supports\x18\x05 \x01(\v2\x1c.headscale.v1.ClientSupportsR\x0eclientSupports\x1aQ\n" +
"\fLatencyEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12+\n" +
"\x05value\x18\x02 \x01(\v2\x15.headscale.v1.LatencyR\x05value:\x028\x01\"\"\n" +
"\x10GetDeviceRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\"\xa0\x06\n" +
"\x11GetDeviceResponse\x12\x1c\n" +
"\taddresses\x18\x01 \x03(\tR\taddresses\x12\x0e\n" +
"\x02id\x18\x02 \x01(\tR\x02id\x12\x12\n" +
"\x04user\x18\x03 \x01(\tR\x04user\x12\x12\n" +
"\x04name\x18\x04 \x01(\tR\x04name\x12\x1a\n" +
"\bhostname\x18\x05 \x01(\tR\bhostname\x12%\n" +
"\x0eclient_version\x18\x06 \x01(\tR\rclientVersion\x12)\n" +
"\x10update_available\x18\a \x01(\bR\x0fupdateAvailable\x12\x0e\n" +
"\x02os\x18\b \x01(\tR\x02os\x124\n" +
"\acreated\x18\t \x01(\v2\x1a.google.protobuf.TimestampR\acreated\x127\n" +
"\tlast_seen\x18\n" +
" \x01(\v2\x1a.google.protobuf.TimestampR\blastSeen\x12.\n" +
"\x13key_expiry_disabled\x18\v \x01(\bR\x11keyExpiryDisabled\x124\n" +
"\aexpires\x18\f \x01(\v2\x1a.google.protobuf.TimestampR\aexpires\x12\x1e\n" +
"\n" +
"authorized\x18\r \x01(\bR\n" +
"authorized\x12\x1f\n" +
"\vis_external\x18\x0e \x01(\bR\n" +
"isExternal\x12\x1f\n" +
"\vmachine_key\x18\x0f \x01(\tR\n" +
"machineKey\x12\x19\n" +
"\bnode_key\x18\x10 \x01(\tR\anodeKey\x12>\n" +
"\x1bblocks_incoming_connections\x18\x11 \x01(\bR\x19blocksIncomingConnections\x12%\n" +
"\x0eenabled_routes\x18\x12 \x03(\tR\renabledRoutes\x12+\n" +
"\x11advertised_routes\x18\x13 \x03(\tR\x10advertisedRoutes\x12Q\n" +
"\x13client_connectivity\x18\x14 \x01(\v2 .headscale.v1.ClientConnectivityR\x12clientConnectivity\"%\n" +
"\x13DeleteDeviceRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\"\x16\n" +
"\x14DeleteDeviceResponse\"(\n" +
"\x16GetDeviceRoutesRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\"m\n" +
"\x17GetDeviceRoutesResponse\x12%\n" +
"\x0eenabled_routes\x18\x01 \x03(\tR\renabledRoutes\x12+\n" +
"\x11advertised_routes\x18\x02 \x03(\tR\x10advertisedRoutes\"C\n" +
"\x19EnableDeviceRoutesRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\x12\x16\n" +
"\x06routes\x18\x02 \x03(\tR\x06routes\"p\n" +
"\x1aEnableDeviceRoutesResponse\x12%\n" +
"\x0eenabled_routes\x18\x01 \x03(\tR\renabledRoutes\x12+\n" +
"\x11advertised_routes\x18\x02 \x03(\tR\x10advertisedRoutesB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_device_proto_rawDescOnce sync.Once
file_headscale_v1_device_proto_rawDescData []byte
)
func file_headscale_v1_device_proto_rawDescGZIP() []byte {
file_headscale_v1_device_proto_rawDescOnce.Do(func() {
file_headscale_v1_device_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_device_proto_rawDesc), len(file_headscale_v1_device_proto_rawDesc)))
})
return file_headscale_v1_device_proto_rawDescData
}
var file_headscale_v1_device_proto_msgTypes = make([]protoimpl.MessageInfo, 12)
var file_headscale_v1_device_proto_goTypes = []any{
(*Latency)(nil), // 0: headscale.v1.Latency
(*ClientSupports)(nil), // 1: headscale.v1.ClientSupports
(*ClientConnectivity)(nil), // 2: headscale.v1.ClientConnectivity
(*GetDeviceRequest)(nil), // 3: headscale.v1.GetDeviceRequest
(*GetDeviceResponse)(nil), // 4: headscale.v1.GetDeviceResponse
(*DeleteDeviceRequest)(nil), // 5: headscale.v1.DeleteDeviceRequest
(*DeleteDeviceResponse)(nil), // 6: headscale.v1.DeleteDeviceResponse
(*GetDeviceRoutesRequest)(nil), // 7: headscale.v1.GetDeviceRoutesRequest
(*GetDeviceRoutesResponse)(nil), // 8: headscale.v1.GetDeviceRoutesResponse
(*EnableDeviceRoutesRequest)(nil), // 9: headscale.v1.EnableDeviceRoutesRequest
(*EnableDeviceRoutesResponse)(nil), // 10: headscale.v1.EnableDeviceRoutesResponse
nil, // 11: headscale.v1.ClientConnectivity.LatencyEntry
(*timestamppb.Timestamp)(nil), // 12: google.protobuf.Timestamp
}
var file_headscale_v1_device_proto_depIdxs = []int32{
11, // 0: headscale.v1.ClientConnectivity.latency:type_name -> headscale.v1.ClientConnectivity.LatencyEntry
1, // 1: headscale.v1.ClientConnectivity.client_supports:type_name -> headscale.v1.ClientSupports
12, // 2: headscale.v1.GetDeviceResponse.created:type_name -> google.protobuf.Timestamp
12, // 3: headscale.v1.GetDeviceResponse.last_seen:type_name -> google.protobuf.Timestamp
12, // 4: headscale.v1.GetDeviceResponse.expires:type_name -> google.protobuf.Timestamp
2, // 5: headscale.v1.GetDeviceResponse.client_connectivity:type_name -> headscale.v1.ClientConnectivity
0, // 6: headscale.v1.ClientConnectivity.LatencyEntry.value:type_name -> headscale.v1.Latency
7, // [7:7] is the sub-list for method output_type
7, // [7:7] is the sub-list for method input_type
7, // [7:7] is the sub-list for extension type_name
7, // [7:7] is the sub-list for extension extendee
0, // [0:7] is the sub-list for field type_name
}
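// goTypes and depIdxs wire the descriptor to concrete Go types: every entry
// in depIdxs is an index into goTypes, and the bracketed sub-list comments
// mark which ranges cover field, extension, and method references. init()
// below hands both slices to protoimpl.TypeBuilder, which registers the file
// and fills in the MessageInfo slice backing the ProtoReflect methods.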
func init() { file_headscale_v1_device_proto_init() }
func file_headscale_v1_device_proto_init() {
if File_headscale_v1_device_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_device_proto_rawDesc), len(file_headscale_v1_device_proto_rawDesc)),
NumEnums: 0,
NumMessages: 12,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_device_proto_goTypes,
DependencyIndexes: file_headscale_v1_device_proto_depIdxs,
MessageInfos: file_headscale_v1_device_proto_msgTypes,
}.Build()
File_headscale_v1_device_proto = out.File
file_headscale_v1_device_proto_goTypes = nil
file_headscale_v1_device_proto_depIdxs = nil
}
================================================
FILE: gen/go/headscale/v1/headscale.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/headscale.proto
package v1
import (
_ "google.golang.org/genproto/googleapis/api/annotations"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
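// The assertions above are compile-time guards: the build fails if the
// protobuf runtime is older than this generated code assumes, or too new to
// still support it.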
type HealthRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *HealthRequest) Reset() {
*x = HealthRequest{}
mi := &file_headscale_v1_headscale_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *HealthRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*HealthRequest) ProtoMessage() {}
func (x *HealthRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_headscale_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use HealthRequest.ProtoReflect.Descriptor instead.
func (*HealthRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_headscale_proto_rawDescGZIP(), []int{0}
}
type HealthResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
DatabaseConnectivity bool `protobuf:"varint,1,opt,name=database_connectivity,json=databaseConnectivity,proto3" json:"database_connectivity,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *HealthResponse) Reset() {
*x = HealthResponse{}
mi := &file_headscale_v1_headscale_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *HealthResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*HealthResponse) ProtoMessage() {}
func (x *HealthResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_headscale_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use HealthResponse.ProtoReflect.Descriptor instead.
func (*HealthResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_headscale_proto_rawDescGZIP(), []int{1}
}
func (x *HealthResponse) GetDatabaseConnectivity() bool {
if x != nil {
return x.DatabaseConnectivity
}
return false
}
var File_headscale_v1_headscale_proto protoreflect.FileDescriptor
const file_headscale_v1_headscale_proto_rawDesc = "" +
"\n" +
"\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x17headscale/v1/auth.proto\x1a\x19headscale/v1/policy.proto\"\x0f\n" +
"\rHealthRequest\"E\n" +
"\x0eHealthResponse\x123\n" +
"\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\xeb\x19\n" +
"\x10HeadscaleService\x12h\n" +
"\n" +
"CreateUser\x12\x1f.headscale.v1.CreateUserRequest\x1a .headscale.v1.CreateUserResponse\"\x17\x82\xd3\xe4\x93\x02\x11:\x01*\"\f/api/v1/user\x12\x80\x01\n" +
"\n" +
"RenameUser\x12\x1f.headscale.v1.RenameUserRequest\x1a .headscale.v1.RenameUserResponse\"/\x82\xd3\xe4\x93\x02)\"'/api/v1/user/{old_id}/rename/{new_name}\x12j\n" +
"\n" +
"DeleteUser\x12\x1f.headscale.v1.DeleteUserRequest\x1a .headscale.v1.DeleteUserResponse\"\x19\x82\xd3\xe4\x93\x02\x13*\x11/api/v1/user/{id}\x12b\n" +
"\tListUsers\x12\x1e.headscale.v1.ListUsersRequest\x1a\x1f.headscale.v1.ListUsersResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/user\x12\x80\x01\n" +
"\x10CreatePreAuthKey\x12%.headscale.v1.CreatePreAuthKeyRequest\x1a&.headscale.v1.CreatePreAuthKeyResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/preauthkey\x12\x87\x01\n" +
"\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12}\n" +
"\x10DeletePreAuthKey\x12%.headscale.v1.DeletePreAuthKeyRequest\x1a&.headscale.v1.DeletePreAuthKeyResponse\"\x1a\x82\xd3\xe4\x93\x02\x14*\x12/api/v1/preauthkey\x12z\n" +
"\x0fListPreAuthKeys\x12$.headscale.v1.ListPreAuthKeysRequest\x1a%.headscale.v1.ListPreAuthKeysResponse\"\x1a\x82\xd3\xe4\x93\x02\x14\x12\x12/api/v1/preauthkey\x12}\n" +
"\x0fDebugCreateNode\x12$.headscale.v1.DebugCreateNodeRequest\x1a%.headscale.v1.DebugCreateNodeResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/debug/node\x12f\n" +
"\aGetNode\x12\x1c.headscale.v1.GetNodeRequest\x1a\x1d.headscale.v1.GetNodeResponse\"\x1e\x82\xd3\xe4\x93\x02\x18\x12\x16/api/v1/node/{node_id}\x12n\n" +
"\aSetTags\x12\x1c.headscale.v1.SetTagsRequest\x1a\x1d.headscale.v1.SetTagsResponse\"&\x82\xd3\xe4\x93\x02 :\x01*\"\x1b/api/v1/node/{node_id}/tags\x12\x96\x01\n" +
"\x11SetApprovedRoutes\x12&.headscale.v1.SetApprovedRoutesRequest\x1a'.headscale.v1.SetApprovedRoutesResponse\"0\x82\xd3\xe4\x93\x02*:\x01*\"%/api/v1/node/{node_id}/approve_routes\x12t\n" +
"\fRegisterNode\x12!.headscale.v1.RegisterNodeRequest\x1a\".headscale.v1.RegisterNodeResponse\"\x1d\x82\xd3\xe4\x93\x02\x17\"\x15/api/v1/node/register\x12o\n" +
"\n" +
"DeleteNode\x12\x1f.headscale.v1.DeleteNodeRequest\x1a .headscale.v1.DeleteNodeResponse\"\x1e\x82\xd3\xe4\x93\x02\x18*\x16/api/v1/node/{node_id}\x12v\n" +
"\n" +
"ExpireNode\x12\x1f.headscale.v1.ExpireNodeRequest\x1a .headscale.v1.ExpireNodeResponse\"%\x82\xd3\xe4\x93\x02\x1f\"\x1d/api/v1/node/{node_id}/expire\x12\x81\x01\n" +
"\n" +
"RenameNode\x12\x1f.headscale.v1.RenameNodeRequest\x1a .headscale.v1.RenameNodeResponse\"0\x82\xd3\xe4\x93\x02*\"(/api/v1/node/{node_id}/rename/{new_name}\x12b\n" +
"\tListNodes\x12\x1e.headscale.v1.ListNodesRequest\x1a\x1f.headscale.v1.ListNodesResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/node\x12\x80\x01\n" +
"\x0fBackfillNodeIPs\x12$.headscale.v1.BackfillNodeIPsRequest\x1a%.headscale.v1.BackfillNodeIPsResponse\" \x82\xd3\xe4\x93\x02\x1a\"\x18/api/v1/node/backfillips\x12w\n" +
"\fAuthRegister\x12!.headscale.v1.AuthRegisterRequest\x1a\".headscale.v1.AuthRegisterResponse\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/api/v1/auth/register\x12s\n" +
"\vAuthApprove\x12 .headscale.v1.AuthApproveRequest\x1a!.headscale.v1.AuthApproveResponse\"\x1f\x82\xd3\xe4\x93\x02\x19:\x01*\"\x14/api/v1/auth/approve\x12o\n" +
"\n" +
"AuthReject\x12\x1f.headscale.v1.AuthRejectRequest\x1a .headscale.v1.AuthRejectResponse\"\x1e\x82\xd3\xe4\x93\x02\x18:\x01*\"\x13/api/v1/auth/reject\x12p\n" +
"\fCreateApiKey\x12!.headscale.v1.CreateApiKeyRequest\x1a\".headscale.v1.CreateApiKeyResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\"\x0e/api/v1/apikey\x12w\n" +
"\fExpireApiKey\x12!.headscale.v1.ExpireApiKeyRequest\x1a\".headscale.v1.ExpireApiKeyResponse\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/api/v1/apikey/expire\x12j\n" +
"\vListApiKeys\x12 .headscale.v1.ListApiKeysRequest\x1a!.headscale.v1.ListApiKeysResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/apikey\x12v\n" +
"\fDeleteApiKey\x12!.headscale.v1.DeleteApiKeyRequest\x1a\".headscale.v1.DeleteApiKeyResponse\"\x1f\x82\xd3\xe4\x93\x02\x19*\x17/api/v1/apikey/{prefix}\x12d\n" +
"\tGetPolicy\x12\x1e.headscale.v1.GetPolicyRequest\x1a\x1f.headscale.v1.GetPolicyResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/policy\x12g\n" +
"\tSetPolicy\x12\x1e.headscale.v1.SetPolicyRequest\x1a\x1f.headscale.v1.SetPolicyResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\x1a\x0e/api/v1/policy\x12[\n" +
"\x06Health\x12\x1b.headscale.v1.HealthRequest\x1a\x1c.headscale.v1.HealthResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/healthB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_headscale_proto_rawDescOnce sync.Once
file_headscale_v1_headscale_proto_rawDescData []byte
)
func file_headscale_v1_headscale_proto_rawDescGZIP() []byte {
file_headscale_v1_headscale_proto_rawDescOnce.Do(func() {
file_headscale_v1_headscale_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_headscale_proto_rawDesc), len(file_headscale_v1_headscale_proto_rawDesc)))
})
return file_headscale_v1_headscale_proto_rawDescData
}
var file_headscale_v1_headscale_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_headscale_v1_headscale_proto_goTypes = []any{
(*HealthRequest)(nil), // 0: headscale.v1.HealthRequest
(*HealthResponse)(nil), // 1: headscale.v1.HealthResponse
(*CreateUserRequest)(nil), // 2: headscale.v1.CreateUserRequest
(*RenameUserRequest)(nil), // 3: headscale.v1.RenameUserRequest
(*DeleteUserRequest)(nil), // 4: headscale.v1.DeleteUserRequest
(*ListUsersRequest)(nil), // 5: headscale.v1.ListUsersRequest
(*CreatePreAuthKeyRequest)(nil), // 6: headscale.v1.CreatePreAuthKeyRequest
(*ExpirePreAuthKeyRequest)(nil), // 7: headscale.v1.ExpirePreAuthKeyRequest
(*DeletePreAuthKeyRequest)(nil), // 8: headscale.v1.DeletePreAuthKeyRequest
(*ListPreAuthKeysRequest)(nil), // 9: headscale.v1.ListPreAuthKeysRequest
(*DebugCreateNodeRequest)(nil), // 10: headscale.v1.DebugCreateNodeRequest
(*GetNodeRequest)(nil), // 11: headscale.v1.GetNodeRequest
(*SetTagsRequest)(nil), // 12: headscale.v1.SetTagsRequest
(*SetApprovedRoutesRequest)(nil), // 13: headscale.v1.SetApprovedRoutesRequest
(*RegisterNodeRequest)(nil), // 14: headscale.v1.RegisterNodeRequest
(*DeleteNodeRequest)(nil), // 15: headscale.v1.DeleteNodeRequest
(*ExpireNodeRequest)(nil), // 16: headscale.v1.ExpireNodeRequest
(*RenameNodeRequest)(nil), // 17: headscale.v1.RenameNodeRequest
(*ListNodesRequest)(nil), // 18: headscale.v1.ListNodesRequest
(*BackfillNodeIPsRequest)(nil), // 19: headscale.v1.BackfillNodeIPsRequest
(*AuthRegisterRequest)(nil), // 20: headscale.v1.AuthRegisterRequest
(*AuthApproveRequest)(nil), // 21: headscale.v1.AuthApproveRequest
(*AuthRejectRequest)(nil), // 22: headscale.v1.AuthRejectRequest
(*CreateApiKeyRequest)(nil), // 23: headscale.v1.CreateApiKeyRequest
(*ExpireApiKeyRequest)(nil), // 24: headscale.v1.ExpireApiKeyRequest
(*ListApiKeysRequest)(nil), // 25: headscale.v1.ListApiKeysRequest
(*DeleteApiKeyRequest)(nil), // 26: headscale.v1.DeleteApiKeyRequest
(*GetPolicyRequest)(nil), // 27: headscale.v1.GetPolicyRequest
(*SetPolicyRequest)(nil), // 28: headscale.v1.SetPolicyRequest
(*CreateUserResponse)(nil), // 29: headscale.v1.CreateUserResponse
(*RenameUserResponse)(nil), // 30: headscale.v1.RenameUserResponse
(*DeleteUserResponse)(nil), // 31: headscale.v1.DeleteUserResponse
(*ListUsersResponse)(nil), // 32: headscale.v1.ListUsersResponse
(*CreatePreAuthKeyResponse)(nil), // 33: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyResponse)(nil), // 34: headscale.v1.ExpirePreAuthKeyResponse
(*DeletePreAuthKeyResponse)(nil), // 35: headscale.v1.DeletePreAuthKeyResponse
(*ListPreAuthKeysResponse)(nil), // 36: headscale.v1.ListPreAuthKeysResponse
(*DebugCreateNodeResponse)(nil), // 37: headscale.v1.DebugCreateNodeResponse
(*GetNodeResponse)(nil), // 38: headscale.v1.GetNodeResponse
(*SetTagsResponse)(nil), // 39: headscale.v1.SetTagsResponse
(*SetApprovedRoutesResponse)(nil), // 40: headscale.v1.SetApprovedRoutesResponse
(*RegisterNodeResponse)(nil), // 41: headscale.v1.RegisterNodeResponse
(*DeleteNodeResponse)(nil), // 42: headscale.v1.DeleteNodeResponse
(*ExpireNodeResponse)(nil), // 43: headscale.v1.ExpireNodeResponse
(*RenameNodeResponse)(nil), // 44: headscale.v1.RenameNodeResponse
(*ListNodesResponse)(nil), // 45: headscale.v1.ListNodesResponse
(*BackfillNodeIPsResponse)(nil), // 46: headscale.v1.BackfillNodeIPsResponse
(*AuthRegisterResponse)(nil), // 47: headscale.v1.AuthRegisterResponse
(*AuthApproveResponse)(nil), // 48: headscale.v1.AuthApproveResponse
(*AuthRejectResponse)(nil), // 49: headscale.v1.AuthRejectResponse
(*CreateApiKeyResponse)(nil), // 50: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyResponse)(nil), // 51: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysResponse)(nil), // 52: headscale.v1.ListApiKeysResponse
(*DeleteApiKeyResponse)(nil), // 53: headscale.v1.DeleteApiKeyResponse
(*GetPolicyResponse)(nil), // 54: headscale.v1.GetPolicyResponse
(*SetPolicyResponse)(nil), // 55: headscale.v1.SetPolicyResponse
}
var file_headscale_v1_headscale_proto_depIdxs = []int32{
2, // 0: headscale.v1.HeadscaleService.CreateUser:input_type -> headscale.v1.CreateUserRequest
3, // 1: headscale.v1.HeadscaleService.RenameUser:input_type -> headscale.v1.RenameUserRequest
4, // 2: headscale.v1.HeadscaleService.DeleteUser:input_type -> headscale.v1.DeleteUserRequest
5, // 3: headscale.v1.HeadscaleService.ListUsers:input_type -> headscale.v1.ListUsersRequest
6, // 4: headscale.v1.HeadscaleService.CreatePreAuthKey:input_type -> headscale.v1.CreatePreAuthKeyRequest
7, // 5: headscale.v1.HeadscaleService.ExpirePreAuthKey:input_type -> headscale.v1.ExpirePreAuthKeyRequest
8, // 6: headscale.v1.HeadscaleService.DeletePreAuthKey:input_type -> headscale.v1.DeletePreAuthKeyRequest
9, // 7: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest
10, // 8: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest
11, // 9: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest
12, // 10: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest
13, // 11: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest
14, // 12: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest
15, // 13: headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest
16, // 14: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest
17, // 15: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest
18, // 16: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest
19, // 17: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest
20, // 18: headscale.v1.HeadscaleService.AuthRegister:input_type -> headscale.v1.AuthRegisterRequest
21, // 19: headscale.v1.HeadscaleService.AuthApprove:input_type -> headscale.v1.AuthApproveRequest
22, // 20: headscale.v1.HeadscaleService.AuthReject:input_type -> headscale.v1.AuthRejectRequest
23, // 21: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
24, // 22: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
25, // 23: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
26, // 24: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
27, // 25: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
28, // 26: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
0, // 27: headscale.v1.HeadscaleService.Health:input_type -> headscale.v1.HealthRequest
29, // 28: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
30, // 29: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
31, // 30: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
32, // 31: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
33, // 32: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
34, // 33: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
35, // 34: headscale.v1.HeadscaleService.DeletePreAuthKey:output_type -> headscale.v1.DeletePreAuthKeyResponse
36, // 35: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
37, // 36: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
38, // 37: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
39, // 38: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
40, // 39: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
41, // 40: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
42, // 41: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
43, // 42: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
44, // 43: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
45, // 44: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
46, // 45: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
47, // 46: headscale.v1.HeadscaleService.AuthRegister:output_type -> headscale.v1.AuthRegisterResponse
48, // 47: headscale.v1.HeadscaleService.AuthApprove:output_type -> headscale.v1.AuthApproveResponse
49, // 48: headscale.v1.HeadscaleService.AuthReject:output_type -> headscale.v1.AuthRejectResponse
50, // 49: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
51, // 50: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
52, // 51: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
53, // 52: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
54, // 53: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
55, // 54: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
1, // 55: headscale.v1.HeadscaleService.Health:output_type -> headscale.v1.HealthResponse
28, // [28:56] is the sub-list for method output_type
0, // [0:28] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_headscale_v1_headscale_proto_init() }
func file_headscale_v1_headscale_proto_init() {
if File_headscale_v1_headscale_proto != nil {
return
}
file_headscale_v1_user_proto_init()
file_headscale_v1_preauthkey_proto_init()
file_headscale_v1_node_proto_init()
file_headscale_v1_apikey_proto_init()
file_headscale_v1_auth_proto_init()
file_headscale_v1_policy_proto_init()
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_headscale_proto_rawDesc), len(file_headscale_v1_headscale_proto_rawDesc)),
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_headscale_v1_headscale_proto_goTypes,
DependencyIndexes: file_headscale_v1_headscale_proto_depIdxs,
MessageInfos: file_headscale_v1_headscale_proto_msgTypes,
}.Build()
File_headscale_v1_headscale_proto = out.File
file_headscale_v1_headscale_proto_goTypes = nil
file_headscale_v1_headscale_proto_depIdxs = nil
}
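// Rough usage sketch for the service above, assuming the companion
// headscale_grpc.pb.go with its conventional NewHeadscaleServiceClient
// constructor (the dial address is illustrative; imports would include
// google.golang.org/grpc and google.golang.org/grpc/credentials/insecure):
//
//	conn, err := grpc.Dial("127.0.0.1:50443",
//		grpc.WithTransportCredentials(insecure.NewCredentials()))
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer conn.Close()
//	client := NewHeadscaleServiceClient(conn)
//	resp, err := client.Health(context.Background(), &HealthRequest{})
//	if err == nil {
//		fmt.Println("database ok:", resp.GetDatabaseConnectivity())
//	}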
================================================
FILE: gen/go/headscale/v1/headscale.pb.gw.go
================================================
// Code generated by protoc-gen-grpc-gateway. DO NOT EDIT.
// source: headscale/v1/headscale.proto
/*
Package v1 is a reverse proxy.
It translates gRPC into RESTful JSON APIs.
*/
package v1
import (
"context"
"errors"
"io"
"net/http"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/proto"
)
// Suppress "imported and not used" errors
var (
_ codes.Code
_ io.Reader
_ status.Status
_ = errors.New
_ = runtime.String
_ = utilities.NewDoubleArray
_ = metadata.Join
)
func request_HeadscaleService_CreateUser_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreateUserRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.CreateUser(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_CreateUser_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreateUserRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.CreateUser(ctx, &protoReq)
return msg, metadata, err
}
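// Every RPC below follows this pairing: request_* forwards over a real gRPC
// connection and captures header/trailer metadata for the HTTP response,
// while local_request_* calls the HeadscaleServiceServer implementation in
// process. Which variant runs depends on registration; in the standard
// grpc-gateway layout, RegisterHeadscaleServiceHandlerServer wires up the
// local variants and RegisterHeadscaleServiceHandlerClient the remote ones.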
func request_HeadscaleService_RenameUser_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq RenameUserRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["old_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "old_id")
}
protoReq.OldId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "old_id", err)
}
val, ok = pathParams["new_name"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "new_name")
}
protoReq.NewName, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "new_name", err)
}
msg, err := client.RenameUser(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_RenameUser_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq RenameUserRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["old_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "old_id")
}
protoReq.OldId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "old_id", err)
}
val, ok = pathParams["new_name"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "new_name")
}
protoReq.NewName, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "new_name", err)
}
msg, err := server.RenameUser(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_DeleteUser_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteUserRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "id")
}
protoReq.Id, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "id", err)
}
msg, err := client.DeleteUser(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DeleteUser_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteUserRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "id")
}
protoReq.Id, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "id", err)
}
msg, err := server.DeleteUser(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_ListUsers_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_ListUsers_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListUsersRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ListUsers_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.ListUsers(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ListUsers_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListUsersRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ListUsers_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ListUsers(ctx, &protoReq)
return msg, metadata, err
}
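// For query-string methods such as ListUsers, the handler drains the request
// body, parses the form, and lets runtime.PopulateQueryParameters map query
// parameters onto the request message. The filter_* DoubleArray lists fields
// already bound from the URL path so they are not overwritten here; for
// ListUsers the encoding is empty because nothing is bound from the path.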
func request_HeadscaleService_CreatePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreatePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.CreatePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_CreatePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreatePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.CreatePreAuthKey(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_ExpirePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpirePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.ExpirePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ExpirePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpirePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ExpirePreAuthKey(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_DeletePreAuthKey_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.DeletePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.DeletePreAuthKey(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_ListPreAuthKeys_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListPreAuthKeysRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.ListPreAuthKeys(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ListPreAuthKeys_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListPreAuthKeysRequest
metadata runtime.ServerMetadata
)
msg, err := server.ListPreAuthKeys(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_DebugCreateNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DebugCreateNodeRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.DebugCreateNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DebugCreateNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DebugCreateNodeRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.DebugCreateNode(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_GetNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq GetNodeRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := client.GetNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_GetNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq GetNodeRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := server.GetNode(ctx, &protoReq)
return msg, metadata, err
}
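// Path parameters arrive pre-split in pathParams; runtime.Uint64 and
// runtime.String convert them to the proto field types, and a missing or
// malformed value surfaces as codes.InvalidArgument before any RPC is made.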
func request_HeadscaleService_SetTags_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq SetTagsRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := client.SetTags(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_SetTags_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq SetTagsRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := server.SetTags(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_SetApprovedRoutes_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq SetApprovedRoutesRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := client.SetApprovedRoutes(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_SetApprovedRoutes_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq SetApprovedRoutesRequest
metadata runtime.ServerMetadata
err error
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := server.SetApprovedRoutes(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_RegisterNode_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_RegisterNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq RegisterNodeRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_RegisterNode_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.RegisterNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_RegisterNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq RegisterNodeRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_RegisterNode_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.RegisterNode(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_DeleteNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteNodeRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := client.DeleteNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DeleteNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteNodeRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
msg, err := server.DeleteNode(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_ExpireNode_0 = &utilities.DoubleArray{Encoding: map[string]int{"node_id": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
func request_HeadscaleService_ExpireNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpireNodeRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ExpireNode_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.ExpireNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ExpireNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpireNodeRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ExpireNode_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ExpireNode(ctx, &protoReq)
return msg, metadata, err
}
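// Unlike the empty filters above, ExpireNode's filter encodes node_id: that
// field is bound from the URL path first, and the filter keeps
// PopulateQueryParameters from overwriting it with a same-named query
// parameter.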
func request_HeadscaleService_RenameNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq RenameNodeRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
val, ok = pathParams["new_name"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "new_name")
}
protoReq.NewName, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "new_name", err)
}
msg, err := client.RenameNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_RenameNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq RenameNodeRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["node_id"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id")
}
protoReq.NodeId, err = runtime.Uint64(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
val, ok = pathParams["new_name"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "new_name")
}
protoReq.NewName, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "new_name", err)
}
msg, err := server.RenameNode(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_ListNodes_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_ListNodes_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListNodesRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ListNodes_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.ListNodes(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ListNodes_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListNodesRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ListNodes_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ListNodes(ctx, &protoReq)
return msg, metadata, err
}
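// The DoubleArray filters list request fields already bound from path
// parameters (for example "node_id" in filter_HeadscaleService_ExpireNode_0)
// so that PopulateQueryParameters skips them; an empty filter, as below,
// leaves every request field free to be set from the query string.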
var filter_HeadscaleService_BackfillNodeIPs_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_BackfillNodeIPs_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq BackfillNodeIPsRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_BackfillNodeIPs_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.BackfillNodeIPs(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_BackfillNodeIPs_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq BackfillNodeIPsRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_BackfillNodeIPs_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.BackfillNodeIPs(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_AuthRegister_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRegisterRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.AuthRegister(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_AuthRegister_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRegisterRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AuthRegister(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_AuthApprove_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthApproveRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.AuthApprove(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_AuthApprove_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthApproveRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AuthApprove(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_AuthReject_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRejectRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.AuthReject(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_AuthReject_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRejectRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AuthReject(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_CreateApiKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreateApiKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.CreateApiKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_CreateApiKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreateApiKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.CreateApiKey(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_ExpireApiKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpireApiKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.ExpireApiKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ExpireApiKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpireApiKeyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ExpireApiKey(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_ListApiKeys_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListApiKeysRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.ListApiKeys(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_ListApiKeys_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ListApiKeysRequest
metadata runtime.ServerMetadata
)
msg, err := server.ListApiKeys(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_DeleteApiKey_0 = &utilities.DoubleArray{Encoding: map[string]int{"prefix": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
func request_HeadscaleService_DeleteApiKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteApiKeyRequest
metadata runtime.ServerMetadata
err error
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
val, ok := pathParams["prefix"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "prefix")
}
protoReq.Prefix, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "prefix", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeleteApiKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.DeleteApiKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DeleteApiKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeleteApiKeyRequest
metadata runtime.ServerMetadata
err error
)
val, ok := pathParams["prefix"]
if !ok {
return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "prefix")
}
protoReq.Prefix, err = runtime.String(val)
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "prefix", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeleteApiKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.DeleteApiKey(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_GetPolicy_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq GetPolicyRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.GetPolicy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_GetPolicy_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq GetPolicyRequest
metadata runtime.ServerMetadata
)
msg, err := server.GetPolicy(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_SetPolicy_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq SetPolicyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.SetPolicy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_SetPolicy_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq SetPolicyRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.SetPolicy(ctx, &protoReq)
return msg, metadata, err
}
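// The body-decoding handlers above unmarshal the JSON request body straight
// into the request message. A minimal sketch for SetPolicy (PUT
// /api/v1/policy), assuming the gateway listens on :8080 and that
// SetPolicyRequest exposes the policy document as a "policy" string field
// (requires net/http, strings, os, log):
//
//	body := strings.NewReader(`{"policy": "{\"acls\": []}"}`)
//	req, err := http.NewRequest(http.MethodPut, "http://localhost:8080/api/v1/policy", body)
//	if err != nil {
//		log.Fatal(err)
//	}
//	req.Header.Set("Content-Type", "application/json")
//	req.Header.Set("Authorization", "Bearer "+os.Getenv("HEADSCALE_API_KEY"))
//	resp, err := http.DefaultClient.Do(req)
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer resp.Body.Close()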
func request_HeadscaleService_Health_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq HealthRequest
metadata runtime.ServerMetadata
)
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.Health(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_Health_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq HealthRequest
metadata runtime.ServerMetadata
)
msg, err := server.Health(ctx, &protoReq)
return msg, metadata, err
}
// RegisterHeadscaleServiceHandlerServer registers the HTTP handlers for service HeadscaleService to "mux".
// Unary RPCs call HeadscaleServiceServer directly.
// Streaming RPCs are currently unsupported, pending https://github.com/grpc/grpc-go/issues/906.
// Note that using this registration option causes many gRPC library features to stop working. Consider using RegisterHeadscaleServiceHandlerFromEndpoint instead.
// gRPC interceptors will not work with this type of registration. To use interceptors, pass the "runtime.WithMiddlewares" option in the "runtime.NewServeMux" call.
func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.ServeMux, server HeadscaleServiceServer) error {
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreateUser_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/CreateUser", runtime.WithHTTPPathPattern("/api/v1/user"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_CreateUser_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_CreateUser_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_RenameUser_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/RenameUser", runtime.WithHTTPPathPattern("/api/v1/user/{old_id}/rename/{new_name}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_RenameUser_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_RenameUser_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeleteUser_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeleteUser", runtime.WithHTTPPathPattern("/api/v1/user/{id}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DeleteUser_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeleteUser_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListUsers_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListUsers", runtime.WithHTTPPathPattern("/api/v1/user"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ListUsers_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListUsers_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreatePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/CreatePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_CreatePreAuthKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_CreatePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_ExpirePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ExpirePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey/expire"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListPreAuthKeys", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ListPreAuthKeys_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListPreAuthKeys_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_DebugCreateNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DebugCreateNode", runtime.WithHTTPPathPattern("/api/v1/debug/node"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DebugCreateNode_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DebugCreateNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_GetNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/GetNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_GetNode_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_GetNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_SetTags_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/SetTags", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/tags"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_SetTags_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_SetTags_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_SetApprovedRoutes_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/SetApprovedRoutes", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/approve_routes"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_SetApprovedRoutes_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_SetApprovedRoutes_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_RegisterNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/RegisterNode", runtime.WithHTTPPathPattern("/api/v1/node/register"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_RegisterNode_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_RegisterNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeleteNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeleteNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DeleteNode_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeleteNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_ExpireNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ExpireNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/expire"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ExpireNode_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ExpireNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_RenameNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/RenameNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/rename/{new_name}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_RenameNode_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_RenameNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListNodes_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListNodes", runtime.WithHTTPPathPattern("/api/v1/node"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ListNodes_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListNodes_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_BackfillNodeIPs_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/BackfillNodeIPs", runtime.WithHTTPPathPattern("/api/v1/node/backfillips"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_BackfillNodeIPs_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_BackfillNodeIPs_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthRegister_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthRegister", runtime.WithHTTPPathPattern("/api/v1/auth/register"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_AuthRegister_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthRegister_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthApprove_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthApprove", runtime.WithHTTPPathPattern("/api/v1/auth/approve"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_AuthApprove_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthApprove_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthReject_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthReject", runtime.WithHTTPPathPattern("/api/v1/auth/reject"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_AuthReject_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthReject_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreateApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/CreateApiKey", runtime.WithHTTPPathPattern("/api/v1/apikey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_CreateApiKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_CreateApiKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_ExpireApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ExpireApiKey", runtime.WithHTTPPathPattern("/api/v1/apikey/expire"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ExpireApiKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ExpireApiKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListApiKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListApiKeys", runtime.WithHTTPPathPattern("/api/v1/apikey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_ListApiKeys_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListApiKeys_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeleteApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeleteApiKey", runtime.WithHTTPPathPattern("/api/v1/apikey/{prefix}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DeleteApiKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeleteApiKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_GetPolicy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/GetPolicy", runtime.WithHTTPPathPattern("/api/v1/policy"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_GetPolicy_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_GetPolicy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPut, pattern_HeadscaleService_SetPolicy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/SetPolicy", runtime.WithHTTPPathPattern("/api/v1/policy"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_SetPolicy_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_SetPolicy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_Health_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/Health", runtime.WithHTTPPathPattern("/api/v1/health"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_Health_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_Health_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
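// A minimal sketch of serving the REST gateway in-process with
// RegisterHeadscaleServiceHandlerServer (server stands in for any
// HeadscaleServiceServer implementation; the listen address is an
// illustrative assumption):
//
//	ctx := context.Background()
//	mux := runtime.NewServeMux()
//	if err := RegisterHeadscaleServiceHandlerServer(ctx, mux, server); err != nil {
//		log.Fatal(err)
//	}
//	log.Fatal(http.ListenAndServe(":8080", mux))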
// RegisterHeadscaleServiceHandlerFromEndpoint is the same as RegisterHeadscaleServiceHandler, but it
// automatically dials "endpoint" and closes the connection when "ctx" is done.
func RegisterHeadscaleServiceHandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error) {
conn, err := grpc.NewClient(endpoint, opts...)
if err != nil {
return err
}
defer func() {
if err != nil {
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
}
return
}
go func() {
<-ctx.Done()
if cerr := conn.Close(); cerr != nil {
grpclog.Errorf("Failed to close conn to %s: %v", endpoint, cerr)
}
}()
}()
return RegisterHeadscaleServiceHandler(ctx, mux, conn)
}
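// A minimal sketch of proxying REST traffic to a separately running gRPC
// endpoint with RegisterHeadscaleServiceHandlerFromEndpoint (the addresses and
// the choice of insecure credentials are illustrative assumptions; requires
// google.golang.org/grpc/credentials/insecure):
//
//	ctx := context.Background()
//	mux := runtime.NewServeMux()
//	opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
//	if err := RegisterHeadscaleServiceHandlerFromEndpoint(ctx, mux, "localhost:50443", opts); err != nil {
//		log.Fatal(err)
//	}
//	log.Fatal(http.ListenAndServe(":8080", mux))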
// RegisterHeadscaleServiceHandler registers the HTTP handlers for service HeadscaleService to "mux".
// The handlers forward requests to the gRPC endpoint over "conn".
func RegisterHeadscaleServiceHandler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.ClientConn) error {
return RegisterHeadscaleServiceHandlerClient(ctx, mux, NewHeadscaleServiceClient(conn))
}
// RegisterHeadscaleServiceHandlerClient registers the HTTP handlers for service HeadscaleService
// to "mux". The handlers forward requests to the gRPC endpoint over the given implementation of "HeadscaleServiceClient".
// Note: the gRPC framework executes interceptors within the gRPC handler. If the passed-in "HeadscaleServiceClient"
// doesn't go through the normal gRPC flow (creating a gRPC client etc.), then it is up to the passed-in
// "HeadscaleServiceClient" to call the correct interceptors. This client ignores the HTTP middlewares.
func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.ServeMux, client HeadscaleServiceClient) error {
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreateUser_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/CreateUser", runtime.WithHTTPPathPattern("/api/v1/user"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_CreateUser_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_CreateUser_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_RenameUser_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/RenameUser", runtime.WithHTTPPathPattern("/api/v1/user/{old_id}/rename/{new_name}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_RenameUser_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_RenameUser_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeleteUser_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeleteUser", runtime.WithHTTPPathPattern("/api/v1/user/{id}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DeleteUser_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeleteUser_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListUsers_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListUsers", runtime.WithHTTPPathPattern("/api/v1/user"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ListUsers_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListUsers_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreatePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/CreatePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_CreatePreAuthKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_CreatePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_ExpirePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ExpirePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey/expire"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListPreAuthKeys", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ListPreAuthKeys_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListPreAuthKeys_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_DebugCreateNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DebugCreateNode", runtime.WithHTTPPathPattern("/api/v1/debug/node"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DebugCreateNode_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DebugCreateNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_GetNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/GetNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_GetNode_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_GetNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_SetTags_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/SetTags", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/tags"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_SetTags_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_SetTags_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_SetApprovedRoutes_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/SetApprovedRoutes", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/approve_routes"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_SetApprovedRoutes_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_SetApprovedRoutes_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_RegisterNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/RegisterNode", runtime.WithHTTPPathPattern("/api/v1/node/register"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_RegisterNode_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_RegisterNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeleteNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeleteNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DeleteNode_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeleteNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_ExpireNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ExpireNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/expire"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ExpireNode_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ExpireNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_RenameNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/RenameNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/rename/{new_name}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_RenameNode_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_RenameNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListNodes_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListNodes", runtime.WithHTTPPathPattern("/api/v1/node"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ListNodes_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListNodes_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_BackfillNodeIPs_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/BackfillNodeIPs", runtime.WithHTTPPathPattern("/api/v1/node/backfillips"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_BackfillNodeIPs_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_BackfillNodeIPs_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthRegister_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthRegister", runtime.WithHTTPPathPattern("/api/v1/auth/register"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_AuthRegister_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthRegister_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthApprove_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthApprove", runtime.WithHTTPPathPattern("/api/v1/auth/approve"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_AuthApprove_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthApprove_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthReject_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthReject", runtime.WithHTTPPathPattern("/api/v1/auth/reject"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_AuthReject_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthReject_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreateApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/CreateApiKey", runtime.WithHTTPPathPattern("/api/v1/apikey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_CreateApiKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_CreateApiKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_ExpireApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ExpireApiKey", runtime.WithHTTPPathPattern("/api/v1/apikey/expire"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ExpireApiKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ExpireApiKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListApiKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/ListApiKeys", runtime.WithHTTPPathPattern("/api/v1/apikey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_ListApiKeys_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_ListApiKeys_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeleteApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeleteApiKey", runtime.WithHTTPPathPattern("/api/v1/apikey/{prefix}"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DeleteApiKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeleteApiKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_GetPolicy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/GetPolicy", runtime.WithHTTPPathPattern("/api/v1/policy"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_GetPolicy_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_GetPolicy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPut, pattern_HeadscaleService_SetPolicy_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/SetPolicy", runtime.WithHTTPPathPattern("/api/v1/policy"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_SetPolicy_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_SetPolicy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_Health_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/Health", runtime.WithHTTPPathPattern("/api/v1/health"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_Health_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_Health_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
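// Editor's note: a hedged usage sketch, not generated code. The handlers
// registered above translate REST routes under /api/v1/... into calls on a
// HeadscaleServiceClient. A minimal way to serve them over HTTP, assuming the
// standard grpc-gateway v2 helper RegisterHeadscaleServiceHandlerFromEndpoint
// generated earlier in this file and the import path
// github.com/juanfont/headscale/gen/go/headscale/v1 (shown as a standalone
// illustrative file):
//
//	package main
//
//	import (
//		"context"
//		"log"
//		"net/http"
//
//		"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
//		v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
//		"google.golang.org/grpc"
//		"google.golang.org/grpc/credentials/insecure"
//	)
//
//	func main() {
//		ctx, cancel := context.WithCancel(context.Background())
//		defer cancel()
//
//		// Route incoming REST requests through the handlers in this file.
//		mux := runtime.NewServeMux()
//		opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
//		if err := v1.RegisterHeadscaleServiceHandlerFromEndpoint(ctx, mux, "localhost:50443", opts); err != nil {
//			log.Fatal(err)
//		}
//		// e.g. GET /api/v1/node is forwarded to HeadscaleService/ListNodes.
//		log.Fatal(http.ListenAndServe(":8080", mux))
//	}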
var (
pattern_HeadscaleService_CreateUser_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "user"}, ""))
pattern_HeadscaleService_RenameUser_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"api", "v1", "user", "old_id", "rename", "new_name"}, ""))
pattern_HeadscaleService_DeleteUser_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "user", "id"}, ""))
pattern_HeadscaleService_ListUsers_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "user"}, ""))
pattern_HeadscaleService_CreatePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_ExpirePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "preauthkey", "expire"}, ""))
pattern_HeadscaleService_DeletePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_ListPreAuthKeys_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_DebugCreateNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "debug", "node"}, ""))
pattern_HeadscaleService_GetNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "node", "node_id"}, ""))
pattern_HeadscaleService_SetTags_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "node", "node_id", "tags"}, ""))
pattern_HeadscaleService_SetApprovedRoutes_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "node", "node_id", "approve_routes"}, ""))
pattern_HeadscaleService_RegisterNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "node", "register"}, ""))
pattern_HeadscaleService_DeleteNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "node", "node_id"}, ""))
pattern_HeadscaleService_ExpireNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "node", "node_id", "expire"}, ""))
pattern_HeadscaleService_RenameNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"api", "v1", "node", "node_id", "rename", "new_name"}, ""))
pattern_HeadscaleService_ListNodes_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "node"}, ""))
pattern_HeadscaleService_BackfillNodeIPs_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "node", "backfillips"}, ""))
pattern_HeadscaleService_AuthRegister_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "auth", "register"}, ""))
pattern_HeadscaleService_AuthApprove_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "auth", "approve"}, ""))
pattern_HeadscaleService_AuthReject_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "auth", "reject"}, ""))
pattern_HeadscaleService_CreateApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "apikey"}, ""))
pattern_HeadscaleService_ExpireApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "apikey", "expire"}, ""))
pattern_HeadscaleService_ListApiKeys_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "apikey"}, ""))
pattern_HeadscaleService_DeleteApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "apikey", "prefix"}, ""))
pattern_HeadscaleService_GetPolicy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "policy"}, ""))
pattern_HeadscaleService_SetPolicy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "policy"}, ""))
pattern_HeadscaleService_Health_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "health"}, ""))
)
var (
forward_HeadscaleService_CreateUser_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_RenameUser_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DeleteUser_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListUsers_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_CreatePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ExpirePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DeletePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListPreAuthKeys_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DebugCreateNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_GetNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_SetTags_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_SetApprovedRoutes_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_RegisterNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DeleteNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ExpireNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_RenameNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListNodes_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_BackfillNodeIPs_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_AuthRegister_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_AuthApprove_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_AuthReject_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_CreateApiKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ExpireApiKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListApiKeys_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DeleteApiKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_GetPolicy_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_SetPolicy_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_Health_0 = runtime.ForwardResponseMessage
)
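// Editor's note: a hedged sketch, not generated code. With a gateway like the
// one above serving on :8080, the routes in this file are ordinary REST
// endpoints. The Bearer-token header below reflects headscale's API-key
// authentication and is an assumption of this example, not something defined
// in this file (assumes the imports fmt, io, log, net/http and os):
//
//	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/api/v1/node", nil)
//	if err != nil {
//		log.Fatal(err)
//	}
//	// Assumption: API keys are presented as Bearer tokens.
//	req.Header.Set("Authorization", "Bearer "+os.Getenv("HEADSCALE_API_KEY"))
//	resp, err := http.DefaultClient.Do(req)
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer resp.Body.Close()
//	body, _ := io.ReadAll(resp.Body)
//	fmt.Println(resp.Status, string(body)) // JSON-encoded ListNodesResponse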
================================================
FILE: gen/go/headscale/v1/headscale_grpc.pb.go
================================================
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.6.0
// - protoc (unknown)
// source: headscale/v1/headscale.proto
package v1
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
HeadscaleService_CreateUser_FullMethodName = "/headscale.v1.HeadscaleService/CreateUser"
HeadscaleService_RenameUser_FullMethodName = "/headscale.v1.HeadscaleService/RenameUser"
HeadscaleService_DeleteUser_FullMethodName = "/headscale.v1.HeadscaleService/DeleteUser"
HeadscaleService_ListUsers_FullMethodName = "/headscale.v1.HeadscaleService/ListUsers"
HeadscaleService_CreatePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/CreatePreAuthKey"
HeadscaleService_ExpirePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpirePreAuthKey"
HeadscaleService_DeletePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/DeletePreAuthKey"
HeadscaleService_ListPreAuthKeys_FullMethodName = "/headscale.v1.HeadscaleService/ListPreAuthKeys"
HeadscaleService_DebugCreateNode_FullMethodName = "/headscale.v1.HeadscaleService/DebugCreateNode"
HeadscaleService_GetNode_FullMethodName = "/headscale.v1.HeadscaleService/GetNode"
HeadscaleService_SetTags_FullMethodName = "/headscale.v1.HeadscaleService/SetTags"
HeadscaleService_SetApprovedRoutes_FullMethodName = "/headscale.v1.HeadscaleService/SetApprovedRoutes"
HeadscaleService_RegisterNode_FullMethodName = "/headscale.v1.HeadscaleService/RegisterNode"
HeadscaleService_DeleteNode_FullMethodName = "/headscale.v1.HeadscaleService/DeleteNode"
HeadscaleService_ExpireNode_FullMethodName = "/headscale.v1.HeadscaleService/ExpireNode"
HeadscaleService_RenameNode_FullMethodName = "/headscale.v1.HeadscaleService/RenameNode"
HeadscaleService_ListNodes_FullMethodName = "/headscale.v1.HeadscaleService/ListNodes"
HeadscaleService_BackfillNodeIPs_FullMethodName = "/headscale.v1.HeadscaleService/BackfillNodeIPs"
HeadscaleService_AuthRegister_FullMethodName = "/headscale.v1.HeadscaleService/AuthRegister"
HeadscaleService_AuthApprove_FullMethodName = "/headscale.v1.HeadscaleService/AuthApprove"
HeadscaleService_AuthReject_FullMethodName = "/headscale.v1.HeadscaleService/AuthReject"
HeadscaleService_CreateApiKey_FullMethodName = "/headscale.v1.HeadscaleService/CreateApiKey"
HeadscaleService_ExpireApiKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpireApiKey"
HeadscaleService_ListApiKeys_FullMethodName = "/headscale.v1.HeadscaleService/ListApiKeys"
HeadscaleService_DeleteApiKey_FullMethodName = "/headscale.v1.HeadscaleService/DeleteApiKey"
HeadscaleService_GetPolicy_FullMethodName = "/headscale.v1.HeadscaleService/GetPolicy"
HeadscaleService_SetPolicy_FullMethodName = "/headscale.v1.HeadscaleService/SetPolicy"
HeadscaleService_Health_FullMethodName = "/headscale.v1.HeadscaleService/Health"
)
// HeadscaleServiceClient is the client API for HeadscaleService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type HeadscaleServiceClient interface {
// --- User start ---
CreateUser(ctx context.Context, in *CreateUserRequest, opts ...grpc.CallOption) (*CreateUserResponse, error)
RenameUser(ctx context.Context, in *RenameUserRequest, opts ...grpc.CallOption) (*RenameUserResponse, error)
DeleteUser(ctx context.Context, in *DeleteUserRequest, opts ...grpc.CallOption) (*DeleteUserResponse, error)
ListUsers(ctx context.Context, in *ListUsersRequest, opts ...grpc.CallOption) (*ListUsersResponse, error)
// --- PreAuthKeys start ---
CreatePreAuthKey(ctx context.Context, in *CreatePreAuthKeyRequest, opts ...grpc.CallOption) (*CreatePreAuthKeyResponse, error)
ExpirePreAuthKey(ctx context.Context, in *ExpirePreAuthKeyRequest, opts ...grpc.CallOption) (*ExpirePreAuthKeyResponse, error)
DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error)
ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error)
// --- Node start ---
DebugCreateNode(ctx context.Context, in *DebugCreateNodeRequest, opts ...grpc.CallOption) (*DebugCreateNodeResponse, error)
GetNode(ctx context.Context, in *GetNodeRequest, opts ...grpc.CallOption) (*GetNodeResponse, error)
SetTags(ctx context.Context, in *SetTagsRequest, opts ...grpc.CallOption) (*SetTagsResponse, error)
SetApprovedRoutes(ctx context.Context, in *SetApprovedRoutesRequest, opts ...grpc.CallOption) (*SetApprovedRoutesResponse, error)
RegisterNode(ctx context.Context, in *RegisterNodeRequest, opts ...grpc.CallOption) (*RegisterNodeResponse, error)
DeleteNode(ctx context.Context, in *DeleteNodeRequest, opts ...grpc.CallOption) (*DeleteNodeResponse, error)
ExpireNode(ctx context.Context, in *ExpireNodeRequest, opts ...grpc.CallOption) (*ExpireNodeResponse, error)
RenameNode(ctx context.Context, in *RenameNodeRequest, opts ...grpc.CallOption) (*RenameNodeResponse, error)
ListNodes(ctx context.Context, in *ListNodesRequest, opts ...grpc.CallOption) (*ListNodesResponse, error)
BackfillNodeIPs(ctx context.Context, in *BackfillNodeIPsRequest, opts ...grpc.CallOption) (*BackfillNodeIPsResponse, error)
// --- Auth start ---
AuthRegister(ctx context.Context, in *AuthRegisterRequest, opts ...grpc.CallOption) (*AuthRegisterResponse, error)
AuthApprove(ctx context.Context, in *AuthApproveRequest, opts ...grpc.CallOption) (*AuthApproveResponse, error)
AuthReject(ctx context.Context, in *AuthRejectRequest, opts ...grpc.CallOption) (*AuthRejectResponse, error)
// --- ApiKeys start ---
CreateApiKey(ctx context.Context, in *CreateApiKeyRequest, opts ...grpc.CallOption) (*CreateApiKeyResponse, error)
ExpireApiKey(ctx context.Context, in *ExpireApiKeyRequest, opts ...grpc.CallOption) (*ExpireApiKeyResponse, error)
ListApiKeys(ctx context.Context, in *ListApiKeysRequest, opts ...grpc.CallOption) (*ListApiKeysResponse, error)
DeleteApiKey(ctx context.Context, in *DeleteApiKeyRequest, opts ...grpc.CallOption) (*DeleteApiKeyResponse, error)
// --- Policy start ---
GetPolicy(ctx context.Context, in *GetPolicyRequest, opts ...grpc.CallOption) (*GetPolicyResponse, error)
SetPolicy(ctx context.Context, in *SetPolicyRequest, opts ...grpc.CallOption) (*SetPolicyResponse, error)
// --- Health start ---
Health(ctx context.Context, in *HealthRequest, opts ...grpc.CallOption) (*HealthResponse, error)
}
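// Editor's note: a hedged usage sketch, not generated code. Minimal use of
// the client interface above over an insecure local connection
// (grpc.NewClient is available within this file's stated gRPC-Go v1.64+
// floor). The gen/go import path and the GetNodes accessor on
// ListNodesResponse are assumptions based on this repository's layout and
// proto definitions:
//
//	package main
//
//	import (
//		"context"
//		"log"
//
//		v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
//		"google.golang.org/grpc"
//		"google.golang.org/grpc/credentials/insecure"
//	)
//
//	func main() {
//		conn, err := grpc.NewClient("localhost:50443",
//			grpc.WithTransportCredentials(insecure.NewCredentials()))
//		if err != nil {
//			log.Fatal(err)
//		}
//		defer conn.Close()
//
//		client := v1.NewHeadscaleServiceClient(conn)
//		resp, err := client.ListNodes(context.Background(), &v1.ListNodesRequest{})
//		if err != nil {
//			log.Fatal(err)
//		}
//		log.Printf("nodes: %d", len(resp.GetNodes()))
//	}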
type headscaleServiceClient struct {
cc grpc.ClientConnInterface
}
func NewHeadscaleServiceClient(cc grpc.ClientConnInterface) HeadscaleServiceClient {
return &headscaleServiceClient{cc}
}
func (c *headscaleServiceClient) CreateUser(ctx context.Context, in *CreateUserRequest, opts ...grpc.CallOption) (*CreateUserResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CreateUserResponse)
err := c.cc.Invoke(ctx, HeadscaleService_CreateUser_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) RenameUser(ctx context.Context, in *RenameUserRequest, opts ...grpc.CallOption) (*RenameUserResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(RenameUserResponse)
err := c.cc.Invoke(ctx, HeadscaleService_RenameUser_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) DeleteUser(ctx context.Context, in *DeleteUserRequest, opts ...grpc.CallOption) (*DeleteUserResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeleteUserResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DeleteUser_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ListUsers(ctx context.Context, in *ListUsersRequest, opts ...grpc.CallOption) (*ListUsersResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListUsersResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ListUsers_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) CreatePreAuthKey(ctx context.Context, in *CreatePreAuthKeyRequest, opts ...grpc.CallOption) (*CreatePreAuthKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CreatePreAuthKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_CreatePreAuthKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ExpirePreAuthKey(ctx context.Context, in *ExpirePreAuthKeyRequest, opts ...grpc.CallOption) (*ExpirePreAuthKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ExpirePreAuthKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ExpirePreAuthKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeletePreAuthKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DeletePreAuthKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListPreAuthKeysResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ListPreAuthKeys_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) DebugCreateNode(ctx context.Context, in *DebugCreateNodeRequest, opts ...grpc.CallOption) (*DebugCreateNodeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DebugCreateNodeResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DebugCreateNode_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) GetNode(ctx context.Context, in *GetNodeRequest, opts ...grpc.CallOption) (*GetNodeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(GetNodeResponse)
err := c.cc.Invoke(ctx, HeadscaleService_GetNode_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) SetTags(ctx context.Context, in *SetTagsRequest, opts ...grpc.CallOption) (*SetTagsResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(SetTagsResponse)
err := c.cc.Invoke(ctx, HeadscaleService_SetTags_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) SetApprovedRoutes(ctx context.Context, in *SetApprovedRoutesRequest, opts ...grpc.CallOption) (*SetApprovedRoutesResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(SetApprovedRoutesResponse)
err := c.cc.Invoke(ctx, HeadscaleService_SetApprovedRoutes_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) RegisterNode(ctx context.Context, in *RegisterNodeRequest, opts ...grpc.CallOption) (*RegisterNodeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(RegisterNodeResponse)
err := c.cc.Invoke(ctx, HeadscaleService_RegisterNode_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) DeleteNode(ctx context.Context, in *DeleteNodeRequest, opts ...grpc.CallOption) (*DeleteNodeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeleteNodeResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DeleteNode_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ExpireNode(ctx context.Context, in *ExpireNodeRequest, opts ...grpc.CallOption) (*ExpireNodeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ExpireNodeResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ExpireNode_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) RenameNode(ctx context.Context, in *RenameNodeRequest, opts ...grpc.CallOption) (*RenameNodeResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(RenameNodeResponse)
err := c.cc.Invoke(ctx, HeadscaleService_RenameNode_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ListNodes(ctx context.Context, in *ListNodesRequest, opts ...grpc.CallOption) (*ListNodesResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListNodesResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ListNodes_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) BackfillNodeIPs(ctx context.Context, in *BackfillNodeIPsRequest, opts ...grpc.CallOption) (*BackfillNodeIPsResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BackfillNodeIPsResponse)
err := c.cc.Invoke(ctx, HeadscaleService_BackfillNodeIPs_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) AuthRegister(ctx context.Context, in *AuthRegisterRequest, opts ...grpc.CallOption) (*AuthRegisterResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(AuthRegisterResponse)
err := c.cc.Invoke(ctx, HeadscaleService_AuthRegister_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) AuthApprove(ctx context.Context, in *AuthApproveRequest, opts ...grpc.CallOption) (*AuthApproveResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(AuthApproveResponse)
err := c.cc.Invoke(ctx, HeadscaleService_AuthApprove_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) AuthReject(ctx context.Context, in *AuthRejectRequest, opts ...grpc.CallOption) (*AuthRejectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(AuthRejectResponse)
err := c.cc.Invoke(ctx, HeadscaleService_AuthReject_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) CreateApiKey(ctx context.Context, in *CreateApiKeyRequest, opts ...grpc.CallOption) (*CreateApiKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CreateApiKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_CreateApiKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ExpireApiKey(ctx context.Context, in *ExpireApiKeyRequest, opts ...grpc.CallOption) (*ExpireApiKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ExpireApiKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ExpireApiKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ListApiKeys(ctx context.Context, in *ListApiKeysRequest, opts ...grpc.CallOption) (*ListApiKeysResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListApiKeysResponse)
err := c.cc.Invoke(ctx, HeadscaleService_ListApiKeys_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) DeleteApiKey(ctx context.Context, in *DeleteApiKeyRequest, opts ...grpc.CallOption) (*DeleteApiKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeleteApiKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DeleteApiKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) GetPolicy(ctx context.Context, in *GetPolicyRequest, opts ...grpc.CallOption) (*GetPolicyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(GetPolicyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_GetPolicy_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) SetPolicy(ctx context.Context, in *SetPolicyRequest, opts ...grpc.CallOption) (*SetPolicyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(SetPolicyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_SetPolicy_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) Health(ctx context.Context, in *HealthRequest, opts ...grpc.CallOption) (*HealthResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(HealthResponse)
err := c.cc.Invoke(ctx, HeadscaleService_Health_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// HeadscaleServiceServer is the server API for HeadscaleService service.
// All implementations must embed UnimplementedHeadscaleServiceServer
// for forward compatibility.
type HeadscaleServiceServer interface {
// --- User start ---
CreateUser(context.Context, *CreateUserRequest) (*CreateUserResponse, error)
RenameUser(context.Context, *RenameUserRequest) (*RenameUserResponse, error)
DeleteUser(context.Context, *DeleteUserRequest) (*DeleteUserResponse, error)
ListUsers(context.Context, *ListUsersRequest) (*ListUsersResponse, error)
// --- PreAuthKeys start ---
CreatePreAuthKey(context.Context, *CreatePreAuthKeyRequest) (*CreatePreAuthKeyResponse, error)
ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error)
DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error)
ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error)
// --- Node start ---
DebugCreateNode(context.Context, *DebugCreateNodeRequest) (*DebugCreateNodeResponse, error)
GetNode(context.Context, *GetNodeRequest) (*GetNodeResponse, error)
SetTags(context.Context, *SetTagsRequest) (*SetTagsResponse, error)
SetApprovedRoutes(context.Context, *SetApprovedRoutesRequest) (*SetApprovedRoutesResponse, error)
RegisterNode(context.Context, *RegisterNodeRequest) (*RegisterNodeResponse, error)
DeleteNode(context.Context, *DeleteNodeRequest) (*DeleteNodeResponse, error)
ExpireNode(context.Context, *ExpireNodeRequest) (*ExpireNodeResponse, error)
RenameNode(context.Context, *RenameNodeRequest) (*RenameNodeResponse, error)
ListNodes(context.Context, *ListNodesRequest) (*ListNodesResponse, error)
BackfillNodeIPs(context.Context, *BackfillNodeIPsRequest) (*BackfillNodeIPsResponse, error)
// --- Auth start ---
AuthRegister(context.Context, *AuthRegisterRequest) (*AuthRegisterResponse, error)
AuthApprove(context.Context, *AuthApproveRequest) (*AuthApproveResponse, error)
AuthReject(context.Context, *AuthRejectRequest) (*AuthRejectResponse, error)
// --- ApiKeys start ---
CreateApiKey(context.Context, *CreateApiKeyRequest) (*CreateApiKeyResponse, error)
ExpireApiKey(context.Context, *ExpireApiKeyRequest) (*ExpireApiKeyResponse, error)
ListApiKeys(context.Context, *ListApiKeysRequest) (*ListApiKeysResponse, error)
DeleteApiKey(context.Context, *DeleteApiKeyRequest) (*DeleteApiKeyResponse, error)
// --- Policy start ---
GetPolicy(context.Context, *GetPolicyRequest) (*GetPolicyResponse, error)
SetPolicy(context.Context, *SetPolicyRequest) (*SetPolicyResponse, error)
// --- Health start ---
Health(context.Context, *HealthRequest) (*HealthResponse, error)
mustEmbedUnimplementedHeadscaleServiceServer()
}
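// Editor's note: a hedged implementation sketch, not generated code. Because
// of mustEmbedUnimplementedHeadscaleServiceServer, concrete servers embed
// UnimplementedHeadscaleServiceServer by value (see the NOTE below) and
// override only the methods they support; every other method returns
// codes.Unimplemented:
//
//	type myServer struct {
//		v1.UnimplementedHeadscaleServiceServer // satisfies HeadscaleServiceServer
//	}
//
//	// Health is the only method overridden in this sketch; the response
//	// fields (if any) are left at their zero values.
//	func (s *myServer) Health(ctx context.Context, req *v1.HealthRequest) (*v1.HealthResponse, error) {
//		return &v1.HealthResponse{}, nil
//	}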
// UnimplementedHeadscaleServiceServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedHeadscaleServiceServer struct{}
func (UnimplementedHeadscaleServiceServer) CreateUser(context.Context, *CreateUserRequest) (*CreateUserResponse, error) {
return nil, status.Error(codes.Unimplemented, "method CreateUser not implemented")
}
func (UnimplementedHeadscaleServiceServer) RenameUser(context.Context, *RenameUserRequest) (*RenameUserResponse, error) {
return nil, status.Error(codes.Unimplemented, "method RenameUser not implemented")
}
func (UnimplementedHeadscaleServiceServer) DeleteUser(context.Context, *DeleteUserRequest) (*DeleteUserResponse, error) {
return nil, status.Error(codes.Unimplemented, "method DeleteUser not implemented")
}
func (UnimplementedHeadscaleServiceServer) ListUsers(context.Context, *ListUsersRequest) (*ListUsersResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ListUsers not implemented")
}
func (UnimplementedHeadscaleServiceServer) CreatePreAuthKey(context.Context, *CreatePreAuthKeyRequest) (*CreatePreAuthKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method CreatePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ExpirePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method DeletePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ListPreAuthKeys not implemented")
}
func (UnimplementedHeadscaleServiceServer) DebugCreateNode(context.Context, *DebugCreateNodeRequest) (*DebugCreateNodeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method DebugCreateNode not implemented")
}
func (UnimplementedHeadscaleServiceServer) GetNode(context.Context, *GetNodeRequest) (*GetNodeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method GetNode not implemented")
}
func (UnimplementedHeadscaleServiceServer) SetTags(context.Context, *SetTagsRequest) (*SetTagsResponse, error) {
return nil, status.Error(codes.Unimplemented, "method SetTags not implemented")
}
func (UnimplementedHeadscaleServiceServer) SetApprovedRoutes(context.Context, *SetApprovedRoutesRequest) (*SetApprovedRoutesResponse, error) {
return nil, status.Error(codes.Unimplemented, "method SetApprovedRoutes not implemented")
}
func (UnimplementedHeadscaleServiceServer) RegisterNode(context.Context, *RegisterNodeRequest) (*RegisterNodeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method RegisterNode not implemented")
}
func (UnimplementedHeadscaleServiceServer) DeleteNode(context.Context, *DeleteNodeRequest) (*DeleteNodeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method DeleteNode not implemented")
}
func (UnimplementedHeadscaleServiceServer) ExpireNode(context.Context, *ExpireNodeRequest) (*ExpireNodeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ExpireNode not implemented")
}
func (UnimplementedHeadscaleServiceServer) RenameNode(context.Context, *RenameNodeRequest) (*RenameNodeResponse, error) {
return nil, status.Error(codes.Unimplemented, "method RenameNode not implemented")
}
func (UnimplementedHeadscaleServiceServer) ListNodes(context.Context, *ListNodesRequest) (*ListNodesResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ListNodes not implemented")
}
func (UnimplementedHeadscaleServiceServer) BackfillNodeIPs(context.Context, *BackfillNodeIPsRequest) (*BackfillNodeIPsResponse, error) {
return nil, status.Error(codes.Unimplemented, "method BackfillNodeIPs not implemented")
}
func (UnimplementedHeadscaleServiceServer) AuthRegister(context.Context, *AuthRegisterRequest) (*AuthRegisterResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AuthRegister not implemented")
}
func (UnimplementedHeadscaleServiceServer) AuthApprove(context.Context, *AuthApproveRequest) (*AuthApproveResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AuthApprove not implemented")
}
func (UnimplementedHeadscaleServiceServer) AuthReject(context.Context, *AuthRejectRequest) (*AuthRejectResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AuthReject not implemented")
}
func (UnimplementedHeadscaleServiceServer) CreateApiKey(context.Context, *CreateApiKeyRequest) (*CreateApiKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method CreateApiKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) ExpireApiKey(context.Context, *ExpireApiKeyRequest) (*ExpireApiKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ExpireApiKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) ListApiKeys(context.Context, *ListApiKeysRequest) (*ListApiKeysResponse, error) {
return nil, status.Error(codes.Unimplemented, "method ListApiKeys not implemented")
}
func (UnimplementedHeadscaleServiceServer) DeleteApiKey(context.Context, *DeleteApiKeyRequest) (*DeleteApiKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method DeleteApiKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) GetPolicy(context.Context, *GetPolicyRequest) (*GetPolicyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method GetPolicy not implemented")
}
func (UnimplementedHeadscaleServiceServer) SetPolicy(context.Context, *SetPolicyRequest) (*SetPolicyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method SetPolicy not implemented")
}
func (UnimplementedHeadscaleServiceServer) Health(context.Context, *HealthRequest) (*HealthResponse, error) {
return nil, status.Error(codes.Unimplemented, "method Health not implemented")
}
func (UnimplementedHeadscaleServiceServer) mustEmbedUnimplementedHeadscaleServiceServer() {}
func (UnimplementedHeadscaleServiceServer) testEmbeddedByValue() {}
// UnsafeHeadscaleServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to HeadscaleServiceServer will
// result in compilation errors.
type UnsafeHeadscaleServiceServer interface {
mustEmbedUnimplementedHeadscaleServiceServer()
}
func RegisterHeadscaleServiceServer(s grpc.ServiceRegistrar, srv HeadscaleServiceServer) {
// If the following call panics, it indicates UnimplementedHeadscaleServiceServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&HeadscaleService_ServiceDesc, srv)
}
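// The sketch below is illustrative only and is not part of the generated
// code. It shows the intended usage pattern: embed
// UnimplementedHeadscaleServiceServer by VALUE so that methods added to the
// service in future releases fall back to codes.Unimplemented instead of
// breaking compilation, then register the implementation. The exampleServer
// type and serveExample function are hypothetical names.
type exampleServer struct {
	UnimplementedHeadscaleServiceServer // embedded by value, not by pointer
}

// Health overrides one generated stub; every other method keeps returning
// codes.Unimplemented through the embedded struct.
func (s *exampleServer) Health(ctx context.Context, in *HealthRequest) (*HealthResponse, error) {
	return &HealthResponse{}, nil
}

func serveExample() {
	s := grpc.NewServer()
	RegisterHeadscaleServiceServer(s, &exampleServer{})
	// s.Serve(listener) would follow in real code.
}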
func _HeadscaleService_CreateUser_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CreateUserRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).CreateUser(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_CreateUser_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).CreateUser(ctx, req.(*CreateUserRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_RenameUser_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(RenameUserRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).RenameUser(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_RenameUser_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).RenameUser(ctx, req.(*RenameUserRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DeleteUser_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteUserRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DeleteUser(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DeleteUser_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DeleteUser(ctx, req.(*DeleteUserRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ListUsers_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListUsersRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ListUsers(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ListUsers_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ListUsers(ctx, req.(*ListUsersRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_CreatePreAuthKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CreatePreAuthKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).CreatePreAuthKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_CreatePreAuthKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).CreatePreAuthKey(ctx, req.(*CreatePreAuthKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ExpirePreAuthKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ExpirePreAuthKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ExpirePreAuthKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ExpirePreAuthKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ExpirePreAuthKey(ctx, req.(*ExpirePreAuthKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DeletePreAuthKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeletePreAuthKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DeletePreAuthKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, req.(*DeletePreAuthKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ListPreAuthKeys_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListPreAuthKeysRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ListPreAuthKeys(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ListPreAuthKeys_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ListPreAuthKeys(ctx, req.(*ListPreAuthKeysRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DebugCreateNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DebugCreateNodeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DebugCreateNode(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DebugCreateNode_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DebugCreateNode(ctx, req.(*DebugCreateNodeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_GetNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetNodeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).GetNode(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_GetNode_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).GetNode(ctx, req.(*GetNodeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_SetTags_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(SetTagsRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).SetTags(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_SetTags_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).SetTags(ctx, req.(*SetTagsRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_SetApprovedRoutes_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(SetApprovedRoutesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).SetApprovedRoutes(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_SetApprovedRoutes_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).SetApprovedRoutes(ctx, req.(*SetApprovedRoutesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_RegisterNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(RegisterNodeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).RegisterNode(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_RegisterNode_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).RegisterNode(ctx, req.(*RegisterNodeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DeleteNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteNodeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DeleteNode(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DeleteNode_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DeleteNode(ctx, req.(*DeleteNodeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ExpireNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ExpireNodeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ExpireNode(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ExpireNode_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ExpireNode(ctx, req.(*ExpireNodeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_RenameNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(RenameNodeRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).RenameNode(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_RenameNode_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).RenameNode(ctx, req.(*RenameNodeRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ListNodes_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListNodesRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ListNodes(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ListNodes_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ListNodes(ctx, req.(*ListNodesRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_BackfillNodeIPs_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(BackfillNodeIPsRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).BackfillNodeIPs(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_BackfillNodeIPs_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).BackfillNodeIPs(ctx, req.(*BackfillNodeIPsRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_AuthRegister_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AuthRegisterRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).AuthRegister(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_AuthRegister_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).AuthRegister(ctx, req.(*AuthRegisterRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_AuthApprove_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AuthApproveRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).AuthApprove(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_AuthApprove_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).AuthApprove(ctx, req.(*AuthApproveRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_AuthReject_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AuthRejectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).AuthReject(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_AuthReject_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).AuthReject(ctx, req.(*AuthRejectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_CreateApiKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CreateApiKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).CreateApiKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_CreateApiKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).CreateApiKey(ctx, req.(*CreateApiKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ExpireApiKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ExpireApiKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ExpireApiKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ExpireApiKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ExpireApiKey(ctx, req.(*ExpireApiKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ListApiKeys_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListApiKeysRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).ListApiKeys(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_ListApiKeys_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).ListApiKeys(ctx, req.(*ListApiKeysRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DeleteApiKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteApiKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DeleteApiKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DeleteApiKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DeleteApiKey(ctx, req.(*DeleteApiKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_GetPolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetPolicyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).GetPolicy(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_GetPolicy_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).GetPolicy(ctx, req.(*GetPolicyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_SetPolicy_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(SetPolicyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).SetPolicy(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_SetPolicy_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).SetPolicy(ctx, req.(*SetPolicyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_Health_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(HealthRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).Health(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_Health_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).Health(ctx, req.(*HealthRequest))
}
return interceptor(ctx, in, info, handler)
}
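// Illustrative only (not generated): each handler above funnels the call
// through a grpc.UnaryServerInterceptor when one is configured, passing the
// method's full name via grpc.UnaryServerInfo. The hypothetical pass-through
// interceptor below shows the shape of that hook.
func examplePassThroughInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	// info.FullMethod is e.g. HeadscaleService_ListNodes_FullMethodName,
	// i.e. "/headscale.v1.HeadscaleService/ListNodes".
	return handler(ctx, req)
}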
// HeadscaleService_ServiceDesc is the grpc.ServiceDesc for HeadscaleService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var HeadscaleService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "headscale.v1.HeadscaleService",
HandlerType: (*HeadscaleServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateUser",
Handler: _HeadscaleService_CreateUser_Handler,
},
{
MethodName: "RenameUser",
Handler: _HeadscaleService_RenameUser_Handler,
},
{
MethodName: "DeleteUser",
Handler: _HeadscaleService_DeleteUser_Handler,
},
{
MethodName: "ListUsers",
Handler: _HeadscaleService_ListUsers_Handler,
},
{
MethodName: "CreatePreAuthKey",
Handler: _HeadscaleService_CreatePreAuthKey_Handler,
},
{
MethodName: "ExpirePreAuthKey",
Handler: _HeadscaleService_ExpirePreAuthKey_Handler,
},
{
MethodName: "DeletePreAuthKey",
Handler: _HeadscaleService_DeletePreAuthKey_Handler,
},
{
MethodName: "ListPreAuthKeys",
Handler: _HeadscaleService_ListPreAuthKeys_Handler,
},
{
MethodName: "DebugCreateNode",
Handler: _HeadscaleService_DebugCreateNode_Handler,
},
{
MethodName: "GetNode",
Handler: _HeadscaleService_GetNode_Handler,
},
{
MethodName: "SetTags",
Handler: _HeadscaleService_SetTags_Handler,
},
{
MethodName: "SetApprovedRoutes",
Handler: _HeadscaleService_SetApprovedRoutes_Handler,
},
{
MethodName: "RegisterNode",
Handler: _HeadscaleService_RegisterNode_Handler,
},
{
MethodName: "DeleteNode",
Handler: _HeadscaleService_DeleteNode_Handler,
},
{
MethodName: "ExpireNode",
Handler: _HeadscaleService_ExpireNode_Handler,
},
{
MethodName: "RenameNode",
Handler: _HeadscaleService_RenameNode_Handler,
},
{
MethodName: "ListNodes",
Handler: _HeadscaleService_ListNodes_Handler,
},
{
MethodName: "BackfillNodeIPs",
Handler: _HeadscaleService_BackfillNodeIPs_Handler,
},
{
MethodName: "AuthRegister",
Handler: _HeadscaleService_AuthRegister_Handler,
},
{
MethodName: "AuthApprove",
Handler: _HeadscaleService_AuthApprove_Handler,
},
{
MethodName: "AuthReject",
Handler: _HeadscaleService_AuthReject_Handler,
},
{
MethodName: "CreateApiKey",
Handler: _HeadscaleService_CreateApiKey_Handler,
},
{
MethodName: "ExpireApiKey",
Handler: _HeadscaleService_ExpireApiKey_Handler,
},
{
MethodName: "ListApiKeys",
Handler: _HeadscaleService_ListApiKeys_Handler,
},
{
MethodName: "DeleteApiKey",
Handler: _HeadscaleService_DeleteApiKey_Handler,
},
{
MethodName: "GetPolicy",
Handler: _HeadscaleService_GetPolicy_Handler,
},
{
MethodName: "SetPolicy",
Handler: _HeadscaleService_SetPolicy_Handler,
},
{
MethodName: "Health",
Handler: _HeadscaleService_Health_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "headscale/v1/headscale.proto",
}
================================================
FILE: gen/go/headscale/v1/node.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/node.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type RegisterMethod int32
const (
RegisterMethod_REGISTER_METHOD_UNSPECIFIED RegisterMethod = 0
RegisterMethod_REGISTER_METHOD_AUTH_KEY RegisterMethod = 1
RegisterMethod_REGISTER_METHOD_CLI RegisterMethod = 2
RegisterMethod_REGISTER_METHOD_OIDC RegisterMethod = 3
)
// Enum value maps for RegisterMethod.
var (
RegisterMethod_name = map[int32]string{
0: "REGISTER_METHOD_UNSPECIFIED",
1: "REGISTER_METHOD_AUTH_KEY",
2: "REGISTER_METHOD_CLI",
3: "REGISTER_METHOD_OIDC",
}
RegisterMethod_value = map[string]int32{
"REGISTER_METHOD_UNSPECIFIED": 0,
"REGISTER_METHOD_AUTH_KEY": 1,
"REGISTER_METHOD_CLI": 2,
"REGISTER_METHOD_OIDC": 3,
}
)
func (x RegisterMethod) Enum() *RegisterMethod {
p := new(RegisterMethod)
*p = x
return p
}
func (x RegisterMethod) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (RegisterMethod) Descriptor() protoreflect.EnumDescriptor {
return file_headscale_v1_node_proto_enumTypes[0].Descriptor()
}
func (RegisterMethod) Type() protoreflect.EnumType {
return &file_headscale_v1_node_proto_enumTypes[0]
}
func (x RegisterMethod) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use RegisterMethod.Descriptor instead.
func (RegisterMethod) EnumDescriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{0}
}
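// Illustrative only (not generated): the enum's String method resolves names
// through the generated descriptor, consistent with the RegisterMethod_name
// and RegisterMethod_value maps above. Hypothetical helper:
func exampleRegisterMethodName() string {
	return RegisterMethod_REGISTER_METHOD_OIDC.String() // "REGISTER_METHOD_OIDC"
}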
type Node struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
MachineKey string `protobuf:"bytes,2,opt,name=machine_key,json=machineKey,proto3" json:"machine_key,omitempty"`
NodeKey string `protobuf:"bytes,3,opt,name=node_key,json=nodeKey,proto3" json:"node_key,omitempty"`
DiscoKey string `protobuf:"bytes,4,opt,name=disco_key,json=discoKey,proto3" json:"disco_key,omitempty"`
IpAddresses []string `protobuf:"bytes,5,rep,name=ip_addresses,json=ipAddresses,proto3" json:"ip_addresses,omitempty"`
Name string `protobuf:"bytes,6,opt,name=name,proto3" json:"name,omitempty"`
User *User `protobuf:"bytes,7,opt,name=user,proto3" json:"user,omitempty"`
LastSeen *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=last_seen,json=lastSeen,proto3" json:"last_seen,omitempty"`
Expiry *timestamppb.Timestamp `protobuf:"bytes,10,opt,name=expiry,proto3" json:"expiry,omitempty"`
PreAuthKey *PreAuthKey `protobuf:"bytes,11,opt,name=pre_auth_key,json=preAuthKey,proto3" json:"pre_auth_key,omitempty"`
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,12,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
RegisterMethod RegisterMethod `protobuf:"varint,13,opt,name=register_method,json=registerMethod,proto3,enum=headscale.v1.RegisterMethod" json:"register_method,omitempty"`
// Deprecated: the following tag fields were removed from the proto
// definition and their field numbers are now reserved:
// repeated string forced_tags = 18;
// repeated string invalid_tags = 19;
// repeated string valid_tags = 20;
GivenName string `protobuf:"bytes,21,opt,name=given_name,json=givenName,proto3" json:"given_name,omitempty"`
Online bool `protobuf:"varint,22,opt,name=online,proto3" json:"online,omitempty"`
ApprovedRoutes []string `protobuf:"bytes,23,rep,name=approved_routes,json=approvedRoutes,proto3" json:"approved_routes,omitempty"`
AvailableRoutes []string `protobuf:"bytes,24,rep,name=available_routes,json=availableRoutes,proto3" json:"available_routes,omitempty"`
SubnetRoutes []string `protobuf:"bytes,25,rep,name=subnet_routes,json=subnetRoutes,proto3" json:"subnet_routes,omitempty"`
Tags []string `protobuf:"bytes,26,rep,name=tags,proto3" json:"tags,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Node) Reset() {
*x = Node{}
mi := &file_headscale_v1_node_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Node) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Node) ProtoMessage() {}
func (x *Node) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Node.ProtoReflect.Descriptor instead.
func (*Node) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{0}
}
func (x *Node) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *Node) GetMachineKey() string {
if x != nil {
return x.MachineKey
}
return ""
}
func (x *Node) GetNodeKey() string {
if x != nil {
return x.NodeKey
}
return ""
}
func (x *Node) GetDiscoKey() string {
if x != nil {
return x.DiscoKey
}
return ""
}
func (x *Node) GetIpAddresses() []string {
if x != nil {
return x.IpAddresses
}
return nil
}
func (x *Node) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *Node) GetUser() *User {
if x != nil {
return x.User
}
return nil
}
func (x *Node) GetLastSeen() *timestamppb.Timestamp {
if x != nil {
return x.LastSeen
}
return nil
}
func (x *Node) GetExpiry() *timestamppb.Timestamp {
if x != nil {
return x.Expiry
}
return nil
}
func (x *Node) GetPreAuthKey() *PreAuthKey {
if x != nil {
return x.PreAuthKey
}
return nil
}
func (x *Node) GetCreatedAt() *timestamppb.Timestamp {
if x != nil {
return x.CreatedAt
}
return nil
}
func (x *Node) GetRegisterMethod() RegisterMethod {
if x != nil {
return x.RegisterMethod
}
return RegisterMethod_REGISTER_METHOD_UNSPECIFIED
}
func (x *Node) GetGivenName() string {
if x != nil {
return x.GivenName
}
return ""
}
func (x *Node) GetOnline() bool {
if x != nil {
return x.Online
}
return false
}
func (x *Node) GetApprovedRoutes() []string {
if x != nil {
return x.ApprovedRoutes
}
return nil
}
func (x *Node) GetAvailableRoutes() []string {
if x != nil {
return x.AvailableRoutes
}
return nil
}
func (x *Node) GetSubnetRoutes() []string {
if x != nil {
return x.SubnetRoutes
}
return nil
}
func (x *Node) GetTags() []string {
if x != nil {
return x.Tags
}
return nil
}
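// Illustrative only (not generated): the getters above are nil-safe, so a
// possibly-nil *Node can be read without explicit nil checks. Hypothetical
// helper:
func exampleNodeLabel(n *Node) string {
	// Both calls return the zero value ("") instead of panicking when n is nil.
	if name := n.GetGivenName(); name != "" {
		return name
	}
	return n.GetName()
}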
type RegisterNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RegisterNodeRequest) Reset() {
*x = RegisterNodeRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RegisterNodeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RegisterNodeRequest) ProtoMessage() {}
func (x *RegisterNodeRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RegisterNodeRequest.ProtoReflect.Descriptor instead.
func (*RegisterNodeRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{1}
}
func (x *RegisterNodeRequest) GetUser() string {
if x != nil {
return x.User
}
return ""
}
func (x *RegisterNodeRequest) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
type RegisterNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RegisterNodeResponse) Reset() {
*x = RegisterNodeResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RegisterNodeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RegisterNodeResponse) ProtoMessage() {}
func (x *RegisterNodeResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RegisterNodeResponse.ProtoReflect.Descriptor instead.
func (*RegisterNodeResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{2}
}
func (x *RegisterNodeResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type GetNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetNodeRequest) Reset() {
*x = GetNodeRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetNodeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetNodeRequest) ProtoMessage() {}
func (x *GetNodeRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetNodeRequest.ProtoReflect.Descriptor instead.
func (*GetNodeRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{3}
}
func (x *GetNodeRequest) GetNodeId() uint64 {
if x != nil {
return x.NodeId
}
return 0
}
type GetNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetNodeResponse) Reset() {
*x = GetNodeResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetNodeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetNodeResponse) ProtoMessage() {}
func (x *GetNodeResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetNodeResponse.ProtoReflect.Descriptor instead.
func (*GetNodeResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{4}
}
func (x *GetNodeResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type SetTagsRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
Tags []string `protobuf:"bytes,2,rep,name=tags,proto3" json:"tags,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SetTagsRequest) Reset() {
*x = SetTagsRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SetTagsRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetTagsRequest) ProtoMessage() {}
func (x *SetTagsRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetTagsRequest.ProtoReflect.Descriptor instead.
func (*SetTagsRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{5}
}
func (x *SetTagsRequest) GetNodeId() uint64 {
if x != nil {
return x.NodeId
}
return 0
}
func (x *SetTagsRequest) GetTags() []string {
if x != nil {
return x.Tags
}
return nil
}
type SetTagsResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SetTagsResponse) Reset() {
*x = SetTagsResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SetTagsResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetTagsResponse) ProtoMessage() {}
func (x *SetTagsResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetTagsResponse.ProtoReflect.Descriptor instead.
func (*SetTagsResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{6}
}
func (x *SetTagsResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type SetApprovedRoutesRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
Routes []string `protobuf:"bytes,2,rep,name=routes,proto3" json:"routes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SetApprovedRoutesRequest) Reset() {
*x = SetApprovedRoutesRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SetApprovedRoutesRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetApprovedRoutesRequest) ProtoMessage() {}
func (x *SetApprovedRoutesRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetApprovedRoutesRequest.ProtoReflect.Descriptor instead.
func (*SetApprovedRoutesRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{7}
}
func (x *SetApprovedRoutesRequest) GetNodeId() uint64 {
if x != nil {
return x.NodeId
}
return 0
}
func (x *SetApprovedRoutesRequest) GetRoutes() []string {
if x != nil {
return x.Routes
}
return nil
}
type SetApprovedRoutesResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SetApprovedRoutesResponse) Reset() {
*x = SetApprovedRoutesResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SetApprovedRoutesResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetApprovedRoutesResponse) ProtoMessage() {}
func (x *SetApprovedRoutesResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetApprovedRoutesResponse.ProtoReflect.Descriptor instead.
func (*SetApprovedRoutesResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{8}
}
func (x *SetApprovedRoutesResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type DeleteNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteNodeRequest) Reset() {
*x = DeleteNodeRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteNodeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteNodeRequest) ProtoMessage() {}
func (x *DeleteNodeRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[9]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteNodeRequest.ProtoReflect.Descriptor instead.
func (*DeleteNodeRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{9}
}
func (x *DeleteNodeRequest) GetNodeId() uint64 {
if x != nil {
return x.NodeId
}
return 0
}
type DeleteNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteNodeResponse) Reset() {
*x = DeleteNodeResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteNodeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteNodeResponse) ProtoMessage() {}
func (x *DeleteNodeResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[10]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteNodeResponse.ProtoReflect.Descriptor instead.
func (*DeleteNodeResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{10}
}
type ExpireNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
Expiry *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=expiry,proto3" json:"expiry,omitempty"`
// When true, sets expiry to null (node will never expire).
DisableExpiry bool `protobuf:"varint,3,opt,name=disable_expiry,json=disableExpiry,proto3" json:"disable_expiry,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ExpireNodeRequest) Reset() {
*x = ExpireNodeRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ExpireNodeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpireNodeRequest) ProtoMessage() {}
func (x *ExpireNodeRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[11]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpireNodeRequest.ProtoReflect.Descriptor instead.
func (*ExpireNodeRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{11}
}
func (x *ExpireNodeRequest) GetNodeId() uint64 {
if x != nil {
return x.NodeId
}
return 0
}
func (x *ExpireNodeRequest) GetExpiry() *timestamppb.Timestamp {
if x != nil {
return x.Expiry
}
return nil
}
func (x *ExpireNodeRequest) GetDisableExpiry() bool {
if x != nil {
return x.DisableExpiry
}
return false
}
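// Illustrative only (not generated): per the disable_expiry comment above, a
// caller clears a node's expiry by setting DisableExpiry rather than
// supplying a timestamp. Hypothetical request for node ID 1:
var exampleNeverExpireRequest = &ExpireNodeRequest{
	NodeId:        1,
	DisableExpiry: true, // Expiry stays nil; the node will never expire.
}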
type ExpireNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ExpireNodeResponse) Reset() {
*x = ExpireNodeResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ExpireNodeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpireNodeResponse) ProtoMessage() {}
func (x *ExpireNodeResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[12]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpireNodeResponse.ProtoReflect.Descriptor instead.
func (*ExpireNodeResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{12}
}
func (x *ExpireNodeResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type RenameNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
NewName string `protobuf:"bytes,2,opt,name=new_name,json=newName,proto3" json:"new_name,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RenameNodeRequest) Reset() {
*x = RenameNodeRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[13]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RenameNodeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RenameNodeRequest) ProtoMessage() {}
func (x *RenameNodeRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[13]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RenameNodeRequest.ProtoReflect.Descriptor instead.
func (*RenameNodeRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{13}
}
func (x *RenameNodeRequest) GetNodeId() uint64 {
if x != nil {
return x.NodeId
}
return 0
}
func (x *RenameNodeRequest) GetNewName() string {
if x != nil {
return x.NewName
}
return ""
}
type RenameNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RenameNodeResponse) Reset() {
*x = RenameNodeResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[14]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RenameNodeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RenameNodeResponse) ProtoMessage() {}
func (x *RenameNodeResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[14]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RenameNodeResponse.ProtoReflect.Descriptor instead.
func (*RenameNodeResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{14}
}
func (x *RenameNodeResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type ListNodesRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListNodesRequest) Reset() {
*x = ListNodesRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[15]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListNodesRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListNodesRequest) ProtoMessage() {}
func (x *ListNodesRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[15]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListNodesRequest.ProtoReflect.Descriptor instead.
func (*ListNodesRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{15}
}
func (x *ListNodesRequest) GetUser() string {
if x != nil {
return x.User
}
return ""
}
type ListNodesResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Nodes []*Node `protobuf:"bytes,1,rep,name=nodes,proto3" json:"nodes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListNodesResponse) Reset() {
*x = ListNodesResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[16]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListNodesResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListNodesResponse) ProtoMessage() {}
func (x *ListNodesResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[16]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListNodesResponse.ProtoReflect.Descriptor instead.
func (*ListNodesResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{16}
}
func (x *ListNodesResponse) GetNodes() []*Node {
if x != nil {
return x.Nodes
}
return nil
}
type DebugCreateNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
Name string `protobuf:"bytes,3,opt,name=name,proto3" json:"name,omitempty"`
Routes []string `protobuf:"bytes,4,rep,name=routes,proto3" json:"routes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DebugCreateNodeRequest) Reset() {
*x = DebugCreateNodeRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[17]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DebugCreateNodeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DebugCreateNodeRequest) ProtoMessage() {}
func (x *DebugCreateNodeRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[17]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DebugCreateNodeRequest.ProtoReflect.Descriptor instead.
func (*DebugCreateNodeRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{17}
}
func (x *DebugCreateNodeRequest) GetUser() string {
if x != nil {
return x.User
}
return ""
}
func (x *DebugCreateNodeRequest) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *DebugCreateNodeRequest) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *DebugCreateNodeRequest) GetRoutes() []string {
if x != nil {
return x.Routes
}
return nil
}
type DebugCreateNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DebugCreateNodeResponse) Reset() {
*x = DebugCreateNodeResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[18]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DebugCreateNodeResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DebugCreateNodeResponse) ProtoMessage() {}
func (x *DebugCreateNodeResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[18]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DebugCreateNodeResponse.ProtoReflect.Descriptor instead.
func (*DebugCreateNodeResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{18}
}
func (x *DebugCreateNodeResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type BackfillNodeIPsRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Confirmed bool `protobuf:"varint,1,opt,name=confirmed,proto3" json:"confirmed,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *BackfillNodeIPsRequest) Reset() {
*x = BackfillNodeIPsRequest{}
mi := &file_headscale_v1_node_proto_msgTypes[19]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *BackfillNodeIPsRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*BackfillNodeIPsRequest) ProtoMessage() {}
func (x *BackfillNodeIPsRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[19]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use BackfillNodeIPsRequest.ProtoReflect.Descriptor instead.
func (*BackfillNodeIPsRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{19}
}
func (x *BackfillNodeIPsRequest) GetConfirmed() bool {
if x != nil {
return x.Confirmed
}
return false
}
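// Illustrative only (not generated): Confirmed reads as a safety flag for the
// IP backfill operation, which the server presumably rejects when unset.
// Hypothetical opt-in request:
var exampleBackfillRequest = &BackfillNodeIPsRequest{
	Confirmed: true,
}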
type BackfillNodeIPsResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Changes []string `protobuf:"bytes,1,rep,name=changes,proto3" json:"changes,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *BackfillNodeIPsResponse) Reset() {
*x = BackfillNodeIPsResponse{}
mi := &file_headscale_v1_node_proto_msgTypes[20]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *BackfillNodeIPsResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*BackfillNodeIPsResponse) ProtoMessage() {}
func (x *BackfillNodeIPsResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_node_proto_msgTypes[20]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use BackfillNodeIPsResponse.ProtoReflect.Descriptor instead.
func (*BackfillNodeIPsResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_node_proto_rawDescGZIP(), []int{20}
}
func (x *BackfillNodeIPsResponse) GetChanges() []string {
if x != nil {
return x.Changes
}
return nil
}
var File_headscale_v1_node_proto protoreflect.FileDescriptor
const file_headscale_v1_node_proto_rawDesc = "" +
"\n" +
"\x17headscale/v1/node.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/user.proto\"\xc9\x05\n" +
"\x04Node\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\x12\x1f\n" +
"\vmachine_key\x18\x02 \x01(\tR\n" +
"machineKey\x12\x19\n" +
"\bnode_key\x18\x03 \x01(\tR\anodeKey\x12\x1b\n" +
"\tdisco_key\x18\x04 \x01(\tR\bdiscoKey\x12!\n" +
"\fip_addresses\x18\x05 \x03(\tR\vipAddresses\x12\x12\n" +
"\x04name\x18\x06 \x01(\tR\x04name\x12&\n" +
"\x04user\x18\a \x01(\v2\x12.headscale.v1.UserR\x04user\x127\n" +
"\tlast_seen\x18\b \x01(\v2\x1a.google.protobuf.TimestampR\blastSeen\x122\n" +
"\x06expiry\x18\n" +
" \x01(\v2\x1a.google.protobuf.TimestampR\x06expiry\x12:\n" +
"\fpre_auth_key\x18\v \x01(\v2\x18.headscale.v1.PreAuthKeyR\n" +
"preAuthKey\x129\n" +
"\n" +
"created_at\x18\f \x01(\v2\x1a.google.protobuf.TimestampR\tcreatedAt\x12E\n" +
"\x0fregister_method\x18\r \x01(\x0e2\x1c.headscale.v1.RegisterMethodR\x0eregisterMethod\x12\x1d\n" +
"\n" +
"given_name\x18\x15 \x01(\tR\tgivenName\x12\x16\n" +
"\x06online\x18\x16 \x01(\bR\x06online\x12'\n" +
"\x0fapproved_routes\x18\x17 \x03(\tR\x0eapprovedRoutes\x12)\n" +
"\x10available_routes\x18\x18 \x03(\tR\x0favailableRoutes\x12#\n" +
"\rsubnet_routes\x18\x19 \x03(\tR\fsubnetRoutes\x12\x12\n" +
"\x04tags\x18\x1a \x03(\tR\x04tagsJ\x04\b\t\x10\n" +
"J\x04\b\x0e\x10\x15\";\n" +
"\x13RegisterNodeRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\tR\x04user\x12\x10\n" +
"\x03key\x18\x02 \x01(\tR\x03key\">\n" +
"\x14RegisterNodeResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\")\n" +
"\x0eGetNodeRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\"9\n" +
"\x0fGetNodeResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"=\n" +
"\x0eSetTagsRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\x12\x12\n" +
"\x04tags\x18\x02 \x03(\tR\x04tags\"9\n" +
"\x0fSetTagsResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"K\n" +
"\x18SetApprovedRoutesRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\x12\x16\n" +
"\x06routes\x18\x02 \x03(\tR\x06routes\"C\n" +
"\x19SetApprovedRoutesResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\",\n" +
"\x11DeleteNodeRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\"\x14\n" +
"\x12DeleteNodeResponse\"\x87\x01\n" +
"\x11ExpireNodeRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\x122\n" +
"\x06expiry\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\x06expiry\x12%\n" +
"\x0edisable_expiry\x18\x03 \x01(\bR\rdisableExpiry\"<\n" +
"\x12ExpireNodeResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"G\n" +
"\x11RenameNodeRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\x12\x19\n" +
"\bnew_name\x18\x02 \x01(\tR\anewName\"<\n" +
"\x12RenameNodeResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"&\n" +
"\x10ListNodesRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\tR\x04user\"=\n" +
"\x11ListNodesResponse\x12(\n" +
"\x05nodes\x18\x01 \x03(\v2\x12.headscale.v1.NodeR\x05nodes\"j\n" +
"\x16DebugCreateNodeRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\tR\x04user\x12\x10\n" +
"\x03key\x18\x02 \x01(\tR\x03key\x12\x12\n" +
"\x04name\x18\x03 \x01(\tR\x04name\x12\x16\n" +
"\x06routes\x18\x04 \x03(\tR\x06routes\"A\n" +
"\x17DebugCreateNodeResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"6\n" +
"\x16BackfillNodeIPsRequest\x12\x1c\n" +
"\tconfirmed\x18\x01 \x01(\bR\tconfirmed\"3\n" +
"\x17BackfillNodeIPsResponse\x12\x18\n" +
"\achanges\x18\x01 \x03(\tR\achanges*\x82\x01\n" +
"\x0eRegisterMethod\x12\x1f\n" +
"\x1bREGISTER_METHOD_UNSPECIFIED\x10\x00\x12\x1c\n" +
"\x18REGISTER_METHOD_AUTH_KEY\x10\x01\x12\x17\n" +
"\x13REGISTER_METHOD_CLI\x10\x02\x12\x18\n" +
"\x14REGISTER_METHOD_OIDC\x10\x03B)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_node_proto_rawDescOnce sync.Once
file_headscale_v1_node_proto_rawDescData []byte
)
func file_headscale_v1_node_proto_rawDescGZIP() []byte {
file_headscale_v1_node_proto_rawDescOnce.Do(func() {
file_headscale_v1_node_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_node_proto_rawDesc), len(file_headscale_v1_node_proto_rawDesc)))
})
return file_headscale_v1_node_proto_rawDescData
}
var file_headscale_v1_node_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_headscale_v1_node_proto_msgTypes = make([]protoimpl.MessageInfo, 21)
var file_headscale_v1_node_proto_goTypes = []any{
(RegisterMethod)(0), // 0: headscale.v1.RegisterMethod
(*Node)(nil), // 1: headscale.v1.Node
(*RegisterNodeRequest)(nil), // 2: headscale.v1.RegisterNodeRequest
(*RegisterNodeResponse)(nil), // 3: headscale.v1.RegisterNodeResponse
(*GetNodeRequest)(nil), // 4: headscale.v1.GetNodeRequest
(*GetNodeResponse)(nil), // 5: headscale.v1.GetNodeResponse
(*SetTagsRequest)(nil), // 6: headscale.v1.SetTagsRequest
(*SetTagsResponse)(nil), // 7: headscale.v1.SetTagsResponse
(*SetApprovedRoutesRequest)(nil), // 8: headscale.v1.SetApprovedRoutesRequest
(*SetApprovedRoutesResponse)(nil), // 9: headscale.v1.SetApprovedRoutesResponse
(*DeleteNodeRequest)(nil), // 10: headscale.v1.DeleteNodeRequest
(*DeleteNodeResponse)(nil), // 11: headscale.v1.DeleteNodeResponse
(*ExpireNodeRequest)(nil), // 12: headscale.v1.ExpireNodeRequest
(*ExpireNodeResponse)(nil), // 13: headscale.v1.ExpireNodeResponse
(*RenameNodeRequest)(nil), // 14: headscale.v1.RenameNodeRequest
(*RenameNodeResponse)(nil), // 15: headscale.v1.RenameNodeResponse
(*ListNodesRequest)(nil), // 16: headscale.v1.ListNodesRequest
(*ListNodesResponse)(nil), // 17: headscale.v1.ListNodesResponse
(*DebugCreateNodeRequest)(nil), // 18: headscale.v1.DebugCreateNodeRequest
(*DebugCreateNodeResponse)(nil), // 19: headscale.v1.DebugCreateNodeResponse
(*BackfillNodeIPsRequest)(nil), // 20: headscale.v1.BackfillNodeIPsRequest
(*BackfillNodeIPsResponse)(nil), // 21: headscale.v1.BackfillNodeIPsResponse
(*User)(nil), // 22: headscale.v1.User
(*timestamppb.Timestamp)(nil), // 23: google.protobuf.Timestamp
(*PreAuthKey)(nil), // 24: headscale.v1.PreAuthKey
}
var file_headscale_v1_node_proto_depIdxs = []int32{
22, // 0: headscale.v1.Node.user:type_name -> headscale.v1.User
23, // 1: headscale.v1.Node.last_seen:type_name -> google.protobuf.Timestamp
23, // 2: headscale.v1.Node.expiry:type_name -> google.protobuf.Timestamp
24, // 3: headscale.v1.Node.pre_auth_key:type_name -> headscale.v1.PreAuthKey
23, // 4: headscale.v1.Node.created_at:type_name -> google.protobuf.Timestamp
0, // 5: headscale.v1.Node.register_method:type_name -> headscale.v1.RegisterMethod
1, // 6: headscale.v1.RegisterNodeResponse.node:type_name -> headscale.v1.Node
1, // 7: headscale.v1.GetNodeResponse.node:type_name -> headscale.v1.Node
1, // 8: headscale.v1.SetTagsResponse.node:type_name -> headscale.v1.Node
1, // 9: headscale.v1.SetApprovedRoutesResponse.node:type_name -> headscale.v1.Node
23, // 10: headscale.v1.ExpireNodeRequest.expiry:type_name -> google.protobuf.Timestamp
1, // 11: headscale.v1.ExpireNodeResponse.node:type_name -> headscale.v1.Node
1, // 12: headscale.v1.RenameNodeResponse.node:type_name -> headscale.v1.Node
1, // 13: headscale.v1.ListNodesResponse.nodes:type_name -> headscale.v1.Node
1, // 14: headscale.v1.DebugCreateNodeResponse.node:type_name -> headscale.v1.Node
15, // [15:15] is the sub-list for method output_type
15, // [15:15] is the sub-list for method input_type
15, // [15:15] is the sub-list for extension type_name
15, // [15:15] is the sub-list for extension extendee
0, // [0:15] is the sub-list for field type_name
}
func init() { file_headscale_v1_node_proto_init() }
func file_headscale_v1_node_proto_init() {
if File_headscale_v1_node_proto != nil {
return
}
file_headscale_v1_preauthkey_proto_init()
file_headscale_v1_user_proto_init()
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_node_proto_rawDesc), len(file_headscale_v1_node_proto_rawDesc)),
NumEnums: 1,
NumMessages: 21,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_node_proto_goTypes,
DependencyIndexes: file_headscale_v1_node_proto_depIdxs,
EnumInfos: file_headscale_v1_node_proto_enumTypes,
MessageInfos: file_headscale_v1_node_proto_msgTypes,
}.Build()
File_headscale_v1_node_proto = out.File
file_headscale_v1_node_proto_goTypes = nil
file_headscale_v1_node_proto_depIdxs = nil
}
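
[Editor's note] The message types in node.pb.go are plain Go structs backed by the raw descriptor above. A minimal round-trip sketch, illustrative only and not part of the generated file — the import path is taken from the go_package option in the raw descriptor, and the key value is a placeholder:

package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"

	v1 "github.com/juanfont/headscale/gen/go/v1"
)

func main() {
	// Build a registration request and round-trip it through the wire format.
	req := &v1.RegisterNodeRequest{
		User: "alice",
		Key:  "nodekey-placeholder", // a real value comes from the client
	}

	raw, err := proto.Marshal(req)
	if err != nil {
		log.Fatal(err)
	}

	var decoded v1.RegisterNodeRequest
	if err := proto.Unmarshal(raw, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded.GetUser()) // prints "alice"
}
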
================================================
FILE: gen/go/headscale/v1/policy.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/policy.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type SetPolicyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Policy string `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SetPolicyRequest) Reset() {
*x = SetPolicyRequest{}
mi := &file_headscale_v1_policy_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SetPolicyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetPolicyRequest) ProtoMessage() {}
func (x *SetPolicyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_policy_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetPolicyRequest.ProtoReflect.Descriptor instead.
func (*SetPolicyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_policy_proto_rawDescGZIP(), []int{0}
}
func (x *SetPolicyRequest) GetPolicy() string {
if x != nil {
return x.Policy
}
return ""
}
type SetPolicyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Policy string `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"`
UpdatedAt *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=updated_at,json=updatedAt,proto3" json:"updated_at,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SetPolicyResponse) Reset() {
*x = SetPolicyResponse{}
mi := &file_headscale_v1_policy_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SetPolicyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SetPolicyResponse) ProtoMessage() {}
func (x *SetPolicyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_policy_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SetPolicyResponse.ProtoReflect.Descriptor instead.
func (*SetPolicyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_policy_proto_rawDescGZIP(), []int{1}
}
func (x *SetPolicyResponse) GetPolicy() string {
if x != nil {
return x.Policy
}
return ""
}
func (x *SetPolicyResponse) GetUpdatedAt() *timestamppb.Timestamp {
if x != nil {
return x.UpdatedAt
}
return nil
}
type GetPolicyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetPolicyRequest) Reset() {
*x = GetPolicyRequest{}
mi := &file_headscale_v1_policy_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetPolicyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetPolicyRequest) ProtoMessage() {}
func (x *GetPolicyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_policy_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetPolicyRequest.ProtoReflect.Descriptor instead.
func (*GetPolicyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_policy_proto_rawDescGZIP(), []int{2}
}
type GetPolicyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Policy string `protobuf:"bytes,1,opt,name=policy,proto3" json:"policy,omitempty"`
UpdatedAt *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=updated_at,json=updatedAt,proto3" json:"updated_at,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GetPolicyResponse) Reset() {
*x = GetPolicyResponse{}
mi := &file_headscale_v1_policy_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *GetPolicyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*GetPolicyResponse) ProtoMessage() {}
func (x *GetPolicyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_policy_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use GetPolicyResponse.ProtoReflect.Descriptor instead.
func (*GetPolicyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_policy_proto_rawDescGZIP(), []int{3}
}
func (x *GetPolicyResponse) GetPolicy() string {
if x != nil {
return x.Policy
}
return ""
}
func (x *GetPolicyResponse) GetUpdatedAt() *timestamppb.Timestamp {
if x != nil {
return x.UpdatedAt
}
return nil
}
var File_headscale_v1_policy_proto protoreflect.FileDescriptor
const file_headscale_v1_policy_proto_rawDesc = "" +
"\n" +
"\x19headscale/v1/policy.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\"*\n" +
"\x10SetPolicyRequest\x12\x16\n" +
"\x06policy\x18\x01 \x01(\tR\x06policy\"f\n" +
"\x11SetPolicyResponse\x12\x16\n" +
"\x06policy\x18\x01 \x01(\tR\x06policy\x129\n" +
"\n" +
"updated_at\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\tupdatedAt\"\x12\n" +
"\x10GetPolicyRequest\"f\n" +
"\x11GetPolicyResponse\x12\x16\n" +
"\x06policy\x18\x01 \x01(\tR\x06policy\x129\n" +
"\n" +
"updated_at\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\tupdatedAtB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_policy_proto_rawDescOnce sync.Once
file_headscale_v1_policy_proto_rawDescData []byte
)
func file_headscale_v1_policy_proto_rawDescGZIP() []byte {
file_headscale_v1_policy_proto_rawDescOnce.Do(func() {
file_headscale_v1_policy_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_policy_proto_rawDesc), len(file_headscale_v1_policy_proto_rawDesc)))
})
return file_headscale_v1_policy_proto_rawDescData
}
var file_headscale_v1_policy_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
var file_headscale_v1_policy_proto_goTypes = []any{
(*SetPolicyRequest)(nil), // 0: headscale.v1.SetPolicyRequest
(*SetPolicyResponse)(nil), // 1: headscale.v1.SetPolicyResponse
(*GetPolicyRequest)(nil), // 2: headscale.v1.GetPolicyRequest
(*GetPolicyResponse)(nil), // 3: headscale.v1.GetPolicyResponse
(*timestamppb.Timestamp)(nil), // 4: google.protobuf.Timestamp
}
var file_headscale_v1_policy_proto_depIdxs = []int32{
4, // 0: headscale.v1.SetPolicyResponse.updated_at:type_name -> google.protobuf.Timestamp
4, // 1: headscale.v1.GetPolicyResponse.updated_at:type_name -> google.protobuf.Timestamp
2, // [2:2] is the sub-list for method output_type
2, // [2:2] is the sub-list for method input_type
2, // [2:2] is the sub-list for extension type_name
2, // [2:2] is the sub-list for extension extendee
0, // [0:2] is the sub-list for field type_name
}
func init() { file_headscale_v1_policy_proto_init() }
func file_headscale_v1_policy_proto_init() {
if File_headscale_v1_policy_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_policy_proto_rawDesc), len(file_headscale_v1_policy_proto_rawDesc)),
NumEnums: 0,
NumMessages: 4,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_policy_proto_goTypes,
DependencyIndexes: file_headscale_v1_policy_proto_depIdxs,
MessageInfos: file_headscale_v1_policy_proto_msgTypes,
}.Build()
File_headscale_v1_policy_proto = out.File
file_headscale_v1_policy_proto_goTypes = nil
file_headscale_v1_policy_proto_depIdxs = nil
}
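
[Editor's note] SetPolicyRequest carries the policy as an opaque string; both response types pair it with an update timestamp. A short sketch, illustrative only, using the same imports as the sketch after node.pb.go plus google.golang.org/protobuf/types/known/timestamppb (the policy body shown is a placeholder):

func setPolicyExample() {
	req := &v1.SetPolicyRequest{Policy: `{"acls": []}`} // placeholder policy body
	resp := &v1.SetPolicyResponse{
		Policy:    req.GetPolicy(),
		UpdatedAt: timestamppb.Now(),
	}
	// AsTime converts the protobuf timestamp to a time.Time.
	fmt.Println(resp.GetUpdatedAt().AsTime())
}
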
================================================
FILE: gen/go/headscale/v1/preauthkey.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/preauthkey.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type PreAuthKey struct {
state protoimpl.MessageState `protogen:"open.v1"`
User *User `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
Id uint64 `protobuf:"varint,2,opt,name=id,proto3" json:"id,omitempty"`
Key string `protobuf:"bytes,3,opt,name=key,proto3" json:"key,omitempty"`
Reusable bool `protobuf:"varint,4,opt,name=reusable,proto3" json:"reusable,omitempty"`
Ephemeral bool `protobuf:"varint,5,opt,name=ephemeral,proto3" json:"ephemeral,omitempty"`
Used bool `protobuf:"varint,6,opt,name=used,proto3" json:"used,omitempty"`
Expiration *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=expiration,proto3" json:"expiration,omitempty"`
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
AclTags []string `protobuf:"bytes,9,rep,name=acl_tags,json=aclTags,proto3" json:"acl_tags,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *PreAuthKey) Reset() {
*x = PreAuthKey{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *PreAuthKey) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PreAuthKey) ProtoMessage() {}
func (x *PreAuthKey) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PreAuthKey.ProtoReflect.Descriptor instead.
func (*PreAuthKey) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{0}
}
func (x *PreAuthKey) GetUser() *User {
if x != nil {
return x.User
}
return nil
}
func (x *PreAuthKey) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *PreAuthKey) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
func (x *PreAuthKey) GetReusable() bool {
if x != nil {
return x.Reusable
}
return false
}
func (x *PreAuthKey) GetEphemeral() bool {
if x != nil {
return x.Ephemeral
}
return false
}
func (x *PreAuthKey) GetUsed() bool {
if x != nil {
return x.Used
}
return false
}
func (x *PreAuthKey) GetExpiration() *timestamppb.Timestamp {
if x != nil {
return x.Expiration
}
return nil
}
func (x *PreAuthKey) GetCreatedAt() *timestamppb.Timestamp {
if x != nil {
return x.CreatedAt
}
return nil
}
func (x *PreAuthKey) GetAclTags() []string {
if x != nil {
return x.AclTags
}
return nil
}
type CreatePreAuthKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"`
Reusable bool `protobuf:"varint,2,opt,name=reusable,proto3" json:"reusable,omitempty"`
Ephemeral bool `protobuf:"varint,3,opt,name=ephemeral,proto3" json:"ephemeral,omitempty"`
Expiration *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=expiration,proto3" json:"expiration,omitempty"`
AclTags []string `protobuf:"bytes,5,rep,name=acl_tags,json=aclTags,proto3" json:"acl_tags,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreatePreAuthKeyRequest) Reset() {
*x = CreatePreAuthKeyRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreatePreAuthKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreatePreAuthKeyRequest) ProtoMessage() {}
func (x *CreatePreAuthKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreatePreAuthKeyRequest.ProtoReflect.Descriptor instead.
func (*CreatePreAuthKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{1}
}
func (x *CreatePreAuthKeyRequest) GetUser() uint64 {
if x != nil {
return x.User
}
return 0
}
func (x *CreatePreAuthKeyRequest) GetReusable() bool {
if x != nil {
return x.Reusable
}
return false
}
func (x *CreatePreAuthKeyRequest) GetEphemeral() bool {
if x != nil {
return x.Ephemeral
}
return false
}
func (x *CreatePreAuthKeyRequest) GetExpiration() *timestamppb.Timestamp {
if x != nil {
return x.Expiration
}
return nil
}
func (x *CreatePreAuthKeyRequest) GetAclTags() []string {
if x != nil {
return x.AclTags
}
return nil
}
type CreatePreAuthKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
PreAuthKey *PreAuthKey `protobuf:"bytes,1,opt,name=pre_auth_key,json=preAuthKey,proto3" json:"pre_auth_key,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreatePreAuthKeyResponse) Reset() {
*x = CreatePreAuthKeyResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreatePreAuthKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreatePreAuthKeyResponse) ProtoMessage() {}
func (x *CreatePreAuthKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreatePreAuthKeyResponse.ProtoReflect.Descriptor instead.
func (*CreatePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{2}
}
func (x *CreatePreAuthKeyResponse) GetPreAuthKey() *PreAuthKey {
if x != nil {
return x.PreAuthKey
}
return nil
}
type ExpirePreAuthKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ExpirePreAuthKeyRequest) Reset() {
*x = ExpirePreAuthKeyRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ExpirePreAuthKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpirePreAuthKeyRequest) ProtoMessage() {}
func (x *ExpirePreAuthKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpirePreAuthKeyRequest.ProtoReflect.Descriptor instead.
func (*ExpirePreAuthKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{3}
}
func (x *ExpirePreAuthKeyRequest) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
type ExpirePreAuthKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ExpirePreAuthKeyResponse) Reset() {
*x = ExpirePreAuthKeyResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ExpirePreAuthKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpirePreAuthKeyResponse) ProtoMessage() {}
func (x *ExpirePreAuthKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpirePreAuthKeyResponse.ProtoReflect.Descriptor instead.
func (*ExpirePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{4}
}
type DeletePreAuthKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePreAuthKeyRequest) Reset() {
*x = DeletePreAuthKeyRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePreAuthKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePreAuthKeyRequest) ProtoMessage() {}
func (x *DeletePreAuthKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePreAuthKeyRequest.ProtoReflect.Descriptor instead.
func (*DeletePreAuthKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5}
}
func (x *DeletePreAuthKeyRequest) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
type DeletePreAuthKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePreAuthKeyResponse) Reset() {
*x = DeletePreAuthKeyResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePreAuthKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePreAuthKeyResponse) ProtoMessage() {}
func (x *DeletePreAuthKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePreAuthKeyResponse.ProtoReflect.Descriptor instead.
func (*DeletePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6}
}
type ListPreAuthKeysRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListPreAuthKeysRequest) Reset() {
*x = ListPreAuthKeysRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListPreAuthKeysRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListPreAuthKeysRequest) ProtoMessage() {}
func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListPreAuthKeysRequest.ProtoReflect.Descriptor instead.
func (*ListPreAuthKeysRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{7}
}
type ListPreAuthKeysResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
PreAuthKeys []*PreAuthKey `protobuf:"bytes,1,rep,name=pre_auth_keys,json=preAuthKeys,proto3" json:"pre_auth_keys,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListPreAuthKeysResponse) Reset() {
*x = ListPreAuthKeysResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListPreAuthKeysResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListPreAuthKeysResponse) ProtoMessage() {}
func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListPreAuthKeysResponse.ProtoReflect.Descriptor instead.
func (*ListPreAuthKeysResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{8}
}
func (x *ListPreAuthKeysResponse) GetPreAuthKeys() []*PreAuthKey {
if x != nil {
return x.PreAuthKeys
}
return nil
}
var File_headscale_v1_preauthkey_proto protoreflect.FileDescriptor
const file_headscale_v1_preauthkey_proto_rawDesc = "" +
"\n" +
"\x1dheadscale/v1/preauthkey.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x17headscale/v1/user.proto\"\xb6\x02\n" +
"\n" +
"PreAuthKey\x12&\n" +
"\x04user\x18\x01 \x01(\v2\x12.headscale.v1.UserR\x04user\x12\x0e\n" +
"\x02id\x18\x02 \x01(\x04R\x02id\x12\x10\n" +
"\x03key\x18\x03 \x01(\tR\x03key\x12\x1a\n" +
"\breusable\x18\x04 \x01(\bR\breusable\x12\x1c\n" +
"\tephemeral\x18\x05 \x01(\bR\tephemeral\x12\x12\n" +
"\x04used\x18\x06 \x01(\bR\x04used\x12:\n" +
"\n" +
"expiration\x18\a \x01(\v2\x1a.google.protobuf.TimestampR\n" +
"expiration\x129\n" +
"\n" +
"created_at\x18\b \x01(\v2\x1a.google.protobuf.TimestampR\tcreatedAt\x12\x19\n" +
"\bacl_tags\x18\t \x03(\tR\aaclTags\"\xbe\x01\n" +
"\x17CreatePreAuthKeyRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\x12\x1a\n" +
"\breusable\x18\x02 \x01(\bR\breusable\x12\x1c\n" +
"\tephemeral\x18\x03 \x01(\bR\tephemeral\x12:\n" +
"\n" +
"expiration\x18\x04 \x01(\v2\x1a.google.protobuf.TimestampR\n" +
"expiration\x12\x19\n" +
"\bacl_tags\x18\x05 \x03(\tR\aaclTags\"V\n" +
"\x18CreatePreAuthKeyResponse\x12:\n" +
"\fpre_auth_key\x18\x01 \x01(\v2\x18.headscale.v1.PreAuthKeyR\n" +
"preAuthKey\")\n" +
"\x17ExpirePreAuthKeyRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\"\x1a\n" +
"\x18ExpirePreAuthKeyResponse\")\n" +
"\x17DeletePreAuthKeyRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\"\x1a\n" +
"\x18DeletePreAuthKeyResponse\"\x18\n" +
"\x16ListPreAuthKeysRequest\"W\n" +
"\x17ListPreAuthKeysResponse\x12<\n" +
"\rpre_auth_keys\x18\x01 \x03(\v2\x18.headscale.v1.PreAuthKeyR\vpreAuthKeysB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_preauthkey_proto_rawDescOnce sync.Once
file_headscale_v1_preauthkey_proto_rawDescData []byte
)
func file_headscale_v1_preauthkey_proto_rawDescGZIP() []byte {
file_headscale_v1_preauthkey_proto_rawDescOnce.Do(func() {
file_headscale_v1_preauthkey_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_preauthkey_proto_rawDesc), len(file_headscale_v1_preauthkey_proto_rawDesc)))
})
return file_headscale_v1_preauthkey_proto_rawDescData
}
var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_headscale_v1_preauthkey_proto_goTypes = []any{
(*PreAuthKey)(nil), // 0: headscale.v1.PreAuthKey
(*CreatePreAuthKeyRequest)(nil), // 1: headscale.v1.CreatePreAuthKeyRequest
(*CreatePreAuthKeyResponse)(nil), // 2: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyRequest)(nil), // 3: headscale.v1.ExpirePreAuthKeyRequest
(*ExpirePreAuthKeyResponse)(nil), // 4: headscale.v1.ExpirePreAuthKeyResponse
(*DeletePreAuthKeyRequest)(nil), // 5: headscale.v1.DeletePreAuthKeyRequest
(*DeletePreAuthKeyResponse)(nil), // 6: headscale.v1.DeletePreAuthKeyResponse
(*ListPreAuthKeysRequest)(nil), // 7: headscale.v1.ListPreAuthKeysRequest
(*ListPreAuthKeysResponse)(nil), // 8: headscale.v1.ListPreAuthKeysResponse
(*User)(nil), // 9: headscale.v1.User
(*timestamppb.Timestamp)(nil), // 10: google.protobuf.Timestamp
}
var file_headscale_v1_preauthkey_proto_depIdxs = []int32{
9, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User
10, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp
10, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp
10, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey
0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey
6, // [6:6] is the sub-list for method output_type
6, // [6:6] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
}
func init() { file_headscale_v1_preauthkey_proto_init() }
func file_headscale_v1_preauthkey_proto_init() {
if File_headscale_v1_preauthkey_proto != nil {
return
}
file_headscale_v1_user_proto_init()
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_preauthkey_proto_rawDesc), len(file_headscale_v1_preauthkey_proto_rawDesc)),
NumEnums: 0,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_preauthkey_proto_goTypes,
DependencyIndexes: file_headscale_v1_preauthkey_proto_depIdxs,
MessageInfos: file_headscale_v1_preauthkey_proto_msgTypes,
}.Build()
File_headscale_v1_preauthkey_proto = out.File
file_headscale_v1_preauthkey_proto_goTypes = nil
file_headscale_v1_preauthkey_proto_depIdxs = nil
}
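
[Editor's note] Note the asymmetry visible in the structs above: PreAuthKey embeds a full User message, while CreatePreAuthKeyRequest references the user by numeric id (a uint64 field). A construction sketch, illustrative only — the id, duration, and tag name are placeholders; it assumes the same imports as the earlier sketches plus time and timestamppb:

func createPreAuthKeyExample() {
	req := &v1.CreatePreAuthKeyRequest{
		User:       1, // numeric user id, not a user name
		Reusable:   true,
		Expiration: timestamppb.New(time.Now().Add(24 * time.Hour)),
		AclTags:    []string{"tag:server"}, // placeholder tag
	}
	_ = req
}
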
================================================
FILE: gen/go/headscale/v1/user.pb.go
================================================
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/user.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type User struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
DisplayName string `protobuf:"bytes,4,opt,name=display_name,json=displayName,proto3" json:"display_name,omitempty"`
Email string `protobuf:"bytes,5,opt,name=email,proto3" json:"email,omitempty"`
ProviderId string `protobuf:"bytes,6,opt,name=provider_id,json=providerId,proto3" json:"provider_id,omitempty"`
Provider string `protobuf:"bytes,7,opt,name=provider,proto3" json:"provider,omitempty"`
ProfilePicUrl string `protobuf:"bytes,8,opt,name=profile_pic_url,json=profilePicUrl,proto3" json:"profile_pic_url,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *User) Reset() {
*x = User{}
mi := &file_headscale_v1_user_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *User) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*User) ProtoMessage() {}
func (x *User) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use User.ProtoReflect.Descriptor instead.
func (*User) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{0}
}
func (x *User) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *User) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *User) GetCreatedAt() *timestamppb.Timestamp {
if x != nil {
return x.CreatedAt
}
return nil
}
func (x *User) GetDisplayName() string {
if x != nil {
return x.DisplayName
}
return ""
}
func (x *User) GetEmail() string {
if x != nil {
return x.Email
}
return ""
}
func (x *User) GetProviderId() string {
if x != nil {
return x.ProviderId
}
return ""
}
func (x *User) GetProvider() string {
if x != nil {
return x.Provider
}
return ""
}
func (x *User) GetProfilePicUrl() string {
if x != nil {
return x.ProfilePicUrl
}
return ""
}
type CreateUserRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
DisplayName string `protobuf:"bytes,2,opt,name=display_name,json=displayName,proto3" json:"display_name,omitempty"`
Email string `protobuf:"bytes,3,opt,name=email,proto3" json:"email,omitempty"`
PictureUrl string `protobuf:"bytes,4,opt,name=picture_url,json=pictureUrl,proto3" json:"picture_url,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreateUserRequest) Reset() {
*x = CreateUserRequest{}
mi := &file_headscale_v1_user_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreateUserRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateUserRequest) ProtoMessage() {}
func (x *CreateUserRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateUserRequest.ProtoReflect.Descriptor instead.
func (*CreateUserRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{1}
}
func (x *CreateUserRequest) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *CreateUserRequest) GetDisplayName() string {
if x != nil {
return x.DisplayName
}
return ""
}
func (x *CreateUserRequest) GetEmail() string {
if x != nil {
return x.Email
}
return ""
}
func (x *CreateUserRequest) GetPictureUrl() string {
if x != nil {
return x.PictureUrl
}
return ""
}
type CreateUserResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
User *User `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CreateUserResponse) Reset() {
*x = CreateUserResponse{}
mi := &file_headscale_v1_user_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CreateUserResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateUserResponse) ProtoMessage() {}
func (x *CreateUserResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateUserResponse.ProtoReflect.Descriptor instead.
func (*CreateUserResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{2}
}
func (x *CreateUserResponse) GetUser() *User {
if x != nil {
return x.User
}
return nil
}
type RenameUserRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
OldId uint64 `protobuf:"varint,1,opt,name=old_id,json=oldId,proto3" json:"old_id,omitempty"`
NewName string `protobuf:"bytes,2,opt,name=new_name,json=newName,proto3" json:"new_name,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RenameUserRequest) Reset() {
*x = RenameUserRequest{}
mi := &file_headscale_v1_user_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RenameUserRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RenameUserRequest) ProtoMessage() {}
func (x *RenameUserRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RenameUserRequest.ProtoReflect.Descriptor instead.
func (*RenameUserRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{3}
}
func (x *RenameUserRequest) GetOldId() uint64 {
if x != nil {
return x.OldId
}
return 0
}
func (x *RenameUserRequest) GetNewName() string {
if x != nil {
return x.NewName
}
return ""
}
type RenameUserResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
User *User `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *RenameUserResponse) Reset() {
*x = RenameUserResponse{}
mi := &file_headscale_v1_user_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RenameUserResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RenameUserResponse) ProtoMessage() {}
func (x *RenameUserResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RenameUserResponse.ProtoReflect.Descriptor instead.
func (*RenameUserResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{4}
}
func (x *RenameUserResponse) GetUser() *User {
if x != nil {
return x.User
}
return nil
}
type DeleteUserRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteUserRequest) Reset() {
*x = DeleteUserRequest{}
mi := &file_headscale_v1_user_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteUserRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteUserRequest) ProtoMessage() {}
func (x *DeleteUserRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteUserRequest.ProtoReflect.Descriptor instead.
func (*DeleteUserRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{5}
}
func (x *DeleteUserRequest) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
type DeleteUserResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeleteUserResponse) Reset() {
*x = DeleteUserResponse{}
mi := &file_headscale_v1_user_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeleteUserResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeleteUserResponse) ProtoMessage() {}
func (x *DeleteUserResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeleteUserResponse.ProtoReflect.Descriptor instead.
func (*DeleteUserResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{6}
}
type ListUsersRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
Email string `protobuf:"bytes,3,opt,name=email,proto3" json:"email,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListUsersRequest) Reset() {
*x = ListUsersRequest{}
mi := &file_headscale_v1_user_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListUsersRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListUsersRequest) ProtoMessage() {}
func (x *ListUsersRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListUsersRequest.ProtoReflect.Descriptor instead.
func (*ListUsersRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{7}
}
func (x *ListUsersRequest) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *ListUsersRequest) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *ListUsersRequest) GetEmail() string {
if x != nil {
return x.Email
}
return ""
}
type ListUsersResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Users []*User `protobuf:"bytes,1,rep,name=users,proto3" json:"users,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ListUsersResponse) Reset() {
*x = ListUsersResponse{}
mi := &file_headscale_v1_user_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ListUsersResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListUsersResponse) ProtoMessage() {}
func (x *ListUsersResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_user_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListUsersResponse.ProtoReflect.Descriptor instead.
func (*ListUsersResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_user_proto_rawDescGZIP(), []int{8}
}
func (x *ListUsersResponse) GetUsers() []*User {
if x != nil {
return x.Users
}
return nil
}
var File_headscale_v1_user_proto protoreflect.FileDescriptor
const file_headscale_v1_user_proto_rawDesc = "" +
"\n" +
"\x17headscale/v1/user.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\"\x83\x02\n" +
"\x04User\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\x12\x12\n" +
"\x04name\x18\x02 \x01(\tR\x04name\x129\n" +
"\n" +
"created_at\x18\x03 \x01(\v2\x1a.google.protobuf.TimestampR\tcreatedAt\x12!\n" +
"\fdisplay_name\x18\x04 \x01(\tR\vdisplayName\x12\x14\n" +
"\x05email\x18\x05 \x01(\tR\x05email\x12\x1f\n" +
"\vprovider_id\x18\x06 \x01(\tR\n" +
"providerId\x12\x1a\n" +
"\bprovider\x18\a \x01(\tR\bprovider\x12&\n" +
"\x0fprofile_pic_url\x18\b \x01(\tR\rprofilePicUrl\"\x81\x01\n" +
"\x11CreateUserRequest\x12\x12\n" +
"\x04name\x18\x01 \x01(\tR\x04name\x12!\n" +
"\fdisplay_name\x18\x02 \x01(\tR\vdisplayName\x12\x14\n" +
"\x05email\x18\x03 \x01(\tR\x05email\x12\x1f\n" +
"\vpicture_url\x18\x04 \x01(\tR\n" +
"pictureUrl\"<\n" +
"\x12CreateUserResponse\x12&\n" +
"\x04user\x18\x01 \x01(\v2\x12.headscale.v1.UserR\x04user\"E\n" +
"\x11RenameUserRequest\x12\x15\n" +
"\x06old_id\x18\x01 \x01(\x04R\x05oldId\x12\x19\n" +
"\bnew_name\x18\x02 \x01(\tR\anewName\"<\n" +
"\x12RenameUserResponse\x12&\n" +
"\x04user\x18\x01 \x01(\v2\x12.headscale.v1.UserR\x04user\"#\n" +
"\x11DeleteUserRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\"\x14\n" +
"\x12DeleteUserResponse\"L\n" +
"\x10ListUsersRequest\x12\x0e\n" +
"\x02id\x18\x01 \x01(\x04R\x02id\x12\x12\n" +
"\x04name\x18\x02 \x01(\tR\x04name\x12\x14\n" +
"\x05email\x18\x03 \x01(\tR\x05email\"=\n" +
"\x11ListUsersResponse\x12(\n" +
"\x05users\x18\x01 \x03(\v2\x12.headscale.v1.UserR\x05usersB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_user_proto_rawDescOnce sync.Once
file_headscale_v1_user_proto_rawDescData []byte
)
func file_headscale_v1_user_proto_rawDescGZIP() []byte {
file_headscale_v1_user_proto_rawDescOnce.Do(func() {
file_headscale_v1_user_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_user_proto_rawDesc), len(file_headscale_v1_user_proto_rawDesc)))
})
return file_headscale_v1_user_proto_rawDescData
}
var file_headscale_v1_user_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_headscale_v1_user_proto_goTypes = []any{
(*User)(nil), // 0: headscale.v1.User
(*CreateUserRequest)(nil), // 1: headscale.v1.CreateUserRequest
(*CreateUserResponse)(nil), // 2: headscale.v1.CreateUserResponse
(*RenameUserRequest)(nil), // 3: headscale.v1.RenameUserRequest
(*RenameUserResponse)(nil), // 4: headscale.v1.RenameUserResponse
(*DeleteUserRequest)(nil), // 5: headscale.v1.DeleteUserRequest
(*DeleteUserResponse)(nil), // 6: headscale.v1.DeleteUserResponse
(*ListUsersRequest)(nil), // 7: headscale.v1.ListUsersRequest
(*ListUsersResponse)(nil), // 8: headscale.v1.ListUsersResponse
(*timestamppb.Timestamp)(nil), // 9: google.protobuf.Timestamp
}
var file_headscale_v1_user_proto_depIdxs = []int32{
9, // 0: headscale.v1.User.created_at:type_name -> google.protobuf.Timestamp
0, // 1: headscale.v1.CreateUserResponse.user:type_name -> headscale.v1.User
0, // 2: headscale.v1.RenameUserResponse.user:type_name -> headscale.v1.User
0, // 3: headscale.v1.ListUsersResponse.users:type_name -> headscale.v1.User
4, // [4:4] is the sub-list for method output_type
4, // [4:4] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
func init() { file_headscale_v1_user_proto_init() }
func file_headscale_v1_user_proto_init() {
if File_headscale_v1_user_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_user_proto_rawDesc), len(file_headscale_v1_user_proto_rawDesc)),
NumEnums: 0,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_user_proto_goTypes,
DependencyIndexes: file_headscale_v1_user_proto_depIdxs,
MessageInfos: file_headscale_v1_user_proto_msgTypes,
}.Build()
File_headscale_v1_user_proto = out.File
file_headscale_v1_user_proto_goTypes = nil
file_headscale_v1_user_proto_depIdxs = nil
}
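
[Editor's note] Every generated getter above guards against a nil receiver (the `if x != nil` check in, e.g., User.GetName), so accessors never panic even on a nil message. A two-line demonstration, illustrative only, using the same imports as the earlier sketches:

func nilSafeGetterExample() {
	var u *v1.User // deliberately nil
	fmt.Println(u.GetName() == "")      // true: scalar getters return the zero value
	fmt.Println(u.GetCreatedAt() == nil) // true: message getters return nil
}
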
================================================
FILE: gen/openapiv2/headscale/v1/apikey.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/apikey.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
================================================
FILE: gen/openapiv2/headscale/v1/auth.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/auth.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
================================================
FILE: gen/openapiv2/headscale/v1/device.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/device.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
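
[Editor's note] The apikey, auth, and device swagger files above all have an empty "paths" object: protoc-gen-openapiv2 emits one file per .proto, and only headscale.proto (next file) defines the service with HTTP bindings, so only headscale.swagger.json lists routes. A sketch of calling one of those routes from Go, illustrative only — the listen address and the Bearer-token authorization scheme are assumptions, not taken from these files:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// GET /api/v1/node is one of the gateway routes in headscale.swagger.json.
	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/api/v1/node", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer API_KEY_PLACEHOLDER") // assumed auth scheme
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
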
================================================
FILE: gen/openapiv2/headscale/v1/headscale.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/headscale.proto",
"version": "version not set"
},
"tags": [
{
"name": "HeadscaleService"
}
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/api/v1/apikey": {
"get": {
"operationId": "HeadscaleService_ListApiKeys",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ListApiKeysResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"tags": [
"HeadscaleService"
]
},
"post": {
"summary": "--- ApiKeys start ---",
"operationId": "HeadscaleService_CreateApiKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1CreateApiKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1CreateApiKeyRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/apikey/expire": {
"post": {
"operationId": "HeadscaleService_ExpireApiKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ExpireApiKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1ExpireApiKeyRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/apikey/{prefix}": {
"delete": {
"operationId": "HeadscaleService_DeleteApiKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DeleteApiKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "prefix",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "id",
"in": "query",
"required": false,
"type": "string",
"format": "uint64"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/auth/approve": {
"post": {
"operationId": "HeadscaleService_AuthApprove",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1AuthApproveResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1AuthApproveRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/auth/register": {
"post": {
"summary": "--- Auth start ---",
"operationId": "HeadscaleService_AuthRegister",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1AuthRegisterResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1AuthRegisterRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/auth/reject": {
"post": {
"operationId": "HeadscaleService_AuthReject",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1AuthRejectResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1AuthRejectRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/debug/node": {
"post": {
"summary": "--- Node start ---",
"operationId": "HeadscaleService_DebugCreateNode",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DebugCreateNodeResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1DebugCreateNodeRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/health": {
"get": {
"summary": "--- Health start ---",
"operationId": "HeadscaleService_Health",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1HealthResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node": {
"get": {
"operationId": "HeadscaleService_ListNodes",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ListNodesResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "user",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/backfillips": {
"post": {
"operationId": "HeadscaleService_BackfillNodeIPs",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1BackfillNodeIPsResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "confirmed",
"in": "query",
"required": false,
"type": "boolean"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/register": {
"post": {
"operationId": "HeadscaleService_RegisterNode",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1RegisterNodeResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "user",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "key",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/{nodeId}": {
"get": {
"operationId": "HeadscaleService_GetNode",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1GetNodeResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "nodeId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
}
],
"tags": [
"HeadscaleService"
]
},
"delete": {
"operationId": "HeadscaleService_DeleteNode",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DeleteNodeResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "nodeId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/{nodeId}/approve_routes": {
"post": {
"operationId": "HeadscaleService_SetApprovedRoutes",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1SetApprovedRoutesResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "nodeId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/HeadscaleServiceSetApprovedRoutesBody"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/{nodeId}/expire": {
"post": {
"operationId": "HeadscaleService_ExpireNode",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ExpireNodeResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "nodeId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
},
{
"name": "expiry",
"in": "query",
"required": false,
"type": "string",
"format": "date-time"
},
{
"name": "disableExpiry",
"description": "When true, sets expiry to null (node will never expire).",
"in": "query",
"required": false,
"type": "boolean"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/{nodeId}/rename/{newName}": {
"post": {
"operationId": "HeadscaleService_RenameNode",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1RenameNodeResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "nodeId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
},
{
"name": "newName",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node/{nodeId}/tags": {
"post": {
"operationId": "HeadscaleService_SetTags",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1SetTagsResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "nodeId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/HeadscaleServiceSetTagsBody"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/policy": {
"get": {
"summary": "--- Policy start ---",
"operationId": "HeadscaleService_GetPolicy",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1GetPolicyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"tags": [
"HeadscaleService"
]
},
"put": {
"operationId": "HeadscaleService_SetPolicy",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1SetPolicyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1SetPolicyRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/preauthkey": {
"get": {
"operationId": "HeadscaleService_ListPreAuthKeys",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ListPreAuthKeysResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"tags": [
"HeadscaleService"
]
},
"delete": {
"operationId": "HeadscaleService_DeletePreAuthKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DeletePreAuthKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "id",
"in": "query",
"required": false,
"type": "string",
"format": "uint64"
}
],
"tags": [
"HeadscaleService"
]
},
"post": {
"summary": "--- PreAuthKeys start ---",
"operationId": "HeadscaleService_CreatePreAuthKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1CreatePreAuthKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1CreatePreAuthKeyRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/preauthkey/expire": {
"post": {
"operationId": "HeadscaleService_ExpirePreAuthKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ExpirePreAuthKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1ExpirePreAuthKeyRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/user": {
"get": {
"operationId": "HeadscaleService_ListUsers",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1ListUsersResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "id",
"in": "query",
"required": false,
"type": "string",
"format": "uint64"
},
{
"name": "name",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "email",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
},
"post": {
"summary": "--- User start ---",
"operationId": "HeadscaleService_CreateUser",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1CreateUserResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1CreateUserRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/user/{id}": {
"delete": {
"operationId": "HeadscaleService_DeleteUser",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DeleteUserResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "id",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/user/{oldId}/rename/{newName}": {
"post": {
"operationId": "HeadscaleService_RenameUser",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1RenameUserResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "oldId",
"in": "path",
"required": true,
"type": "string",
"format": "uint64"
},
{
"name": "newName",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
}
}
},
"definitions": {
"HeadscaleServiceSetApprovedRoutesBody": {
"type": "object",
"properties": {
"routes": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"HeadscaleServiceSetTagsBody": {
"type": "object",
"properties": {
"tags": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
},
"v1ApiKey": {
"type": "object",
"properties": {
"id": {
"type": "string",
"format": "uint64"
},
"prefix": {
"type": "string"
},
"expiration": {
"type": "string",
"format": "date-time"
},
"createdAt": {
"type": "string",
"format": "date-time"
},
"lastSeen": {
"type": "string",
"format": "date-time"
}
}
},
"v1AuthApproveRequest": {
"type": "object",
"properties": {
"authId": {
"type": "string"
}
}
},
"v1AuthApproveResponse": {
"type": "object"
},
"v1AuthRegisterRequest": {
"type": "object",
"properties": {
"user": {
"type": "string"
},
"authId": {
"type": "string"
}
}
},
"v1AuthRegisterResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1AuthRejectRequest": {
"type": "object",
"properties": {
"authId": {
"type": "string"
}
}
},
"v1AuthRejectResponse": {
"type": "object"
},
"v1BackfillNodeIPsResponse": {
"type": "object",
"properties": {
"changes": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1CreateApiKeyRequest": {
"type": "object",
"properties": {
"expiration": {
"type": "string",
"format": "date-time"
}
}
},
"v1CreateApiKeyResponse": {
"type": "object",
"properties": {
"apiKey": {
"type": "string"
}
}
},
"v1CreatePreAuthKeyRequest": {
"type": "object",
"properties": {
"user": {
"type": "string",
"format": "uint64"
},
"reusable": {
"type": "boolean"
},
"ephemeral": {
"type": "boolean"
},
"expiration": {
"type": "string",
"format": "date-time"
},
"aclTags": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1CreatePreAuthKeyResponse": {
"type": "object",
"properties": {
"preAuthKey": {
"$ref": "#/definitions/v1PreAuthKey"
}
}
},
"v1CreateUserRequest": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"displayName": {
"type": "string"
},
"email": {
"type": "string"
},
"pictureUrl": {
"type": "string"
}
}
},
"v1CreateUserResponse": {
"type": "object",
"properties": {
"user": {
"$ref": "#/definitions/v1User"
}
}
},
"v1DebugCreateNodeRequest": {
"type": "object",
"properties": {
"user": {
"type": "string"
},
"key": {
"type": "string"
},
"name": {
"type": "string"
},
"routes": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1DebugCreateNodeResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1DeleteApiKeyResponse": {
"type": "object"
},
"v1DeleteNodeResponse": {
"type": "object"
},
"v1DeletePreAuthKeyResponse": {
"type": "object"
},
"v1DeleteUserResponse": {
"type": "object"
},
"v1ExpireApiKeyRequest": {
"type": "object",
"properties": {
"prefix": {
"type": "string"
},
"id": {
"type": "string",
"format": "uint64"
}
}
},
"v1ExpireApiKeyResponse": {
"type": "object"
},
"v1ExpireNodeResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1ExpirePreAuthKeyRequest": {
"type": "object",
"properties": {
"id": {
"type": "string",
"format": "uint64"
}
}
},
"v1ExpirePreAuthKeyResponse": {
"type": "object"
},
"v1GetNodeResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1GetPolicyResponse": {
"type": "object",
"properties": {
"policy": {
"type": "string"
},
"updatedAt": {
"type": "string",
"format": "date-time"
}
}
},
"v1HealthResponse": {
"type": "object",
"properties": {
"databaseConnectivity": {
"type": "boolean"
}
}
},
"v1ListApiKeysResponse": {
"type": "object",
"properties": {
"apiKeys": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/v1ApiKey"
}
}
}
},
"v1ListNodesResponse": {
"type": "object",
"properties": {
"nodes": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/v1Node"
}
}
}
},
"v1ListPreAuthKeysResponse": {
"type": "object",
"properties": {
"preAuthKeys": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/v1PreAuthKey"
}
}
}
},
"v1ListUsersResponse": {
"type": "object",
"properties": {
"users": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/v1User"
}
}
}
},
"v1Node": {
"type": "object",
"properties": {
"id": {
"type": "string",
"format": "uint64"
},
"machineKey": {
"type": "string"
},
"nodeKey": {
"type": "string"
},
"discoKey": {
"type": "string"
},
"ipAddresses": {
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
},
"user": {
"$ref": "#/definitions/v1User"
},
"lastSeen": {
"type": "string",
"format": "date-time"
},
"expiry": {
"type": "string",
"format": "date-time"
},
"preAuthKey": {
"$ref": "#/definitions/v1PreAuthKey"
},
"createdAt": {
"type": "string",
"format": "date-time"
},
"registerMethod": {
"$ref": "#/definitions/v1RegisterMethod"
},
"givenName": {
"type": "string",
"title": "Deprecated\nrepeated string forced_tags = 18;\nrepeated string invalid_tags = 19;\nrepeated string valid_tags = 20;"
},
"online": {
"type": "boolean"
},
"approvedRoutes": {
"type": "array",
"items": {
"type": "string"
}
},
"availableRoutes": {
"type": "array",
"items": {
"type": "string"
}
},
"subnetRoutes": {
"type": "array",
"items": {
"type": "string"
}
},
"tags": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1PreAuthKey": {
"type": "object",
"properties": {
"user": {
"$ref": "#/definitions/v1User"
},
"id": {
"type": "string",
"format": "uint64"
},
"key": {
"type": "string"
},
"reusable": {
"type": "boolean"
},
"ephemeral": {
"type": "boolean"
},
"used": {
"type": "boolean"
},
"expiration": {
"type": "string",
"format": "date-time"
},
"createdAt": {
"type": "string",
"format": "date-time"
},
"aclTags": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1RegisterMethod": {
"type": "string",
"enum": [
"REGISTER_METHOD_UNSPECIFIED",
"REGISTER_METHOD_AUTH_KEY",
"REGISTER_METHOD_CLI",
"REGISTER_METHOD_OIDC"
],
"default": "REGISTER_METHOD_UNSPECIFIED"
},
"v1RegisterNodeResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1RenameNodeResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1RenameUserResponse": {
"type": "object",
"properties": {
"user": {
"$ref": "#/definitions/v1User"
}
}
},
"v1SetApprovedRoutesResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1SetPolicyRequest": {
"type": "object",
"properties": {
"policy": {
"type": "string"
}
}
},
"v1SetPolicyResponse": {
"type": "object",
"properties": {
"policy": {
"type": "string"
},
"updatedAt": {
"type": "string",
"format": "date-time"
}
}
},
"v1SetTagsResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1User": {
"type": "object",
"properties": {
"id": {
"type": "string",
"format": "uint64"
},
"name": {
"type": "string"
},
"createdAt": {
"type": "string",
"format": "date-time"
},
"displayName": {
"type": "string"
},
"email": {
"type": "string"
},
"providerId": {
"type": "string"
},
"provider": {
"type": "string"
},
"profilePicUrl": {
"type": "string"
}
}
}
}
}
================================================
FILE: gen/openapiv2/headscale/v1/node.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/node.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
================================================
FILE: gen/openapiv2/headscale/v1/policy.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/policy.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
================================================
FILE: gen/openapiv2/headscale/v1/preauthkey.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/preauthkey.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
================================================
FILE: gen/openapiv2/headscale/v1/user.swagger.json
================================================
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/user.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}
================================================
FILE: go.mod
================================================
module github.com/juanfont/headscale

go 1.26.1

require (
github.com/arl/statsviz v0.8.0
github.com/cenkalti/backoff/v5 v5.0.3
github.com/chasefleming/elem-go v0.31.0
github.com/coder/websocket v1.8.14
github.com/coreos/go-oidc/v3 v3.17.0
github.com/creachadair/command v0.2.0
github.com/creachadair/flax v0.0.5
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
github.com/docker/docker v28.5.2+incompatible
github.com/fsnotify/fsnotify v1.9.0
github.com/glebarez/sqlite v1.11.0
github.com/go-chi/chi/v5 v5.2.5
github.com/go-chi/metrics v0.1.1
github.com/go-gormigrate/gormigrate/v2 v2.1.5
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e
github.com/gofrs/uuid/v5 v5.4.0
github.com/google/go-cmp v0.7.0
github.com/gorilla/mux v1.8.1
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7
github.com/jagottsicher/termcolor v1.0.2
github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25
github.com/ory/dockertest/v3 v3.12.0
github.com/philip-bui/grpc-zerolog v1.0.1
github.com/pkg/profile v1.7.0
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/common v0.67.5
github.com/pterm/pterm v0.12.82
github.com/puzpuzpuz/xsync/v4 v4.4.0
github.com/rs/zerolog v1.34.0
github.com/samber/lo v1.52.0
github.com/sasha-s/go-deadlock v0.3.6
github.com/spf13/cobra v1.10.2
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a
github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f
github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09
github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba
golang.org/x/crypto v0.48.0
golang.org/x/exp v0.0.0-20260112195511-716be5621a96
golang.org/x/net v0.50.0
golang.org/x/oauth2 v0.34.0
golang.org/x/sync v0.19.0
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20
google.golang.org/grpc v1.78.0
google.golang.org/protobuf v1.36.11
gopkg.in/yaml.v3 v3.0.1
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.31.1
tailscale.com v1.94.1
zgo.at/zcache/v2 v2.4.1
zombiezen.com/go/postgrestest v1.0.1
)
// NOTE: modernc.org/sqlite has a fragile dependency
// chain: it and modernc.org/libc must be updated in
// lockstep, otherwise some architectures can break at
// runtime:
// https://github.com/juanfont/headscale/issues/2188
//
// Fragile libc dependency:
// https://pkg.go.dev/modernc.org/sqlite#hdr-Fragile_modernc_org_libc_dependency
// https://gitlab.com/cznic/sqlite/-/issues/177
//
// To upgrade, determine the new SQLite version to be
// used and consult its `go.mod` file:
// https://gitlab.com/cznic/sqlite/-/blob/master/go.mod
// to find the appropriate `libc` version, then upgrade
// them together, e.g.:
// go get modernc.org/libc@v1.55.3 modernc.org/sqlite@v1.33.1
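//
// A minimal sanity check after bumping both, sketched as
// shell commands (illustrative only, not an official
// procedure):
//   go list -m modernc.org/sqlite modernc.org/libc
//   go build ./...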
require (
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.44.3
)
// NOTE: gvisor must be updated in lockstep with
// tailscale.com: the version used here should match the
// gvisor.dev/gvisor version required by tailscale.com's
// own go.mod file:
// https://github.com/tailscale/tailscale/blob/main/go.mod
require gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 // indirect
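// One way to confirm the resolved gvisor version against
// what tailscale.com requires (illustrative commands, not
// an official procedure):
//   go list -m gvisor.dev/gvisor
//   go mod graph | grep ' gvisor.dev/gvisor@'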
require (
atomicgo.dev/cursor v0.2.0 // indirect
atomicgo.dev/keyboard v0.2.9 // indirect
atomicgo.dev/schedule v0.1.0 // indirect
dario.cat/mergo v1.0.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 // indirect
github.com/akutz/memconn v0.1.0 // indirect
github.com/alexbrainman/sspi v0.0.0-20250919150558-7d374ff0d59e // indirect
github.com/aws/aws-sdk-go-v2 v1.41.1 // indirect
github.com/aws/aws-sdk-go-v2/config v1.32.7 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.19.7 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.9 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.6 // indirect
github.com/aws/smithy-go v1.24.0 // indirect
github.com/axiomhq/hyperloglog v0.2.6 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/clipperhouse/stringish v0.1.1 // indirect
github.com/clipperhouse/uax29/v2 v2.5.0 // indirect
github.com/containerd/console v1.0.5 // indirect
github.com/containerd/continuity v0.4.5 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/creachadair/mds v0.25.15 // indirect
github.com/creachadair/msync v0.8.2 // indirect
github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d // indirect
github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/cli v29.2.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/felixge/fgprof v0.9.5 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/gaissmai/bart v0.26.1 // indirect
github.com/glebarez/go-sqlite v1.22.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
github.com/godbus/dbus/v5 v5.2.2 // indirect
github.com/golang-jwt/jwt/v5 v5.3.1 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/go-github v17.0.0+incompatible // indirect
github.com/google/go-querystring v1.2.0 // indirect
github.com/google/pprof v0.0.0-20260202012954-cb029daf43ef // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gookit/color v1.6.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/go-version v1.8.0 // indirect
github.com/hdevalence/ed25519consensus v0.2.0 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgx/v5 v5.8.0 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/jsimonetti/rtnetlink v1.4.2 // indirect
github.com/kamstrup/intmap v0.5.2 // indirect
github.com/klauspost/compress v1.18.3 // indirect
github.com/lib/pq v1.11.1 // indirect
github.com/lithammer/fuzzysearch v1.1.8 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.19 // indirect
github.com/mdlayher/netlink v1.8.0 // indirect
github.com/mdlayher/socket v0.5.1 // indirect
github.com/mitchellh/go-ps v1.0.0 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/moby/api v1.53.0 // indirect
github.com/moby/moby/client v0.2.2 // indirect
github.com/moby/sys/atomicwriter v0.1.0 // indirect
github.com/moby/sys/user v0.4.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/morikuni/aec v1.1.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/opencontainers/runc v1.3.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 // indirect
github.com/pires/go-proxyproto v0.9.2 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus-community/pro-bing v0.7.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/procfs v0.19.2 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/safchain/ethtool v0.7.0 // indirect
github.com/sagikazarmark/locafero v0.12.0 // indirect
github.com/sirupsen/logrus v1.9.4 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e // indirect
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 // indirect
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc // indirect
github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d // indirect
github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368 // indirect
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0 // indirect
go.opentelemetry.io/otel v1.40.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 // indirect
go.opentelemetry.io/otel/metric v1.40.0 // indirect
go.opentelemetry.io/otel/trace v1.40.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 // indirect
golang.org/x/mod v0.33.0 // indirect
golang.org/x/sys v0.41.0 // indirect
golang.org/x/term v0.40.0 // indirect
golang.org/x/text v0.34.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.42.0 // indirect
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
golang.zx2c4.com/wireguard/windows v0.5.3 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20 // indirect
)
tool (
golang.org/x/tools/cmd/stress
golang.org/x/tools/cmd/stringer
tailscale.com/cmd/viewer
)
================================================
FILE: go.sum
================================================
9fans.net/go v0.0.8-0.20250307142834-96bdba94b63f h1:1C7nZuxUMNz7eiQALRfiqNOm04+m3edWlRff/BYHf0Q=
9fans.net/go v0.0.8-0.20250307142834-96bdba94b63f/go.mod h1:hHyrZRryGqVdqrknjq5OWDLGCTJ2NeEvtrpR96mjraM=
atomicgo.dev/assert v0.0.2 h1:FiKeMiZSgRrZsPo9qn/7vmr7mCsh5SZyXY4YGYiYwrg=
atomicgo.dev/assert v0.0.2/go.mod h1:ut4NcI3QDdJtlmAxQULOmA13Gz6e2DWbSAS8RUOmNYQ=
atomicgo.dev/cursor v0.2.0 h1:H6XN5alUJ52FZZUkI7AlJbUc1aW38GWZalpYRPpoPOw=
atomicgo.dev/cursor v0.2.0/go.mod h1:Lr4ZJB3U7DfPPOkbH7/6TOtJ4vFGHlgj1nc+n900IpU=
atomicgo.dev/keyboard v0.2.9 h1:tOsIid3nlPLZ3lwgG8KZMp/SFmr7P0ssEN5JUsm78K8=
atomicgo.dev/keyboard v0.2.9/go.mod h1:BC4w9g00XkxH/f1HXhW2sXmJFOCWbKn9xrOunSFtExQ=
atomicgo.dev/schedule v0.1.0 h1:nTthAbhZS5YZmgYbb2+DH8uQIZcTlIrd4eYr3UQxEjs=
atomicgo.dev/schedule v0.1.0/go.mod h1:xeUa3oAkiuHYh8bKiQBRojqAMq3PXXbJujjb0hw8pEU=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc=
filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/MarvinJWendt/testza v0.1.0/go.mod h1:7AxNvlfeHP7Z/hDQ5JtE3OKYT3XFUeLCDE2DQninSqs=
github.com/MarvinJWendt/testza v0.2.1/go.mod h1:God7bhG8n6uQxwdScay+gjm9/LnO4D3kkcZX4hv9Rp8=
github.com/MarvinJWendt/testza v0.2.8/go.mod h1:nwIcjmr0Zz+Rcwfh3/4UhBp7ePKVhuBExvZqnKYWlII=
github.com/MarvinJWendt/testza v0.2.10/go.mod h1:pd+VWsoGUiFtq+hRKSU1Bktnn+DMCSrDrXDpX2bG66k=
github.com/MarvinJWendt/testza v0.2.12/go.mod h1:JOIegYyV7rX+7VZ9r77L/eH6CfJHHzXjB69adAhzZkI=
github.com/MarvinJWendt/testza v0.3.0/go.mod h1:eFcL4I0idjtIx8P9C6KkAuLgATNKpX4/2oUqKc6bF2c=
github.com/MarvinJWendt/testza v0.4.2/go.mod h1:mSdhXiKH8sg/gQehJ63bINcCKp7RtYewEjXsvsVUPbE=
github.com/MarvinJWendt/testza v0.5.2 h1:53KDo64C1z/h/d/stCYCPY69bt/OSwjq5KpFNwi+zB4=
github.com/MarvinJWendt/testza v0.5.2/go.mod h1:xu53QFE5sCdjtMCKk8YMQ2MnymimEctc4n3EjyIYvEY=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/akutz/memconn v0.1.0 h1:NawI0TORU4hcOMsMr11g7vwlCdkYeLKXBcxWu2W/P8A=
github.com/akutz/memconn v0.1.0/go.mod h1:Jo8rI7m0NieZyLI5e2CDlRdRqRRB4S7Xp77ukDjH+Fw=
github.com/alexbrainman/sspi v0.0.0-20250919150558-7d374ff0d59e h1:4dAU9FXIyQktpoUAgOJK3OTFc/xug0PCXYCqU0FgDKI=
github.com/alexbrainman/sspi v0.0.0-20250919150558-7d374ff0d59e/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/arl/statsviz v0.8.0 h1:O6GjjVxEDxcByAucOSl29HaGYLXsuwA3ujJw8H9E7/U=
github.com/arl/statsviz v0.8.0/go.mod h1:XlrbiT7xYT03xaW9JMMfD8KFUhBOESJwfyNJu83PbB0=
github.com/atomicgo/cursor v0.0.1/go.mod h1:cBON2QmmrysudxNBFthvMtN32r3jxVRIvzkUiF/RuIk=
github.com/aws/aws-sdk-go-v2 v1.41.1 h1:ABlyEARCDLN034NhxlRUSZr4l71mh+T5KAeGh6cerhU=
github.com/aws/aws-sdk-go-v2 v1.41.1/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4 h1:489krEF9xIGkOaaX3CE/Be2uWjiXrkCH6gUX+bZA/BU=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4/go.mod h1:IOAPF6oT9KCsceNTvvYMNHy0+kMF8akOjeDvPENWxp4=
github.com/aws/aws-sdk-go-v2/config v1.32.7 h1:vxUyWGUwmkQ2g19n7JY/9YL8MfAIl7bTesIUykECXmY=
github.com/aws/aws-sdk-go-v2/config v1.32.7/go.mod h1:2/Qm5vKUU/r7Y+zUk/Ptt2MDAEKAfUtKc1+3U1Mo3oY=
github.com/aws/aws-sdk-go-v2/credentials v1.19.7 h1:tHK47VqqtJxOymRrNtUXN5SP/zUTvZKeLx4tH6PGQc8=
github.com/aws/aws-sdk-go-v2/credentials v1.19.7/go.mod h1:qOZk8sPDrxhf+4Wf4oT2urYJrYt3RejHSzgAquYeppw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17 h1:I0GyV8wiYrP8XpA70g1HBcQO1JlQxCMTW9npl5UbDHY=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17/go.mod h1:tyw7BOl5bBe/oqvoIeECFJjMdzXoa/dfVz3QQ5lgHGA=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17 h1:xOLELNKGp2vsiteLsvLPwxC+mYmO6OZ8PYgiuPJzF8U=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17/go.mod h1:5M5CI3D12dNOtH3/mk6minaRwI2/37ifCURZISxA/IQ=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17 h1:WWLqlh79iO48yLkj1v3ISRNiv+3KdQoZ6JWyfcsyQik=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17/go.mod h1:EhG22vHRrvF8oXSTYStZhJc1aUgKtnJe+aOiFEV90cM=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.16 h1:CjMzUs78RDDv4ROu3JnJn/Ig1r6ZD7/T2DXLLRpejic=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.16/go.mod h1:uVW4OLBqbJXSHJYA9svT9BluSvvwbzLQ2Crf6UPzR3c=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 h1:0ryTNEdJbzUCEWkVXEXoqlXV72J5keC1GvILMOuD00E=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4/go.mod h1:HQ4qwNZh32C3CBeO6iJLQlgtMzqeG17ziAA/3KDJFow=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7 h1:DIBqIrJ7hv+e4CmIk2z3pyKT+3B6qVMgRsawHiR3qso=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7/go.mod h1:vLm00xmBke75UmpNvOcZQ/Q30ZFjbczeLFqGx5urmGo=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17 h1:RuNSMoozM8oXlgLG/n6WLaFGoea7/CddrCfIiSA+xdY=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17/go.mod h1:F2xxQ9TZz5gDWsclCtPQscGpP0VUOc8RqgFM3vDENmU=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16 h1:NSbvS17MlI2lurYgXnCOLvCFX38sBW4eiVER7+kkgsU=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16/go.mod h1:SwT8Tmqd4sA6G1qaGdzWCJN99bUmPGHfRwwq3G5Qb+A=
github.com/aws/aws-sdk-go-v2/service/s3 v1.93.2 h1:U3ygWUhCpiSPYSHOrRhb3gOl9T5Y3kB8k5Vjs//57bE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.93.2/go.mod h1:79S2BdqCJpScXZA2y+cpZuocWsjGjJINyXnOsf5DTz8=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.5 h1:VrhDvQib/i0lxvr3zqlUwLwJP4fpmpyD9wYG1vfSu+Y=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.5/go.mod h1:k029+U8SY30/3/ras4G/Fnv/b88N4mAfliNn08Dem4M=
github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0 h1:IOdss+igJDFdic9w3WKwxGCmHqUxydvIhJOm9LJ32Dk=
github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0/go.mod h1:Q7XIWsMo0JcMpI/6TGD6XXcXcV1DbTj6e9BKNntIMIM=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.9 h1:v6EiMvhEYBoHABfbGB4alOYmCIrcgyPPiBE1wZAEbqk=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.9/go.mod h1:yifAsgBxgJWn3ggx70A3urX2AN49Y5sJTD1UQFlfqBw=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13 h1:gd84Omyu9JLriJVCbGApcLzVR3XtmC4ZDPcAI6Ftvds=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13/go.mod h1:sTGThjphYE4Ohw8vJiRStAcu3rbjtXRsdNB0TvZ5wwo=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.6 h1:5fFjR/ToSOzB2OQ/XqWpZBmNvmP/pJ1jOWYlFDJTjRQ=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.6/go.mod h1:qgFDZQSD/Kys7nJnVqYlWKnh0SSdMjAi0uSwON4wgYQ=
github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk=
github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.2.6 h1:sRhvvF3RIXWQgAXaTphLp4yJiX4S0IN3MWTaAgZoRJw=
github.com/axiomhq/hyperloglog v0.2.6/go.mod h1:YjX/dQqCR/7QYX0g8mu8UZAjpIenz1FKM71UEsjFoTo=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chasefleming/elem-go v0.31.0 h1:vZsuKmKdv6idnUbu3awMruxTiFqZ/ertFJFAyBCkVhI=
github.com/chasefleming/elem-go v0.31.0/go.mod h1:UBmmZfso2LkXA0HZInbcwsmhE/LXFClEcBPNCGeARtA=
github.com/chromedp/cdproto v0.0.0-20230802225258-3cf4e6d46a89/go.mod h1:GKljq0VrfU4D5yc+2qA6OVr8pmO/MBbPEWqWQ/oqGEs=
github.com/chromedp/chromedp v0.9.2/go.mod h1:LkSXJKONWTCHAfQasKFUZI+mxqS4tZqhmtGzzhLsnLs=
github.com/chromedp/sysutil v1.0.0/go.mod h1:kgWmDdq8fTzXYcKIBqIYvRRTnYb9aNS9moAV0xufSww=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/logex v1.2.1/go.mod h1:JLbx6lG2kDbNRFnfkgvh4eRJRPX1QCoOIWomwysCBrQ=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
github.com/cilium/ebpf v0.17.3 h1:FnP4r16PWYSE4ux6zN+//jMcW4nMVRvuTLVTvCjyyjg=
github.com/cilium/ebpf v0.17.3/go.mod h1:G5EDHij8yiLzaqn0WjyfJHvRa+3aDlReIaLVRMvOyJk=
github.com/clipperhouse/stringish v0.1.1 h1:+NSqMOr3GR6k1FdRhhnXrLfztGzuG+VuFDfatpWHKCs=
github.com/clipperhouse/stringish v0.1.1/go.mod h1:v/WhFtE1q0ovMta2+m+UbpZ+2/HEXNWYXQgCt4hdOzA=
github.com/clipperhouse/uax29/v2 v2.5.0 h1:x7T0T4eTHDONxFJsL94uKNKPHrclyFI0lm7+w94cO8U=
github.com/clipperhouse/uax29/v2 v2.5.0/go.mod h1:Wn1g7MK6OoeDT0vL+Q0SQLDz/KpfsVRgg6W7ihQeh4g=
github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/console v1.0.5 h1:R0ymNeydRqH2DmakFNdmjR2k0t7UPuiOV/N/27/qqsc=
github.com/containerd/console v1.0.5/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
github.com/containerd/continuity v0.4.5 h1:ZRoN1sXq9u7V6QoHMcVWGhOwDFqZ4B9i5H6un1Wh0x4=
github.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6AYUslN6c6iuZWTKsKxUFDlpnmilO6R2n0=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/coreos/go-oidc/v3 v3.17.0 h1:hWBGaQfbi0iVviX4ibC7bk8OKT5qNr4klBaCHVNvehc=
github.com/coreos/go-oidc/v3 v3.17.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creachadair/command v0.2.0 h1:qTA9cMMhZePAxFoNdnk6F6nn94s1qPndIg9hJbqI9cA=
github.com/creachadair/command v0.2.0/go.mod h1:j+Ar+uYnFsHpkMeV9kGj6lJ45y9u2xqtg8FYy6cm+0o=
github.com/creachadair/flax v0.0.5 h1:zt+CRuXQASxwQ68e9GHAOnEgAU29nF0zYMHOCrL5wzE=
github.com/creachadair/flax v0.0.5/go.mod h1:F1PML0JZLXSNDMNiRGK2yjm5f+L9QCHchyHBldFymj8=
github.com/creachadair/mds v0.25.15 h1:i8CUqtfgbCqbvZ++L7lm8No3cOeic9YKF4vHEvEoj+Y=
github.com/creachadair/mds v0.25.15/go.mod h1:XtMfRW15sjd1iOi1Z1k+dq0pRsR5xPbulpoTrpyhk8w=
github.com/creachadair/msync v0.8.2 h1:ujvc/SVJPn+bFwmjUHucXNTTn3opVe2YbQ46mBCnP08=
github.com/creachadair/msync v0.8.2/go.mod h1:LzxqD9kfIl/O3DczkwOgJplLPqwrTbIhINlf9bHIsEY=
github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc=
github.com/creachadair/taskgroup v0.13.2/go.mod h1:i3V1Zx7H8RjwljUEeUWYT30Lmb9poewSb2XI1yTwD0g=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d h1:QRKpU+9ZBDs62LyBfwhZkJdB5DJX2Sm3p4kUh7l1aA0=
github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d/go.mod h1:SUxUaAK/0UG5lYyZR1L1nC4AaYYvSSYTWQSH3FPcxKU=
github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33 h1:ucRHb6/lvW/+mTEIGbvhcYU3S8+uSNkuMjx/qZFfhtM=
github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33/go.mod h1:c9O8+fpSOX1DM8cPNSkX/qsBWdkD4yd2dpciOWQjpBw=
github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e h1:vUmf0yezR0y7jJ5pceLHthLaYf4bA5T14B6q39S4q2Q=
github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e/go.mod h1:YTIHhz/QFSYnu/EhlF2SpU2Uk+32abacUYA5ZPljz1A=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/djherbis/times v1.6.0 h1:w2ctJ92J8fBvWPxugmXIv7Nz7Q3iDMKNx9v5ocVH20c=
github.com/djherbis/times v1.6.0/go.mod h1:gOHeRAz2h+VJNZ5Gmc/o7iD9k4wW7NMVqieYCY99oc0=
github.com/docker/cli v29.2.1+incompatible h1:n3Jt0QVCN65eiVBoUTZQM9mcQICCJt3akW4pKAbKdJg=
github.com/docker/cli v29.2.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM=
github.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNun8eiPw=
github.com/felixge/fgprof v0.9.5 h1:8+vR6yu2vvSKn08urWyEuxx75NWPEvybbkBirEpsbVY=
github.com/felixge/fgprof v0.9.5/go.mod h1:yKl+ERSa++RYOs32d8K6WEXCB4uXdLls4ZaZPpayhMM=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/gaissmai/bart v0.26.1 h1:+w4rnLGNlA2GDVn382Tfe3jOsK5vOr5n4KmigJ9lbTo=
github.com/gaissmai/bart v0.26.1/go.mod h1:GREWQfTLRWz/c5FTOsIw+KkscuFkIV5t8Rp7Nd1Td5c=
github.com/github/fakeca v0.1.0 h1:Km/MVOFvclqxPM9dZBC4+QE564nU4gz4iZ0D9pMw28I=
github.com/github/fakeca v0.1.0/go.mod h1:+bormgoGMMuamOscx7N91aOuUST7wdaJ2rNjeohylyo=
github.com/glebarez/go-sqlite v1.22.0 h1:uAcMJhaA6r3LHMTFgP0SifzgXg46yJkgxqyuyec+ruQ=
github.com/glebarez/go-sqlite v1.22.0/go.mod h1:PlBIdHe0+aUEFn+r2/uthrWq4FxbzugL0L8Li6yQJbc=
github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GMw=
github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
github.com/go-chi/metrics v0.1.1 h1:CXhbnkAVVjb0k73EBRQ6Z2YdWFnbXZgNtg1Mboguibk=
github.com/go-chi/metrics v0.1.1/go.mod h1:mcGTM1pPalP7WCtb+akNYFO/lwNwBBLCuedepqjoPn4=
github.com/go-gormigrate/gormigrate/v2 v2.1.5 h1:1OyorA5LtdQw12cyJDEHuTrEV3GiXiIhS4/QTTa/SM8=
github.com/go-gormigrate/gormigrate/v2 v2.1.5/go.mod h1:mj9ekk/7CPF3VjopaFvWKN2v7fN3D9d3eEOAXRhi/+M=
github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=
github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e h1:Lf/gRkoycfOBPa42vU2bbgPurFong6zXeFtPoxholzU=
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e/go.mod h1:uNVvRXArCGbZ508SxYYTC5v1JWoz2voff5pm25jU1Ok=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=
github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737 h1:cf60tHxREO3g1nroKr2osU3JWZsJzkfi7rEg+oAB0Lo=
github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737/go.mod h1:MIS0jDzbU/vuM9MC4YnBITCv+RYuTRq8dJzmCrFsK9g=
github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM=
github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw=
github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/KY=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.2.2 h1:TUR3TgtSVDmjiXOgAAyaZbYmIeP3DPkld3jgKGV8mXQ=
github.com/godbus/dbus/v5 v5.2.2/go.mod h1:3AAv2+hPq5rdnr5txxxRwiGjPXamgoIHgz9FPBfOp3c=
github.com/gofrs/uuid/v5 v5.4.0 h1:EfbpCTjqMuGyq5ZJwxqzn3Cbr2d0rUZU7v5ycAk/e/0=
github.com/gofrs/uuid/v5 v5.4.0/go.mod h1:CDOjlDMVAtN56jqyRUZh58JT31Tiw7/oQyEXZV+9bD8=
github.com/golang-jwt/jwt/v5 v5.3.1 h1:kYf81DTWFe7t+1VvL7eS+jKFVWaUnK9cB1qbwn63YCY=
github.com/golang-jwt/jwt/v5 v5.3.1/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-github v17.0.0+incompatible h1:N0LgJ1j65A7kfXrZnUDaYCs/Sf4rEjNlfyDHW9dolSY=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.2.0 h1:yhqkPbu2/OH+V9BfpCVPZkNmUXhb2gBxJArfhIxNtP0=
github.com/google/go-querystring v1.2.0/go.mod h1:8IFJqpSRITyJ8QhQ13bmbeMBDfmeEJZD5A0egEOmkqU=
github.com/google/go-tpm v0.9.4 h1:awZRf9FwOeTunQmHoDYSHJps3ie6f1UlhS1fOdPEt1I=
github.com/google/go-tpm v0.9.4/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806 h1:wG8RYIyctLhdFk6Vl1yPGtSRtwGpVkWyZww1OCil2MI=
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806/go.mod h1:Beg6V6zZ3oEn0JuiUQ4wqwuyqqzasOltcoXPtgLbFp4=
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/pprof v0.0.0-20260202012954-cb029daf43ef h1:xpF9fUHpoIrrjX24DURVKiwHcFpw19ndIs+FwTSMbno=
github.com/google/pprof v0.0.0-20260202012954-cb029daf43ef/go.mod h1:MxpfABSjhmINe3F1It9d+8exIHFvUqtLIRCdOGNXqiI=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gookit/assert v0.1.1 h1:lh3GcawXe/p+cU7ESTZ5Ui3Sm/x8JWpIis4/1aF0mY0=
github.com/gookit/assert v0.1.1/go.mod h1:jS5bmIVQZTIwk42uXl4lyj4iaaxx32tqH16CFj0VX2E=
github.com/gookit/color v1.4.2/go.mod h1:fqRyamkC1W8uxl+lxCQxOT09l/vYfZ+QeiX3rKQHCoQ=
github.com/gookit/color v1.5.0/go.mod h1:43aQb+Zerm/BWh2GnrgOQm7ffz7tvQXEKV6BFMl7wAo=
github.com/gookit/color v1.6.0 h1:JjJXBTk1ETNyqyilJhkTXJYYigHG24TM9Xa2M1xAhRA=
github.com/gookit/color v1.6.0/go.mod h1:9ACFc7/1IpHGBW8RwuDm/0YEnhg3dwwXpoMsmtyHfjs=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7 h1:X+2YciYSxvMQK0UZ7sg45ZVabVZBeBuvMkmuI2V3Fak=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7/go.mod h1:lW34nIZuQ8UDPdkon5fmfp2l3+ZkQ2me/+oecHYLOII=
github.com/hashicorp/go-version v1.8.0 h1:KAkNb1HAiZd1ukkxDFGmokVZe1Xy9HG6NUp+bPle2i4=
github.com/hashicorp/go-version v1.8.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hdevalence/ed25519consensus v0.2.0 h1:37ICyZqdyj0lAZ8P4D1d1id3HqbbG1N3iBb1Tb4rdcU=
github.com/hdevalence/ed25519consensus v0.2.0/go.mod h1:w3BHWjwJbFU29IRHL1Iqkw3sus+7FctEyM4RqDxYNzo=
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw=
github.com/illarion/gonotify/v3 v3.0.2 h1:O7S6vcopHexutmpObkeWsnzMJt/r1hONIEogeVNmJMk=
github.com/illarion/gonotify/v3 v3.0.2/go.mod h1:HWGPdPe817GfvY3w7cx6zkbzNZfi3QjcBm/wgVvEL1U=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/insomniacslk/dhcp v0.0.0-20240129002554-15c9b8791914 h1:kD8PseueGeYiid/Mmcv17Q0Qqicc4F46jcX22L/e/Hs=
github.com/insomniacslk/dhcp v0.0.0-20240129002554-15c9b8791914/go.mod h1:3A9PQ1cunSDF/1rbTq99Ts4pVnycWg+vlPkfeD2NLFI=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.8.0 h1:TYPDoleBBme0xGSAX3/+NujXXtpZn9HBONkQC7IEZSo=
github.com/jackc/pgx/v5 v5.8.0/go.mod h1:QVeDInX2m9VyzvNeiCJVjCkNFqzsNb43204HshNSZKw=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/jagottsicher/termcolor v1.0.2 h1:fo0c51pQSuLBN1+yVX2ZE+hE+P7ULb/TY8eRowJnrsM=
github.com/jagottsicher/termcolor v1.0.2/go.mod h1:RcH8uFwF/0wbEdQmi83rjmlJ+QOKdMSE9Rc1BEB7zFo=
github.com/jellydator/ttlcache/v3 v3.1.0 h1:0gPFG0IHHP6xyUyXq+JaD8fwkDCqgqwohXNJBcYE71g=
github.com/jellydator/ttlcache/v3 v3.1.0/go.mod h1:hi7MGFdMAwZna5n2tuvh63DvFLzVKySzCVW6+0gA2n4=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jsimonetti/rtnetlink v1.4.2 h1:Df9w9TZ3npHTyDn0Ev9e1uzmN2odmXd0QX+J5GTEn90=
github.com/jsimonetti/rtnetlink v1.4.2/go.mod h1:92s6LJdE+1iOrw+F2/RO7LYI2Qd8pPpFNNUYW06gcoM=
github.com/kamstrup/intmap v0.5.2 h1:qnwBm1mh4XAnW9W9Ue9tZtTff8pS6+s6iKF6JRIV2Dk=
github.com/kamstrup/intmap v0.5.2/go.mod h1:gWUVWHKzWj8xpJVFf5GC0O26bWmv3GqdnIX/LMT6Aq4=
github.com/klauspost/compress v1.18.3 h1:9PJRvfbmTabkOX8moIpXPbMMbYN60bWImDDU7L+/6zw=
github.com/klauspost/compress v1.18.3/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.10/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
github.com/klauspost/cpuid/v2 v2.2.3 h1:sxCkb+qR91z4vsqw4vGGZlDgPz3G7gjaLyK3V8y70BU=
github.com/klauspost/cpuid/v2 v2.2.3/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a h1:+RR6SqnTkDLWyICxS1xpjCi/3dhyV+TgZwA6Ww3KncQ=
github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a/go.mod h1:YTtCCM3ryyfiu4F7t8HQ1mxvp1UBdWM2r6Xa+nGWvDk=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs=
github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.11.1 h1:wuChtj2hfsGmmx3nf1m7xC2XpK6OtelS2shMY+bGMtI=
github.com/lib/pq v1.11.1/go.mod h1:/p+8NSbOcwzAEI7wiMXFlgydTwcgTr3OSKMsD2BitpA=
github.com/lithammer/fuzzysearch v1.1.8 h1:/HIuJnjHuXS8bKaiTMeeDlW2/AyIWk2brx1V8LFgLN4=
github.com/lithammer/fuzzysearch v1.1.8/go.mod h1:IdqeyBClc3FFqSzYq/MXESsS4S0FsZ5ajtkr5xPLts4=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mdlayher/genetlink v1.3.2 h1:KdrNKe+CTu+IbZnm/GVUMXSqBBLqcGpRDa0xkQy56gw=
github.com/mdlayher/genetlink v1.3.2/go.mod h1:tcC3pkCrPUGIKKsCsp0B3AdaaKuHtaxoJRz3cc+528o=
github.com/mdlayher/netlink v1.8.0 h1:e7XNIYJKD7hUct3Px04RuIGJbBxy1/c4nX7D5YyvvlM=
github.com/mdlayher/netlink v1.8.0/go.mod h1:UhgKXUlDQhzb09DrCl2GuRNEglHmhYoWAHid9HK3594=
github.com/mdlayher/sdnotify v1.0.0 h1:Ma9XeLVN/l0qpyx1tNeMSeTjCPH6NtuD6/N9XdTlQ3c=
github.com/mdlayher/sdnotify v1.0.0/go.mod h1:HQUmpM4XgYkhDLtd+Uad8ZFK1T9D5+pNxnXQjCeJlGE=
github.com/mdlayher/socket v0.5.1 h1:VZaqt6RkGkt2OE9l3GcC6nZkqD3xKeQLyfleW/uBcos=
github.com/mdlayher/socket v0.5.1/go.mod h1:TjPLHI1UgwEv5J1B5q0zTZq12A/6H7nKmtTanQE37IQ=
github.com/miekg/dns v1.1.58 h1:ca2Hdkz+cDg/7eNF6V56jjzuZ4aCAE+DbVkILdQWG/4=
github.com/miekg/dns v1.1.58/go.mod h1:Ypv+3b/KadlvW9vJfXOTf300O4UqaHFzFCuHz+rPkBY=
github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
github.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/moby/api v1.53.0 h1:PihqG1ncw4W+8mZs69jlwGXdaYBeb5brF6BL7mPIS/w=
github.com/moby/moby/api v1.53.0/go.mod h1:8mb+ReTlisw4pS6BRzCMts5M49W5M7bKt1cJy/YbAqc=
github.com/moby/moby/client v0.2.2 h1:Pt4hRMCAIlyjL3cr8M5TrXCwKzguebPAc2do2ur7dEM=
github.com/moby/moby/client v0.2.2/go.mod h1:2EkIPVNCqR05CMIzL1mfA07t0HvVUUOl85pasRz/GmQ=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=
github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/morikuni/aec v1.1.0 h1:vBBl0pUnvi/Je71dsRrhMBtreIqNMYErSAbEeb8jrXQ=
github.com/morikuni/aec v1.1.0/go.mod h1:xDRgiq/iw5l+zkao76YTKzKttOp2cwPEne25HDkJnBw=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646 h1:zYyBkD/k9seD2A7fsi6Oo2LfFZAehjjQMERAvZLEDnQ=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8=
github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25 h1:9bCMuD3TcnjeqjPT2gSlha4asp8NvgcFRYExCaikCxk=
github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25/go.mod h1:eDjgYHYDJbPLBLsyZ6qRaugP0mX8vePOhZ5id1fdzJw=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/opencontainers/runc v1.3.2 h1:GUwgo0Fx9M/pl2utaSYlJfdBcXAB/CZXDxe322lvJ3Y=
github.com/opencontainers/runc v1.3.2/go.mod h1:F7UQQEsxcjUNnFpT1qPLHZBKYP7yWwk6hq8suLy9cl0=
github.com/orisano/pixelmatch v0.0.0-20220722002657-fb0b55479cde/go.mod h1:nZgzbfBr3hhjoZnS66nKrHmduYNpc34ny7RK4z5/HM0=
github.com/ory/dockertest/v3 v3.12.0 h1:3oV9d0sDzlSQfHtIaB5k6ghUCVMVLpAY8hwrqoCyRCw=
github.com/ory/dockertest/v3 v3.12.0/go.mod h1:aKNDTva3cp8dwOWwb9cWuX84aH5akkxXRvO7KCwWVjE=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/petermattis/goid v0.0.0-20250813065127-a731cc31b4fe/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 h1:KPpdlQLZcHfTMQRi6bFQ7ogNO0ltFT4PmtwTLW4W+14=
github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/philip-bui/grpc-zerolog v1.0.1 h1:EMacvLRUd2O1K0eWod27ZP5CY1iTNkhBDLSN+Q4JEvA=
github.com/philip-bui/grpc-zerolog v1.0.1/go.mod h1:qXbiq/2X4ZUMMshsqlWyTHOcw7ns+GZmlqZZN05ZHcQ=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pires/go-proxyproto v0.9.2 h1:H1UdHn695zUVVmB0lQ354lOWHOy6TZSpzBl3tgN0s1U=
github.com/pires/go-proxyproto v0.9.2/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo=
github.com/pkg/sftp v1.13.6 h1:JFZT4XbOU7l77xGSpOdW+pwIMqP044IyjXX6FGyEKFo=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus-community/pro-bing v0.7.0 h1:KFYFbxC2f2Fp6c+TyxbCOEarf7rbnzr9Gw8eIb0RfZA=
github.com/prometheus-community/pro-bing v0.7.0/go.mod h1:Moob9dvlY50Bfq6i88xIwfyw7xLFHH69LUgx9n5zqCE=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=
github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
github.com/pterm/pterm v0.12.27/go.mod h1:PhQ89w4i95rhgE+xedAoqous6K9X+r6aSOI2eFF7DZI=
github.com/pterm/pterm v0.12.29/go.mod h1:WI3qxgvoQFFGKGjGnJR849gU0TsEOvKn5Q8LlY1U7lg=
github.com/pterm/pterm v0.12.30/go.mod h1:MOqLIyMOgmTDz9yorcYbcw+HsgoZo3BQfg2wtl3HEFE=
github.com/pterm/pterm v0.12.31/go.mod h1:32ZAWZVXD7ZfG0s8qqHXePte42kdz8ECtRyEejaWgXU=
github.com/pterm/pterm v0.12.33/go.mod h1:x+h2uL+n7CP/rel9+bImHD5lF3nM9vJj80k9ybiiTTE=
github.com/pterm/pterm v0.12.36/go.mod h1:NjiL09hFhT/vWjQHSj1athJpx6H8cjpHXNAK5bUw8T8=
github.com/pterm/pterm v0.12.40/go.mod h1:ffwPLwlbXxP+rxT0GsgDTzS3y3rmpAO1NMjUkGTYf8s=
github.com/pterm/pterm v0.12.82 h1:+D9wYhCaeaK0FIQoZtqbNQuNpe2lB2tajKKsTd5paVQ=
github.com/pterm/pterm v0.12.82/go.mod h1:TyuyrPjnxfwP+ccJdBTeWHtd/e0ybQHkOS/TakajZCw=
github.com/puzpuzpuz/xsync/v4 v4.4.0 h1:vlSN6/CkEY0pY8KaB0yqo/pCLZvp9nhdbBdjipT4gWo=
github.com/puzpuzpuz/xsync/v4 v4.4.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/safchain/ethtool v0.7.0 h1:rlJzfDetsVvT61uz8x1YIcFn12akMfuPulHtZjtb7Is=
github.com/safchain/ethtool v0.7.0/go.mod h1:MenQKEjXdfkjD3mp2QdCk8B/hwvkrlOTm/FD4gTpFxQ=
github.com/sagikazarmark/locafero v0.12.0 h1:/NQhBAkUb4+fH1jivKHWusDYFjMOOKU88eegjfxfHb4=
github.com/sagikazarmark/locafero v0.12.0/go.mod h1:sZh36u/YSZ918v0Io+U9ogLYQJ9tLLBmM4eneO6WwsI=
github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sasha-s/go-deadlock v0.3.6 h1:TR7sfOnZ7x00tWPfD397Peodt57KzMDo+9Ae9rMiUmw=
github.com/sasha-s/go-deadlock v0.3.6/go.mod h1:CUqNyyvMxTyjFqDT7MRg9mb4Dv/btmGTqSR+rky/UXo=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e h1:PtWT87weP5LWHEY//SWsYkSO3RWRZo4OSWagh3YD2vQ=
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e/go.mod h1:XrBNfAFN+pwoWuksbFS9Ccxnopa15zJGgXRFN90l3K4=
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 h1:Gzfnfk2TWrk8Jj4P4c1a3CtQyMaTVCznlkLZI++hok4=
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55/go.mod h1:4k4QO+dQ3R5FofL+SanAUZe+/QfeK0+OIuwDIRu2vSg=
github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869 h1:SRL6irQkKGQKKLzvQP/ke/2ZuB7Py5+XuqtOgSj+iMM=
github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869/go.mod h1:ikbF+YT089eInTp9f2vmvy4+ZVnW5hzX1q2WknxSprQ=
github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a h1:a6TNDN9CgG+cYjaeN8l2mc4kSz2iMiCDQxPEyltUV/I=
github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo=
github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7 h1:uFsXVBE9Qr4ZoF094vE6iYTLDl0qCiKzYXlL6UeWObU=
github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7/go.mod h1:NzVQi3Mleb+qzq8VmcWpSkcSYxXIg0DkI6XDzpVkhJ0=
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc h1:24heQPtnFR+yfntqhI3oAu9i27nEojcQ4NuBQOo5ZFA=
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc/go.mod h1:f93CXfllFsO9ZQVq+Zocb1Gp4G5Fz0b0rXHLOzt/Djc=
github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d h1:N+TtzIaGYREbLbKZB0WU0vVnMSfaqUkSf3qMEi03hwE=
github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d/go.mod h1:6NU8H/GLPVX2TnXAY1duyy9ylLaHwFpr0X93UPiYmNI=
github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f h1:CL6gu95Y1o2ko4XiWPvWkJka0QmQWcUyPywWVWDPQbQ=
github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09 h1:Fc9lE2cDYJbBLpCqnVmoLdf7McPqoHZiDxDPPpkJM04=
github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09/go.mod h1:QMNhC4XGFiXKngHVLXE+ERDmQoH0s5fD7AUxupykocQ=
github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368 h1:0tpDdAj9sSfSZg4gMwNTdqMP592sBrq2Sm0w6ipnh7k=
github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6/go.mod h1:ZXRML051h7o4OcI0d3AaILDIad/Xw0IkXaHM17dic1Y=
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da h1:jVRUZPRs9sqyKlYHHzHjAqKN+6e/Vog6NpHYeNPJqOw=
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da/go.mod h1:BOm5fXUBFM+m9woLNBoxI9TaBXXhGNP50LX/TGIvGb4=
github.com/tailscale/xnet v0.0.0-20240729143630-8497ac4dab2e h1:zOGKqN5D5hHhiYUp091JqK7DPCqSARyUfduhGUY8Bek=
github.com/tailscale/xnet v0.0.0-20240729143630-8497ac4dab2e/go.mod h1:orPd6JZXXRyuDusYilywte7k094d7dycXXU5YnWsrwg=
github.com/tc-hib/winres v0.2.1 h1:YDE0FiP0VmtRaDn7+aaChp1KiF4owBiJa5l964l5ujA=
github.com/tc-hib/winres v0.2.1/go.mod h1:C/JaNhH3KBvhNKVbvdlDWkbMDO9H4fKKDaN7/07SSuk=
github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e h1:IWllFTiDjjLIf2oeKxpIUmtiDV5sn71VgeQgg6vcE7k=
github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e/go.mod h1:d7u6HkTYKSv5m6MCKkOQlHwaShTMl3HjqSGW3XtVhXM=
github.com/tink-crypto/tink-go/v2 v2.6.0 h1:+KHNBHhWH33Vn+igZWcsgdEPUxKwBMEe0QC60t388v4=
github.com/tink-crypto/tink-go/v2 v2.6.0/go.mod h1:2WbBA6pfNsAfBwDCggboaHeB2X29wkU8XHtGwh2YIk8=
github.com/u-root/u-root v0.14.0 h1:Ka4T10EEML7dQ5XDvO9c3MBN8z4nuSnGjcd1jmU2ivg=
github.com/u-root/u-root v0.14.0/go.mod h1:hAyZorapJe4qzbLWlAkmSVCJGbfoU9Pu4jpJ1WMluqE=
github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701 h1:pyC9PaHYZFgEKFdlp3G8RaCKgVpHZnecvArXvPXcFkM=
github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701/go.mod h1:P3a5rG4X7tI17Nn3aOIAYr5HbIMukwXG0urG0WuL8OA=
github.com/vishvananda/netns v0.0.5 h1:DfiHV+j8bA32MFM7bfEunvT8IAqQ/NzSJHtcmW5zdEY=
github.com/vishvananda/netns v0.0.5/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0 h1:7iP2uCb7sGddAr30RRS6xjKy7AZ2JtTOPA3oolgVSw8=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0/go.mod h1:c7hN3ddxs/z6q9xwvfLPk+UHlWRQyaeR1LdgfL/66l0=
go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0 h1:QKdN8ly8zEMrByybbQgv8cWBcdAarwmIPZ6FThrWXJs=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0/go.mod h1:bTdK1nhqF76qiPoCCdyFIV+N/sRHYXYCTQc+3VCi3MI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 h1:wVZXIWjQSeSmMoxF74LzAnpVQOAFDo3pPji9Y4SOFKc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0/go.mod h1:khvBS2IggMFNwZK/6lEeHg/W57h/IX6J4URh57fuI40=
go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 h1:Tl++JLUCe4sxGu8cTpDzRLd3tN7US4hOxG5YpKCzkek=
go4.org/mem v0.0.0-20240501181205-ae6ca9944745/go.mod h1:reUoABIJ9ikfM5sgtSF3Wushcza7+WeD01VB9Lirh3g=
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba h1:0b9z3AuHCjxk0x/opv64kcgZLBseWJUpBw5I82+2U4M=
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba/go.mod h1:PLyyIXexvUFg3Owu6p/WfdlivPbZJsZdgWZlrGope/Y=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f h1:phY1HzDcf18Aq9A8KkmRtY9WvOFIxN8wgfvy6Zm1DV8=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w=
golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
golang.org/x/net v0.50.0/go.mod h1:UgoSli3F/pBgdJBHCTc+tp3gmrU4XswgGRgtnwWTfyM=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211013075003-97ac67df715c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20 h1:7ei4lp52gK1uSejlA8AZl5AJjeLUOHBQscRQZUgAcu0=
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20/go.mod h1:ZdbssH/1SOVnjnDlXzxDHK2MCidiqXtbYccJNzNYPEE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20 h1:Jr5R2J6F6qWyzINc+4AM8t5pfUz6beZpHp678GNrMbE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=
google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/driver/postgres v1.6.0 h1:2dxzU8xJ+ivvqTRph34QX+WrRaJlmfyPqXmoGVjMBa4=
gorm.io/driver/postgres v1.6.0/go.mod h1:vUw0mrGgrTK+uPHEhAdV4sfFELrByKVGnaVRkXDhtWo=
gorm.io/gorm v1.31.1 h1:7CA8FTFz/gRfgqgpeKIBcervUn3xSyPUmr6B2WXJ7kg=
gorm.io/gorm v1.31.1/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 h1:2gap+Kh/3F47cO6hAu3idFvsJ0ue6TRcEi2IUkv/F8k=
gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633/go.mod h1:5DMfjtclAbTIjbXqO1qCe2K5GKKxWz2JHvCChuTcJEM=
honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0 h1:5SXjd4ET5dYijLaf0O3aOenC0Z4ZafIWSpjUzsQaNho=
honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0/go.mod h1:EPDDhEZqVHhWuPI5zPAsjU0U7v9xNIWjoOVyZ5ZcniQ=
howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM=
howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
software.sslmate.com/src/go-pkcs12 v0.4.0 h1:H2g08FrTvSFKUj+D309j1DPfk5APnIdAQAB8aEykJ5k=
software.sslmate.com/src/go-pkcs12 v0.4.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI=
tailscale.com v1.94.1 h1:0dAst/ozTuFkgmxZULc3oNwR9+qPIt5ucvzH7kaM0Jw=
tailscale.com v1.94.1/go.mod h1:gLnVrEOP32GWvroaAHHGhjSGMPJ1i4DvqNwEg+Yuov4=
zgo.at/zcache/v2 v2.4.1 h1:Dfjoi8yI0Uq7NCc4lo2kaQJJmp9Mijo21gef+oJstbY=
zgo.at/zcache/v2 v2.4.1/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk=
zombiezen.com/go/postgrestest v1.0.1 h1:aXoADQAJmZDU3+xilYVut0pHhgc0sF8ZspPW9gFNwP4=
zombiezen.com/go/postgrestest v1.0.1/go.mod h1:marlZezr+k2oSJrvXHnZUs1olHqpE9czlz8ZYkVxliQ=
================================================
FILE: hscontrol/app.go
================================================
package hscontrol
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"net"
"net/http"
_ "net/http/pprof" // nolint
"os"
"os/signal"
"path/filepath"
"runtime"
"strings"
"sync"
"syscall"
"testing"
"time"
"github.com/cenkalti/backoff/v5"
"github.com/davecgh/go-spew/spew"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/metrics"
grpcRuntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/capver"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/derp"
derpServer "github.com/juanfont/headscale/hscontrol/derp/server"
"github.com/juanfont/headscale/hscontrol/dns"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/hscontrol/state"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util"
zerolog "github.com/philip-bui/grpc-zerolog"
"github.com/pkg/profile"
zl "github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/sasha-s/go-deadlock"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/reflection"
"google.golang.org/grpc/status"
"tailscale.com/envknob"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
"tailscale.com/types/key"
"tailscale.com/util/dnsname"
)
var (
errSTUNAddressNotSet = errors.New("STUN address not set")
errUnsupportedLetsEncryptChallengeType = errors.New(
"unknown value for Lets Encrypt challenge type",
)
errEmptyInitialDERPMap = errors.New(
"initial DERPMap is empty, Headscale requires at least one entry",
)
)
var (
debugDeadlock = envknob.Bool("HEADSCALE_DEBUG_DEADLOCK")
debugDeadlockTimeout = envknob.RegisterDuration("HEADSCALE_DEBUG_DEADLOCK_TIMEOUT")
)
func init() {
deadlock.Opts.Disable = !debugDeadlock
if debugDeadlock {
deadlock.Opts.DeadlockTimeout = debugDeadlockTimeout()
deadlock.Opts.PrintAllCurrentGoroutines = true
}
}
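// Illustrative usage (a sketch, not part of the build): deadlock detection
// is opt-in via the environment variables registered above, e.g.
//
//	HEADSCALE_DEBUG_DEADLOCK=1 HEADSCALE_DEBUG_DEADLOCK_TIMEOUT=30s headscale serve
//
// The timeout value follows the time.ParseDuration format.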
const (
AuthPrefix = "Bearer "
updateInterval = 5 * time.Second
privateKeyFileMode = 0o600
headscaleDirPerm = 0o700
)
// Headscale represents the base app of the service.
type Headscale struct {
cfg *types.Config
state *state.State
noisePrivateKey *key.MachinePrivate
ephemeralGC *db.EphemeralGarbageCollector
DERPServer *derpServer.DERPServer
// Things that generate changes
extraRecordMan *dns.ExtraRecordsMan
authProvider AuthProvider
mapBatcher *mapper.Batcher
clientStreamsOpen sync.WaitGroup
}
var (
profilingEnabled = envknob.Bool("HEADSCALE_DEBUG_PROFILING_ENABLED")
profilingPath = envknob.String("HEADSCALE_DEBUG_PROFILING_PATH")
tailsqlEnabled = envknob.Bool("HEADSCALE_DEBUG_TAILSQL_ENABLED")
tailsqlStateDir = envknob.String("HEADSCALE_DEBUG_TAILSQL_STATE_DIR")
tailsqlTSKey = envknob.String("TS_AUTHKEY")
dumpConfig = envknob.Bool("HEADSCALE_DEBUG_DUMP_CONFIG")
)
func NewHeadscale(cfg *types.Config) (*Headscale, error) {
var err error
if profilingEnabled {
runtime.SetBlockProfileRate(1)
}
noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
if err != nil {
return nil, fmt.Errorf("reading or creating Noise protocol private key: %w", err)
}
s, err := state.NewState(cfg)
if err != nil {
return nil, fmt.Errorf("init state: %w", err)
}
app := Headscale{
cfg: cfg,
noisePrivateKey: noisePrivateKey,
clientStreamsOpen: sync.WaitGroup{},
state: s,
}
// Initialize ephemeral garbage collector
ephemeralGC := db.NewEphemeralGarbageCollector(func(ni types.NodeID) {
node, ok := app.state.GetNodeByID(ni)
if !ok {
log.Error().Uint64("node.id", ni.Uint64()).Msg("ephemeral node deletion failed")
log.Debug().Caller().Uint64("node.id", ni.Uint64()).Msg("ephemeral node deletion failed because node not found in NodeStore")
return
}
policyChanged, err := app.state.DeleteNode(node)
if err != nil {
log.Error().Err(err).EmbedObject(node).Msg("ephemeral node deletion failed")
return
}
app.Change(policyChanged)
log.Debug().Caller().EmbedObject(node).Msg("ephemeral node deleted because garbage collection timeout reached")
})
app.ephemeralGC = ephemeralGC
var authProvider AuthProvider
authProvider = NewAuthProviderWeb(cfg.ServerURL)
if cfg.OIDC.Issuer != "" {
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
oidcProvider, err := NewAuthProviderOIDC(
ctx,
&app,
cfg.ServerURL,
&cfg.OIDC,
)
if err != nil {
if cfg.OIDC.OnlyStartIfOIDCIsAvailable {
return nil, err
} else {
log.Warn().Err(err).Msg("failed to set up OIDC provider, falling back to CLI based authentication")
}
} else {
authProvider = oidcProvider
}
}
app.authProvider = authProvider
if app.cfg.TailcfgDNSConfig != nil && app.cfg.TailcfgDNSConfig.Proxied { // if MagicDNS
// TODO(kradalby): revisit why this takes a list.
var magicDNSDomains []dnsname.FQDN
if cfg.PrefixV4 != nil {
magicDNSDomains = append(
magicDNSDomains,
util.GenerateIPv4DNSRootDomain(*cfg.PrefixV4)...)
}
if cfg.PrefixV6 != nil {
magicDNSDomains = append(
magicDNSDomains,
util.GenerateIPv6DNSRootDomain(*cfg.PrefixV6)...)
}
// we might have routes already from Split DNS
if app.cfg.TailcfgDNSConfig.Routes == nil {
app.cfg.TailcfgDNSConfig.Routes = make(map[string][]*dnstype.Resolver)
}
for _, d := range magicDNSDomains {
app.cfg.TailcfgDNSConfig.Routes[d.WithoutTrailingDot()] = nil
}
}
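// Illustration (assumed prefix): with the common default PrefixV4 of
// 100.64.0.0/10, GenerateIPv4DNSRootDomain yields reverse zones such as
// 64.100.in-addr.arpa. through 127.100.in-addr.arpa.; each is added to
// Routes with a nil resolver so clients send those PTR lookups to MagicDNS.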
if cfg.DERP.ServerEnabled {
derpServerKey, err := readOrCreatePrivateKey(cfg.DERP.ServerPrivateKeyPath)
if err != nil {
return nil, fmt.Errorf("reading or creating DERP server private key: %w", err)
}
if derpServerKey.Equal(*noisePrivateKey) {
return nil, errors.New(
"DERP server private key and noise private key are the same",
)
}
if cfg.DERP.ServerVerifyClients {
t := http.DefaultTransport.(*http.Transport) //nolint:forcetypeassert
t.RegisterProtocol(
derpServer.DerpVerifyScheme,
derpServer.NewDERPVerifyTransport(app.handleVerifyRequest),
)
}
embeddedDERPServer, err := derpServer.NewDERPServer(
cfg.ServerURL,
key.NodePrivate(*derpServerKey),
&cfg.DERP,
)
if err != nil {
return nil, err
}
app.DERPServer = embeddedDERPServer
}
return &app, nil
}
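// Typical lifecycle (sketch; configuration loading elided):
//
//	app, err := NewHeadscale(cfg) // cfg is a fully populated *types.Config
//	if err != nil {
//		log.Fatal().Err(err).Msg("initialisation failed")
//	}
//	if err := app.Serve(); err != nil {
//		log.Fatal().Err(err).Msg("serve failed")
//	}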
// Redirect to our TLS URL.
func (h *Headscale) redirect(w http.ResponseWriter, req *http.Request) {
target := h.cfg.ServerURL + req.URL.RequestURI()
http.Redirect(w, req, target, http.StatusFound)
}
func (h *Headscale) scheduledTasks(ctx context.Context) {
expireTicker := time.NewTicker(updateInterval)
defer expireTicker.Stop()
lastExpiryCheck := time.Unix(0, 0)
derpTickerChan := make(<-chan time.Time)
if h.cfg.DERP.AutoUpdate && h.cfg.DERP.UpdateFrequency != 0 {
derpTicker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
defer derpTicker.Stop()
derpTickerChan = derpTicker.C
}
var extraRecordsUpdate <-chan []tailcfg.DNSRecord
if h.extraRecordMan != nil {
extraRecordsUpdate = h.extraRecordMan.UpdateCh()
} else {
extraRecordsUpdate = make(chan []tailcfg.DNSRecord)
}
for {
select {
case <-ctx.Done():
log.Info().Caller().Msg("scheduled task worker is shutting down.")
return
case <-expireTicker.C:
var (
expiredNodeChanges []change.Change
changed bool
)
lastExpiryCheck, expiredNodeChanges, changed = h.state.ExpireExpiredNodes(lastExpiryCheck)
if changed {
log.Trace().Interface("changes", expiredNodeChanges).Msgf("expiring nodes")
// Send the changes directly since they're already in the new format
for _, nodeChange := range expiredNodeChanges {
h.Change(nodeChange)
}
}
case <-derpTickerChan:
log.Info().Msg("fetching DERPMap updates")
derpMap, err := backoff.Retry(ctx, func() (*tailcfg.DERPMap, error) { //nolint:contextcheck
derpMap, err := derp.GetDERPMap(h.cfg.DERP)
if err != nil {
return nil, err
}
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
region, _ := h.DERPServer.GenerateRegion()
derpMap.Regions[region.RegionID] = &region
}
return derpMap, nil
}, backoff.WithBackOff(backoff.NewExponentialBackOff()))
if err != nil {
log.Error().Err(err).Msg("failed to build new DERPMap, retrying later")
continue
}
h.state.SetDERPMap(derpMap)
h.Change(change.DERPMap())
case records, ok := <-extraRecordsUpdate:
if !ok {
continue
}
h.cfg.TailcfgDNSConfig.ExtraRecords = records
h.Change(change.ExtraRecords())
}
}
}
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
req any,
info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler,
) (any, error) {
// Check if the request is coming from the on-server client.
// This is not secure, but it keeps compatibility with the
// "legacy" database-based client. It is also needed for
// grpc-gateway to be able to connect to the server.
client, _ := peer.FromContext(ctx)
log.Trace().
Caller().
Str("client_address", client.Addr.String()).
Msg("Client is trying to authenticate")
meta, ok := metadata.FromIncomingContext(ctx)
if !ok {
return ctx, status.Errorf(
codes.InvalidArgument,
"retrieving metadata",
)
}
authHeader, ok := meta["authorization"]
if !ok {
return ctx, status.Errorf(
codes.Unauthenticated,
"authorization token not supplied",
)
}
token := authHeader[0]
if !strings.HasPrefix(token, AuthPrefix) {
return ctx, status.Error(
codes.Unauthenticated,
`missing "Bearer " prefix in "Authorization" header`,
)
}
valid, err := h.state.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix))
if err != nil {
return ctx, status.Error(codes.Internal, "validating token")
}
if !valid {
log.Info().
Str("client_address", client.Addr.String()).
Msg("invalid token")
return ctx, status.Error(codes.Unauthenticated, "invalid token")
}
return handler(ctx, req)
}
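// Example call against this interceptor (illustrative; assumes grpcurl is
// installed, an API key was created with `headscale apikeys create`, and the
// default gRPC port 50443):
//
//	grpcurl -H "authorization: Bearer <api-key>" \
//		headscale.example.com:50443 headscale.v1.HeadscaleService/ListNodes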
func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(
writer http.ResponseWriter,
req *http.Request,
) {
log.Trace().
Caller().
Str("client_address", req.RemoteAddr).
Msg("HTTP authentication invoked")
authHeader := req.Header.Get("Authorization")
writeUnauthorized := func(statusCode int) {
writer.WriteHeader(statusCode)
if _, err := writer.Write([]byte("Unauthorized")); err != nil { //nolint:noinlineerr
log.Error().Err(err).Msg("writing HTTP response failed")
}
}
if !strings.HasPrefix(authHeader, AuthPrefix) {
log.Error().
Caller().
Str("client_address", req.RemoteAddr).
Msg(`missing "Bearer " prefix in "Authorization" header`)
writeUnauthorized(http.StatusUnauthorized)
return
}
valid, err := h.state.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix))
if err != nil {
log.Info().
Caller().
Err(err).
Str("client_address", req.RemoteAddr).
Msg("failed to validate token")
writeUnauthorized(http.StatusUnauthorized)
return
}
if !valid {
log.Info().
Str("client_address", req.RemoteAddr).
Msg("invalid token")
writeUnauthorized(http.StatusUnauthorized)
return
}
next.ServeHTTP(writer, req)
})
}
// ensureUnixSocketIsAbsent checks whether the configured path for headscale's
// unix socket is free and removes a leftover socket file if it is not.
func (h *Headscale) ensureUnixSocketIsAbsent() error {
// File does not exist, all fine
if _, err := os.Stat(h.cfg.UnixSocket); errors.Is(err, os.ErrNotExist) { //nolint:noinlineerr
return nil
}
return os.Remove(h.cfg.UnixSocket)
}
func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *chi.Mux {
r := chi.NewRouter()
r.Use(metrics.Collector(metrics.CollectorOpts{
Host: false,
Proto: true,
Skip: func(r *http.Request) bool {
return r.Method != http.MethodOptions
},
}))
r.Use(middleware.RequestID)
r.Use(middleware.RealIP)
r.Use(middleware.RequestLogger(&zerologRequestLogger{}))
r.Use(middleware.Recoverer)
r.Post(ts2021UpgradePath, h.NoiseUpgradeHandler)
r.Get("/robots.txt", h.RobotsHandler)
r.Get("/health", h.HealthHandler)
r.Get("/version", h.VersionHandler)
r.Get("/key", h.KeyHandler)
r.Get("/register/{auth_id}", h.authProvider.RegisterHandler)
r.Get("/auth/{auth_id}", h.authProvider.AuthHandler)
if provider, ok := h.authProvider.(*AuthProviderOIDC); ok {
r.Get("/oidc/callback", provider.OIDCCallbackHandler)
}
r.Get("/apple", h.AppleConfigMessage)
r.Get("/apple/{platform}", h.ApplePlatformConfig)
r.Get("/windows", h.WindowsConfigMessage)
// TODO(kristoffer): move swagger into a package
r.Get("/swagger", headscale.SwaggerUI)
r.Get("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1)
r.Post("/verify", h.VerifyHandler)
if h.cfg.DERP.ServerEnabled {
r.HandleFunc("/derp", h.DERPServer.DERPHandler)
r.HandleFunc("/derp/probe", derpServer.DERPProbeHandler)
r.HandleFunc("/derp/latency-check", derpServer.DERPProbeHandler)
r.HandleFunc("/bootstrap-dns", derpServer.DERPBootstrapDNSHandler(h.state.DERPMap()))
}
r.Route("/api", func(r chi.Router) {
r.Use(h.httpAuthenticationMiddleware)
r.HandleFunc("/v1/*", grpcMux.ServeHTTP)
})
r.Get("/favicon.ico", FaviconHandler)
r.Get("/", BlankHandler)
return r
}
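// Quick smoke test of the public routes registered above (illustrative
// hostname; expect HTTP 200 when the service is healthy):
//
//	curl -i https://headscale.example.com/health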
// Serve launches the HTTP and gRPC servers that serve Headscale and its API.
//
//nolint:gocyclo // complex server startup function
func (h *Headscale) Serve() error {
var err error
capver.CanOldCodeBeCleanedUp()
if profilingEnabled {
if profilingPath != "" {
err = os.MkdirAll(profilingPath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}
defer profile.Start(profile.ProfilePath(profilingPath)).Stop()
} else {
defer profile.Start().Stop()
}
}
if dumpConfig {
spew.Dump(h.cfg)
}
versionInfo := types.GetVersionInfo()
log.Info().Str("version", versionInfo.Version).Str("commit", versionInfo.Commit).Msg("starting headscale")
log.Info().
Str("minimum_version", capver.TailscaleVersion(capver.MinSupportedCapabilityVersion)).
Msg("Clients with a lower minimum version will be rejected")
h.mapBatcher = mapper.NewBatcherAndMapper(h.cfg, h.state)
h.mapBatcher.Start()
defer h.mapBatcher.Close()
if h.cfg.DERP.ServerEnabled {
// When embedded DERP is enabled we always need a STUN server
if h.cfg.DERP.STUNAddr == "" {
return errSTUNAddressNotSet
}
go h.DERPServer.ServeSTUN()
}
derpMap, err := derp.GetDERPMap(h.cfg.DERP)
if err != nil {
return fmt.Errorf("getting DERPMap: %w", err)
}
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
region, _ := h.DERPServer.GenerateRegion()
derpMap.Regions[region.RegionID] = &region
}
if len(derpMap.Regions) == 0 {
return errEmptyInitialDERPMap
}
h.state.SetDERPMap(derpMap)
// Start ephemeral node garbage collector and schedule all nodes
// that are already in the database and ephemeral. If they are still
// around between restarts, they will reconnect and the GC will
// be cancelled.
go h.ephemeralGC.Start()
ephmNodes := h.state.ListEphemeralNodes()
for _, node := range ephmNodes.All() {
h.ephemeralGC.Schedule(node.ID(), h.cfg.EphemeralNodeInactivityTimeout)
}
if h.cfg.DNSConfig.ExtraRecordsPath != "" {
h.extraRecordMan, err = dns.NewExtraRecordsManager(h.cfg.DNSConfig.ExtraRecordsPath)
if err != nil {
return fmt.Errorf("setting up extrarecord manager: %w", err)
}
h.cfg.TailcfgDNSConfig.ExtraRecords = h.extraRecordMan.Records()
go h.extraRecordMan.Run()
defer h.extraRecordMan.Close()
}
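// Illustrative content for the file at DNSConfig.ExtraRecordsPath (a JSON
// list of tailcfg.DNSRecord; the name and address below are examples):
//
//	[
//		{"name": "grafana.example.com", "type": "A", "value": "100.64.0.3"}
//	]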
// Start all scheduled tasks, e.g. expiring nodes, DERP updates and
// extra record updates.
scheduleCtx, scheduleCancel := context.WithCancel(context.Background())
defer scheduleCancel()
go h.scheduledTasks(scheduleCtx)
if zl.GlobalLevel() == zl.TraceLevel {
zerolog.RespLog = true
} else {
zerolog.RespLog = false
}
// Prepare group for running listeners
errorGroup := new(errgroup.Group)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
//
//
// Set up LOCAL listeners
//
err = h.ensureUnixSocketIsAbsent()
if err != nil {
return fmt.Errorf("removing old socket file: %w", err)
}
socketDir := filepath.Dir(h.cfg.UnixSocket)
err = util.EnsureDir(socketDir)
if err != nil {
return fmt.Errorf("setting up unix socket: %w", err)
}
socketListener, err := new(net.ListenConfig).Listen(context.Background(), "unix", h.cfg.UnixSocket)
if err != nil {
return fmt.Errorf("setting up gRPC socket: %w", err)
}
// Change socket permissions
if err := os.Chmod(h.cfg.UnixSocket, h.cfg.UnixSocketPermission); err != nil { //nolint:noinlineerr
return fmt.Errorf("changing gRPC socket permission: %w", err)
}
grpcGatewayMux := grpcRuntime.NewServeMux()
// Make the grpc-gateway connect to grpc over socket
grpcGatewayConn, err := grpc.Dial( //nolint:staticcheck // SA1019: deprecated but supported in 1.x
h.cfg.UnixSocket,
[]grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer),
}...,
)
if err != nil {
return fmt.Errorf("setting up gRPC gateway via socket: %w", err)
}
// Connect to the gRPC server over the local unix socket to skip
// authentication.
err = v1.RegisterHeadscaleServiceHandler(ctx, grpcGatewayMux, grpcGatewayConn)
if err != nil {
return fmt.Errorf("registering Headscale API service to gRPC: %w", err)
}
// Start the local gRPC server without TLS and without authentication
grpcSocket := grpc.NewServer(
// Uncomment to debug grpc communication.
// zerolog.UnaryInterceptor(),
)
v1.RegisterHeadscaleServiceServer(grpcSocket, newHeadscaleV1APIServer(h))
reflection.Register(grpcSocket)
errorGroup.Go(func() error { return grpcSocket.Serve(socketListener) })
//
//
// Set up REMOTE listeners
//
tlsConfig, err := h.getTLSSettings()
if err != nil {
return fmt.Errorf("configuring TLS settings: %w", err)
}
//
//
// gRPC setup
//
// We are sadly not able to run gRPC and HTTPS (HTTP/2) on the same
// port because the connection mux does not support telling them
// apart, since the protocols are so similar. There are multiple
// issues open, and we can revisit this if that changes:
// https://github.com/soheilhy/cmux/issues/68
// https://github.com/soheilhy/cmux/issues/91
var (
grpcServer *grpc.Server
grpcListener net.Listener
)
if tlsConfig != nil || h.cfg.GRPCAllowInsecure {
log.Info().Msgf("enabling remote gRPC at %s", h.cfg.GRPCAddr)
grpcOptions := []grpc.ServerOption{
grpc.ChainUnaryInterceptor(
h.grpcAuthenticationInterceptor,
// Uncomment to debug grpc communication.
// zerolog.NewUnaryServerInterceptor(),
),
}
if tlsConfig != nil {
grpcOptions = append(grpcOptions,
grpc.Creds(credentials.NewTLS(tlsConfig)),
)
} else {
log.Warn().Msg("gRPC is running without security")
}
grpcServer = grpc.NewServer(grpcOptions...)
v1.RegisterHeadscaleServiceServer(grpcServer, newHeadscaleV1APIServer(h))
reflection.Register(grpcServer)
grpcListener, err = new(net.ListenConfig).Listen(context.Background(), "tcp", h.cfg.GRPCAddr)
if err != nil {
return fmt.Errorf("binding to TCP address: %w", err)
}
errorGroup.Go(func() error { return grpcServer.Serve(grpcListener) })
log.Info().
Msgf("listening and serving gRPC on: %s", h.cfg.GRPCAddr)
}
//
//
// HTTP setup
//
// This is the regular router that we expose
// over our main Addr
router := h.createRouter(grpcGatewayMux)
httpServer := &http.Server{
Addr: h.cfg.Addr,
Handler: router,
ReadTimeout: types.HTTPTimeout,
// Long polling should not have any timeout; this is overridden
// further down the chain.
WriteTimeout: types.HTTPTimeout,
}
var httpListener net.Listener
if tlsConfig != nil {
httpServer.TLSConfig = tlsConfig
httpListener, err = tls.Listen("tcp", h.cfg.Addr, tlsConfig)
} else {
httpListener, err = new(net.ListenConfig).Listen(context.Background(), "tcp", h.cfg.Addr)
}
if err != nil {
return fmt.Errorf("binding to TCP address: %w", err)
}
errorGroup.Go(func() error { return httpServer.Serve(httpListener) })
log.Info().
Msgf("listening and serving HTTP on: %s", h.cfg.Addr)
// Only start debug/metrics server if address is configured
var debugHTTPServer *http.Server
var debugHTTPListener net.Listener
if h.cfg.MetricsAddr != "" {
debugHTTPListener, err = (&net.ListenConfig{}).Listen(ctx, "tcp", h.cfg.MetricsAddr)
if err != nil {
return fmt.Errorf("binding to TCP address: %w", err)
}
debugHTTPServer = h.debugHTTPServer()
errorGroup.Go(func() error { return debugHTTPServer.Serve(debugHTTPListener) })
log.Info().
Msgf("listening and serving debug and metrics on: %s", h.cfg.MetricsAddr)
} else {
log.Info().Msg("metrics server disabled (metrics_listen_addr is empty)")
}
var (
tailsqlContext context.Context
tailsqlCancel context.CancelFunc
)
if tailsqlEnabled {
if h.cfg.Database.Type != types.DatabaseSqlite {
//nolint:gocritic // exitAfterDefer: Fatal exits during initialization before servers start
log.Fatal().
Str("type", h.cfg.Database.Type).
Msgf("tailsql only support %q", types.DatabaseSqlite)
}
if tailsqlTSKey == "" {
//nolint:gocritic // exitAfterDefer: Fatal exits during initialization before servers start
log.Fatal().Msg("tailsql requires TS_AUTHKEY to be set")
}
// Use a cancellable context so tailsql can be stopped during shutdown.
tailsqlContext, tailsqlCancel = context.WithCancel(ctx)
go runTailSQLService(tailsqlContext, util.TSLogfWrapper(), tailsqlStateDir, h.cfg.Database.Sqlite.Path) //nolint:errcheck
}
// Handle common process-killing signals so we can gracefully shut down:
sigc := make(chan os.Signal, 1)
signal.Notify(sigc,
syscall.SIGHUP,
syscall.SIGINT,
syscall.SIGTERM,
syscall.SIGQUIT)
sigFunc := func(c chan os.Signal) {
// Wait for one of the registered signals:
for {
sig := <-c
switch sig {
case syscall.SIGHUP:
log.Info().
Str("signal", sig.String()).
Msg("Received SIGHUP, reloading ACL policy")
if h.cfg.Policy.IsEmpty() {
continue
}
changes, err := h.state.ReloadPolicy()
if err != nil {
log.Error().Err(err).Msg("reloading policy")
continue
}
h.Change(changes...)
default:
info := func(msg string) { log.Info().Msg(msg) }
log.Info().
Str("signal", sig.String()).
Msg("Received signal to stop, shutting down gracefully")
scheduleCancel()
h.ephemeralGC.Close()
// Gracefully shut down servers
shutdownCtx, cancel := context.WithTimeout(
context.WithoutCancel(ctx),
types.HTTPShutdownTimeout,
)
defer cancel()
if debugHTTPServer != nil {
info("shutting down debug http server")
err := debugHTTPServer.Shutdown(shutdownCtx)
if err != nil {
log.Error().Err(err).Msg("failed to shutdown prometheus http")
}
}
info("shutting down main http server")
err := httpServer.Shutdown(shutdownCtx)
if err != nil {
log.Error().Err(err).Msg("failed to shutdown http")
}
info("closing batcher")
h.mapBatcher.Close()
info("waiting for netmap stream to close")
h.clientStreamsOpen.Wait()
info("shutting down grpc server (socket)")
grpcSocket.GracefulStop()
if grpcServer != nil {
info("shutting down grpc server (external)")
grpcServer.GracefulStop()
grpcListener.Close()
}
if tailsqlCancel != nil {
info("shutting down tailsql")
tailsqlCancel()
}
// Close network listeners
info("closing network listeners")
if debugHTTPListener != nil {
debugHTTPListener.Close()
}
httpListener.Close()
grpcGatewayConn.Close()
// Stop listening (and unlink the socket if unix type):
info("closing socket listener")
socketListener.Close()
// Close state connections
info("closing state and database")
err = h.state.Close()
if err != nil {
log.Error().Err(err).Msg("failed to close state")
}
log.Info().
Msg("Headscale stopped")
return
}
}
}
errorGroup.Go(func() error {
sigFunc(sigc)
return nil
})
return errorGroup.Wait()
}
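// getTLSSettings returns the TLS configuration used by the public
// listeners: an autocert (Let's Encrypt) config, a config loaded from the
// configured certificate/key files, or nil when running without TLS.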
func (h *Headscale) getTLSSettings() (*tls.Config, error) {
var err error
if h.cfg.TLS.LetsEncrypt.Hostname != "" {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().
Msg("Listening with TLS but ServerURL does not start with https://")
}
certManager := autocert.Manager{
Prompt: autocert.AcceptTOS,
HostPolicy: autocert.HostWhitelist(h.cfg.TLS.LetsEncrypt.Hostname),
Cache: autocert.DirCache(h.cfg.TLS.LetsEncrypt.CacheDir),
Client: &acme.Client{
DirectoryURL: h.cfg.ACMEURL,
HTTPClient: &http.Client{
Transport: &acmeLogger{
rt: http.DefaultTransport,
},
},
},
Email: h.cfg.ACMEEmail,
}
switch h.cfg.TLS.LetsEncrypt.ChallengeType {
case types.TLSALPN01ChallengeType:
// Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737)
// The RFC requires that the validation is done on port 443; in other words, headscale
// must be reachable on port 443.
return certManager.TLSConfig(), nil
case types.HTTP01ChallengeType:
// Configuration via autocert with HTTP-01. This requires listening on
// port 80 for the certificate validation in addition to the headscale
// service, which can be configured to run on any other port.
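// Illustrative config for this mode (key names assumed from headscale's
// example configuration; verify against your version):
//
//	tls_letsencrypt_hostname: "headscale.example.com"
//	tls_letsencrypt_challenge_type: HTTP-01
//	tls_letsencrypt_listen: ":http"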
server := &http.Server{
Addr: h.cfg.TLS.LetsEncrypt.Listen,
Handler: certManager.HTTPHandler(http.HandlerFunc(h.redirect)),
ReadTimeout: types.HTTPTimeout,
}
go func() {
err := server.ListenAndServe()
log.Fatal().
Caller().
Err(err).
Msg("failed to set up a HTTP server")
}()
return certManager.TLSConfig(), nil
default:
return nil, errUnsupportedLetsEncryptChallengeType
}
} else if h.cfg.TLS.CertPath == "" {
if !strings.HasPrefix(h.cfg.ServerURL, "http://") {
log.Warn().Msg("listening without TLS but ServerURL does not start with http://")
}
// No TLS configured; serve plain HTTP.
return nil, nil //nolint:nilnil // nil TLS config means plain HTTP
} else {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().Msg("listening with TLS but ServerURL does not start with https://")
}
tlsConfig := &tls.Config{
NextProtos: []string{"http/1.1"},
Certificates: make([]tls.Certificate, 1),
MinVersion: tls.VersionTLS12,
}
tlsConfig.Certificates[0], err = tls.LoadX509KeyPair(h.cfg.TLS.CertPath, h.cfg.TLS.KeyPath)
return tlsConfig, err
}
}
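// readOrCreatePrivateKey loads the machine private key from path. If the
// file does not exist, it generates a new key and persists it with
// privateKeyFileMode, creating the parent directory if needed.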
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
dir := filepath.Dir(path)
err := util.EnsureDir(dir)
if err != nil {
return nil, fmt.Errorf("ensuring private key directory: %w", err)
}
privateKey, err := os.ReadFile(path)
if errors.Is(err, os.ErrNotExist) {
log.Info().Str("path", path).Msg("no private key file at path, creating...")
machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText()
if err != nil {
return nil, fmt.Errorf(
"converting private key to string for saving: %w",
err,
)
}
err = os.WriteFile(path, machineKeyStr, privateKeyFileMode)
if err != nil {
return nil, fmt.Errorf(
"saving private key to disk at path %q: %w",
path,
err,
)
}
return &machineKey, nil
} else if err != nil {
return nil, fmt.Errorf("reading private key file: %w", err)
}
trimmedPrivateKey := strings.TrimSpace(string(privateKey))
var machineKey key.MachinePrivate
if err = machineKey.UnmarshalText([]byte(trimmedPrivateKey)); err != nil { //nolint:noinlineerr
return nil, fmt.Errorf("parsing private key: %w", err)
}
return &machineKey, nil
}
// Change is used to send changes to nodes.
// All changes should be enqueued here; empty changes are
// automatically ignored.
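//
// Illustrative usage (editor's sketch based on call sites in this file):
//
//	_, c, err := h.state.SetNodeExpiry(node.ID(), &expiry)
//	if err == nil {
//		h.Change(c) // an empty change is dropped automatically
//	}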
func (h *Headscale) Change(cs ...change.Change) {
h.mapBatcher.AddWork(cs...)
}
// HTTPHandler returns an http.Handler for the Headscale control server.
// The handler serves the Tailscale control protocol including the /key
// endpoint and /ts2021 Noise upgrade path.
func (h *Headscale) HTTPHandler() http.Handler {
return h.createRouter(grpcRuntime.NewServeMux())
}
// NoisePublicKey returns the server's Noise protocol public key.
func (h *Headscale) NoisePublicKey() key.MachinePublic {
return h.noisePrivateKey.Public()
}
// GetState returns the server's state manager for programmatic access
// to users, nodes, policies, and other server state.
func (h *Headscale) GetState() *state.State {
return h.state
}
// SetServerURLForTest updates the server URL in the configuration.
// This is needed for test servers where the URL is not known until
// the HTTP test server starts.
// It panics when called outside of tests.
func (h *Headscale) SetServerURLForTest(tb testing.TB, url string) {
tb.Helper()
h.cfg.ServerURL = url
}
// StartBatcherForTest initialises and starts the map response batcher.
// It registers a cleanup function on tb to stop the batcher.
// It panics when called outside of tests.
func (h *Headscale) StartBatcherForTest(tb testing.TB) {
tb.Helper()
h.mapBatcher = mapper.NewBatcherAndMapper(h.cfg, h.state)
h.mapBatcher.Start()
tb.Cleanup(func() { h.mapBatcher.Close() })
}
// StartEphemeralGCForTest starts the ephemeral node garbage collector.
// It registers a cleanup function on tb to stop the collector.
// It panics when called outside of tests.
func (h *Headscale) StartEphemeralGCForTest(tb testing.TB) {
tb.Helper()
go h.ephemeralGC.Start()
tb.Cleanup(func() { h.ephemeralGC.Close() })
}
// acmeLogger is HTTP round-tripper middleware that inspects ACME/autocert
// HTTPS calls and logs failures.
type acmeLogger struct {
rt http.RoundTripper
}
// RoundTrip logs ACME/autocert failures, both when err != nil and when
// the HTTP status code indicates an error.
func (l *acmeLogger) RoundTrip(req *http.Request) (*http.Response, error) {
resp, err := l.rt.RoundTrip(req)
if err != nil {
log.Error().Err(err).Str("url", req.URL.String()).Msg("acme request failed")
return nil, err
}
if resp.StatusCode >= http.StatusBadRequest {
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
log.Error().Int("status_code", resp.StatusCode).Str("url", req.URL.String()).Bytes("body", body).Msg("acme request returned error")
}
return resp, nil
}
// zerologRequestLogger implements chi's middleware.LogFormatter
// to route HTTP request logs through zerolog.
type zerologRequestLogger struct{}
func (z *zerologRequestLogger) NewLogEntry(
r *http.Request,
) middleware.LogEntry {
return &zerologLogEntry{
method: r.Method,
path: r.URL.Path,
proto: r.Proto,
remote: r.RemoteAddr,
}
}
type zerologLogEntry struct {
method string
path string
proto string
remote string
}
func (e *zerologLogEntry) Write(
status, bytes int,
header http.Header,
elapsed time.Duration,
extra any,
) {
log.Info().
Str("method", e.method).
Str("path", e.path).
Str("proto", e.proto).
Str("remote", e.remote).
Int("status", status).
Int("bytes", bytes).
Dur("elapsed", elapsed).
Msg("http request")
}
func (e *zerologLogEntry) Panic(
v any,
stack []byte,
) {
log.Error().
Interface("panic", v).
Bytes("stack", stack).
Msg("http handler panic")
}
================================================
FILE: hscontrol/assets/assets.go
================================================
// Package assets provides embedded static assets for Headscale.
// All static files (favicon, CSS, SVG) are embedded here for
// centralized asset management.
package assets
import (
_ "embed"
)
// Favicon is the embedded favicon.png file served at /favicon.ico
//
//go:embed favicon.png
var Favicon []byte
// CSS is the embedded style.css stylesheet used in HTML templates.
// Contains Material for MkDocs design system styles.
//
//go:embed style.css
var CSS string
// SVG is the embedded headscale.svg logo used in HTML templates.
//
//go:embed headscale.svg
var SVG string
================================================
FILE: hscontrol/assets/style.css
================================================
/* CSS Variables from Material for MkDocs */
:root {
--md-default-fg-color: rgba(0, 0, 0, 0.87);
--md-default-fg-color--light: rgba(0, 0, 0, 0.54);
--md-default-fg-color--lighter: rgba(0, 0, 0, 0.32);
--md-default-fg-color--lightest: rgba(0, 0, 0, 0.07);
--md-code-fg-color: #36464e;
--md-code-bg-color: #f5f5f5;
--md-primary-fg-color: #4051b5;
--md-accent-fg-color: #526cfe;
--md-typeset-a-color: var(--md-primary-fg-color);
--md-text-font: "Roboto", -apple-system, BlinkMacSystemFont, "Segoe UI", "Helvetica Neue", Arial, sans-serif;
--md-code-font: "Roboto Mono", "SF Mono", Monaco, "Cascadia Code", Consolas, "Courier New", monospace;
}
/* Base Typography */
.md-typeset {
font-size: 0.8rem;
line-height: 1.6;
color: var(--md-default-fg-color);
font-family: var(--md-text-font);
overflow-wrap: break-word;
text-align: left;
}
/* Headings */
.md-typeset h1 {
color: var(--md-default-fg-color--light);
font-size: 2em;
line-height: 1.3;
margin: 0 0 1.25em;
font-weight: 300;
letter-spacing: -0.01em;
}
.md-typeset h1:not(:first-child) {
margin-top: 2em;
}
.md-typeset h2 {
font-size: 1.5625em;
line-height: 1.4;
margin: 2.4em 0 0.64em;
font-weight: 300;
letter-spacing: -0.01em;
color: var(--md-default-fg-color--light);
}
.md-typeset h3 {
font-size: 1.25em;
line-height: 1.5;
margin: 2em 0 0.8em;
font-weight: 400;
letter-spacing: -0.01em;
color: var(--md-default-fg-color--light);
}
/* Paragraphs and block elements */
.md-typeset p {
margin: 1em 0;
}
.md-typeset blockquote,
.md-typeset dl,
.md-typeset figure,
.md-typeset ol,
.md-typeset pre,
.md-typeset ul {
margin-bottom: 1em;
margin-top: 1em;
}
/* Lists */
.md-typeset ol,
.md-typeset ul {
padding-left: 2em;
}
/* Links */
.md-typeset a {
color: var(--md-typeset-a-color);
text-decoration: none;
word-break: break-word;
}
.md-typeset a:hover,
.md-typeset a:focus {
color: var(--md-accent-fg-color);
}
/* Code (inline) */
.md-typeset code {
background-color: var(--md-code-bg-color);
color: var(--md-code-fg-color);
border-radius: 0.1rem;
font-size: 0.85em;
font-family: var(--md-code-font);
padding: 0 0.2941176471em;
word-break: break-word;
}
/* Code blocks (pre) */
.md-typeset pre {
display: block;
line-height: 1.4;
margin: 1em 0;
overflow-x: auto;
}
.md-typeset pre > code {
background-color: var(--md-code-bg-color);
color: var(--md-code-fg-color);
display: block;
padding: 0.7720588235em 1.1764705882em;
font-family: var(--md-code-font);
font-size: 0.85em;
line-height: 1.4;
overflow-wrap: break-word;
word-wrap: break-word;
white-space: pre-wrap;
}
/* Links in code */
.md-typeset a code {
color: currentcolor;
}
/* Logo */
.headscale-logo {
display: block;
width: 400px;
max-width: 100%;
height: auto;
margin: 0 0 3rem 0;
padding: 0;
}
@media (max-width: 768px) {
.headscale-logo {
width: 200px;
margin-left: 0;
}
}
================================================
FILE: hscontrol/auth.go
================================================
package hscontrol
import (
"cmp"
"context"
"errors"
"fmt"
"net/http"
"net/url"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
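// AuthProvider abstracts the interactive authentication backend (for
// example the built-in web flow or OIDC). It serves the register/auth HTTP
// endpoints and builds the URLs clients are redirected to for an auth ID.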
type AuthProvider interface {
RegisterHandler(w http.ResponseWriter, r *http.Request)
AuthHandler(w http.ResponseWriter, r *http.Request)
RegisterURL(authID types.AuthID) string
AuthURL(authID types.AuthID) string
}
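// handleRegister is the entry point for register requests from clients.
// It dispatches, in order: logout with a past expiry, legacy logout
// (Auth == nil), followup polling for an in-flight registration, pre-auth
// key registration, and finally interactive registration.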
func (h *Headscale) handleRegister(
ctx context.Context,
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
// Check for logout/expiry FIRST, before checking auth key.
// Tailscale clients may send logout requests with BOTH a past expiry AND an auth key.
// A past expiry takes precedence - it's a logout regardless of other fields.
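// Illustrative shape of such a logout request (values are examples only):
//
//	tailcfg.RegisterRequest{
//		NodeKey: nodeKey.Public(),
//		Expiry:  time.Now().Add(-1 * time.Minute), // in the past => logout
//		Auth:    &tailcfg.RegisterResponseAuth{AuthKey: "..."}, // may still be set
//	}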
if !req.Expiry.IsZero() && req.Expiry.Before(time.Now()) {
log.Debug().
Str("node.key", req.NodeKey.ShortString()).
Time("expiry", req.Expiry).
Bool("has_auth", req.Auth != nil).
Msg("Detected logout attempt with past expiry")
// This is a logout attempt (expiry in the past)
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
log.Debug().
EmbedObject(node).
Bool("is_ephemeral", node.IsEphemeral()).
Bool("has_authkey", node.AuthKey().Valid()).
Msg("Found existing node for logout, calling handleLogout")
resp, err := h.handleLogout(node, req, machineKey)
if err != nil {
return nil, fmt.Errorf("handling logout: %w", err)
}
if resp != nil {
return resp, nil
}
} else {
log.Warn().
Str("node.key", req.NodeKey.ShortString()).
Msg("Logout attempt but node not found in NodeStore")
}
}
// If the register request does not contain an Auth struct, it means we are logging
// out an existing node (legacy logout path for clients that send Auth=nil).
if req.Auth == nil {
// If the register request presents a NodeKey that is currently in use, we will
// check if the node needs to be sent to re-auth, or if the node is logging out.
// We do not look up nodes by [key.MachinePublic] as it might belong to multiple
// nodes, separated by users, and this path only handles expiry/logout.
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
// When tailscaled restarts, it sends RegisterRequest with Auth=nil and Expiry=zero.
// Return the current node state without modification.
// See: https://github.com/juanfont/headscale/issues/2862
if req.Expiry.IsZero() && node.Expiry().Valid() && !node.IsExpired() {
return nodeToRegisterResponse(node), nil
}
resp, err := h.handleLogout(node, req, machineKey)
if err != nil {
return nil, fmt.Errorf("handling existing node: %w", err)
}
// If resp is not nil, we have a response to return to the node.
// If resp is nil, we should proceed and see if the node is trying to re-auth.
if resp != nil {
return resp, nil
}
} else {
// If the register request is not attempting to register a node, and
// we cannot match it with an existing node, we consider that unexpected
// as only registered nodes should attempt to log out.
log.Debug().
Str("node.key", req.NodeKey.ShortString()).
Str("machine.key", machineKey.ShortString()).
Bool("unexpected", true).
Msg("received register request with no auth, and no existing node")
}
}
// If the [tailcfg.RegisterRequest] has a Followup URL, it means that the
// node has already started the registration process and we should wait for
// it to finish the original registration.
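// A followup URL has the shape "<server_url>/register/<auth-id>"; the
// auth ID is parsed back out of the path in waitForFollowup.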
if req.Followup != "" {
return h.waitForFollowup(ctx, req, machineKey)
}
// Pre-authenticated keys are handled slightly differently than interactive
// logins, as they can be processed fully synchronously and we can respond
// to the node with the result while it is waiting.
if isAuthKey(req) {
resp, err := h.handleRegisterWithAuthKey(req, machineKey)
if err != nil {
// Preserve HTTPError types so they can be handled properly by the HTTP layer
if httpErr, ok := errors.AsType[HTTPError](err); ok {
return nil, httpErr
}
return nil, fmt.Errorf("handling register with auth key: %w", err)
}
return resp, nil
}
resp, err := h.handleRegisterInteractive(req, machineKey)
if err != nil {
return nil, fmt.Errorf("handling register interactive: %w", err)
}
return resp, nil
}
// handleLogout checks if the [tailcfg.RegisterRequest] is a
// logout attempt from a node. If the node is not attempting to
// log out, it returns a nil response so the caller can continue
// with the re-authentication flow.
func (h *Headscale) handleLogout(
node types.NodeView,
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
// Fail closed if it looks like this is an attempt to modify a node where
// the node key and the machine key the noise session was started with does
// not align.
if node.MachineKey() != machineKey {
return nil, NewHTTPError(http.StatusUnauthorized, "node exist with different machine key", nil)
}
// Note: We do NOT return early if req.Auth is set, because Tailscale clients
// may send logout requests with BOTH a past expiry AND an auth key.
// A past expiry indicates logout, regardless of whether Auth is present.
// The expiry check below will handle the logout logic.
// If the node is expired and this is not a re-authentication attempt,
// force the client to re-authenticate.
// TODO(kradalby): I wonder if this is a path we ever hit?
if node.IsExpired() {
log.Trace().
EmbedObject(node).
Interface("reg.req", req).
Bool("unexpected", true).
Msg("Node key expired, forcing re-authentication")
return &tailcfg.RegisterResponse{
NodeKeyExpired: true,
MachineAuthorized: false,
AuthURL: "", // Client will need to re-authenticate
}, nil
}
// If we get here, the node is not currently expired and is not trying
// to authenticate.
// The node is likely logging out, but before we run that logic, we
// validate that the node is not attempting to tamper with or extend its
// expiry. If it is not, we expire the node or, in the case of an
// ephemeral node, delete it.
// A client attempting to extend its key is not allowed.
if req.Expiry.After(time.Now()) {
return nil, NewHTTPError(http.StatusBadRequest, "extending key is not allowed", nil)
}
// If the request expiry is in the past, we consider it a logout.
// Zero expiry is handled in handleRegister() before calling this function.
if req.Expiry.Before(time.Now()) {
log.Debug().
EmbedObject(node).
Bool("is_ephemeral", node.IsEphemeral()).
Bool("has_authkey", node.AuthKey().Valid()).
Time("req.expiry", req.Expiry).
Msg("Processing logout request with past expiry")
if node.IsEphemeral() {
log.Info().
EmbedObject(node).
Msg("Deleting ephemeral node during logout")
c, err := h.state.DeleteNode(node)
if err != nil {
return nil, fmt.Errorf("deleting ephemeral node: %w", err)
}
h.Change(c)
return &tailcfg.RegisterResponse{
NodeKeyExpired: true,
MachineAuthorized: false,
}, nil
}
log.Debug().
EmbedObject(node).
Msg("Node is not ephemeral, setting expiry instead of deleting")
}
// Update the internal state with the node's new expiry, meaning it is
// logged out.
expiry := req.Expiry
updatedNode, c, err := h.state.SetNodeExpiry(node.ID(), &expiry)
if err != nil {
return nil, fmt.Errorf("setting node expiry: %w", err)
}
h.Change(c)
return nodeToRegisterResponse(updatedNode), nil
}
// isAuthKey reports whether the register request is a registration request
// using a pre-auth key.
func isAuthKey(req tailcfg.RegisterRequest) bool {
return req.Auth != nil && req.Auth.AuthKey != ""
}
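// nodeToRegisterResponse converts a node view into the RegisterResponse
// sent to the client, attributing tagged nodes to the TaggedDevices
// special user and user-owned nodes to their owner.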
func nodeToRegisterResponse(node types.NodeView) *tailcfg.RegisterResponse {
resp := &tailcfg.RegisterResponse{
NodeKeyExpired: node.IsExpired(),
// Headscale does not implement the concept of machine authorization
// so we always return true here.
// Revisit this if #2176 gets implemented.
MachineAuthorized: true,
}
// For tagged nodes, use the TaggedDevices special user
// For user-owned nodes, include User and Login information from the actual user
if node.IsTagged() {
resp.User = types.TaggedDevices.View().TailscaleUser()
resp.Login = types.TaggedDevices.View().TailscaleLogin()
} else if node.Owner().Valid() {
resp.User = node.Owner().TailscaleUser()
resp.Login = node.Owner().TailscaleLogin()
}
return resp
}
func (h *Headscale) waitForFollowup(
ctx context.Context,
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
fu, err := url.Parse(req.Followup)
if err != nil {
return nil, NewHTTPError(http.StatusUnauthorized, "invalid followup URL", err)
}
followupReg, err := types.AuthIDFromString(strings.ReplaceAll(fu.Path, "/register/", ""))
if err != nil {
return nil, NewHTTPError(http.StatusUnauthorized, "invalid registration ID", err)
}
if reg, ok := h.state.GetAuthCacheEntry(followupReg); ok {
select {
case <-ctx.Done():
return nil, NewHTTPError(http.StatusUnauthorized, "registration timed out", err)
case verdict := <-reg.WaitForAuth():
if verdict.Accept() {
if !verdict.Node.Valid() {
// registration is expired in the cache, instruct the client to try a new registration
return h.reqToNewRegisterResponse(req, machineKey)
}
return nodeToRegisterResponse(verdict.Node), nil
}
}
}
// if the follow-up registration isn't found anymore, instruct the client to try a new registration
return h.reqToNewRegisterResponse(req, machineKey)
}
// reqToNewRegisterResponse refreshes the registration flow by creating a new
// registration ID and returning the corresponding AuthURL so the client can
// restart the authentication process.
func (h *Headscale) reqToNewRegisterResponse(
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
newAuthID, err := types.NewAuthID()
if err != nil {
return nil, NewHTTPError(http.StatusInternalServerError, "failed to generate registration ID", err)
}
// Ensure we have a valid hostname
hostname := util.EnsureHostname(
req.Hostinfo.View(),
machineKey.String(),
req.NodeKey.String(),
)
// Ensure we have valid hostinfo
hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
hostinfo.Hostname = hostname
nodeToRegister := types.Node{
Hostname: hostname,
MachineKey: machineKey,
NodeKey: req.NodeKey,
Hostinfo: hostinfo,
LastSeen: new(time.Now()),
}
if !req.Expiry.IsZero() {
nodeToRegister.Expiry = &req.Expiry
}
authRegReq := types.NewRegisterAuthRequest(nodeToRegister)
log.Info().Msgf("new followup node registration using auth id: %s", newAuthID)
h.state.SetAuthCacheEntry(newAuthID, authRegReq)
return &tailcfg.RegisterResponse{
AuthURL: h.authProvider.RegisterURL(newAuthID),
}, nil
}
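// handleRegisterWithAuthKey registers (or re-registers) a node using the
// pre-auth key in req.Auth. It returns a nil response with a nil error
// when an ephemeral node was deleted as part of logout handling.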
func (h *Headscale) handleRegisterWithAuthKey(
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
node, changed, err := h.state.HandleNodeFromPreAuthKey(
req,
machineKey,
)
if err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, NewHTTPError(http.StatusUnauthorized, "invalid pre auth key", nil)
}
if perr, ok := errors.AsType[types.PAKError](err); ok {
return nil, NewHTTPError(http.StatusUnauthorized, perr.Error(), nil)
}
return nil, err
}
// If node is not valid, it means an ephemeral node was deleted during logout
if !node.Valid() {
h.Change(changed)
return nil, nil //nolint:nilnil // intentional: no node to return when ephemeral deleted
}
// This is a bit of a back and forth, but we have a chicken-and-egg
// dependency here.
// Because of the way the policy manager works, we need to have the node
// in the database, then add it to the policy manager, and only then can
// we approve the route. This means we get this dance where the node is
// first added to the database, then we add it to the policy manager via
// nodesChangedHook, and then we can auto-approve the routes.
// As that only approves the struct object, we need to save it again and
// ensure we send an update.
// This works, but might be another good candidate for doing some sort of
// eventbus.
// TODO(kradalby): Maybe this needs to be run as part of the batcher,
// now that we don't update the node/policy here anymore.
routesChange, err := h.state.AutoApproveRoutes(node)
if err != nil {
return nil, fmt.Errorf("auto approving routes: %w", err)
}
// Send both changes. Empty changes are ignored by Change().
h.Change(changed, routesChange)
resp := &tailcfg.RegisterResponse{
MachineAuthorized: true,
NodeKeyExpired: node.IsExpired(),
User: node.Owner().TailscaleUser(),
Login: node.Owner().TailscaleLogin(),
}
log.Trace().
Caller().
Interface("reg.resp", resp).
Interface("reg.req", req).
EmbedObject(node).
Msg("RegisterResponse")
return resp, nil
}
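// handleRegisterInteractive starts an interactive login: it caches the
// pending node under a fresh auth ID and returns the AuthURL the client
// must visit to complete registration.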
func (h *Headscale) handleRegisterInteractive(
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
authID, err := types.NewAuthID()
if err != nil {
return nil, fmt.Errorf("generating registration ID: %w", err)
}
// Ensure we have a valid hostname
hostname := util.EnsureHostname(
req.Hostinfo.View(),
machineKey.String(),
req.NodeKey.String(),
)
// Ensure we have valid hostinfo
hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
if req.Hostinfo == nil {
log.Warn().
Str("machine.key", machineKey.ShortString()).
Str("node.key", req.NodeKey.ShortString()).
Str("generated.hostname", hostname).
Msg("Received registration request with nil hostinfo, generated default hostname")
} else if req.Hostinfo.Hostname == "" {
log.Warn().
Str("machine.key", machineKey.ShortString()).
Str("node.key", req.NodeKey.ShortString()).
Str("generated.hostname", hostname).
Msg("Received registration request with empty hostname, generated default")
}
hostinfo.Hostname = hostname
nodeToRegister := types.Node{
Hostname: hostname,
MachineKey: machineKey,
NodeKey: req.NodeKey,
Hostinfo: hostinfo,
LastSeen: new(time.Now()),
}
if !req.Expiry.IsZero() {
nodeToRegister.Expiry = &req.Expiry
}
authRegReq := types.NewRegisterAuthRequest(nodeToRegister)
h.state.SetAuthCacheEntry(
authID,
authRegReq,
)
log.Info().Msgf("starting node registration using auth id: %s", authID)
return &tailcfg.RegisterResponse{
AuthURL: h.authProvider.RegisterURL(authID),
}, nil
}
================================================
FILE: hscontrol/auth_tags_test.go
================================================
package hscontrol
import (
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
// TestTaggedPreAuthKeyCreatesTaggedNode tests that a PreAuthKey with tags creates
// a tagged node with:
// - Tags from the PreAuthKey
// - Nil UserID (tagged nodes are owned by tags, not a user)
// - IsTagged() returns true.
func TestTaggedPreAuthKeyCreatesTaggedNode(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server", "tag:prod"}
// Create a tagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
require.NotEmpty(t, pak.Tags, "PreAuthKey should have tags")
require.ElementsMatch(t, tags, pak.Tags, "PreAuthKey should have specified tags")
// Register a node using the tagged key
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify the node was created with tags
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
// Tagged nodes are owned by their tags, not a user.
assert.True(t, node.IsTagged(), "Node should be tagged")
assert.ElementsMatch(t, tags, node.Tags().AsSlice(), "Node should have tags from PreAuthKey")
assert.False(t, node.UserID().Valid(), "Tagged node should not have UserID")
// Verify tag membership is reported correctly
assert.True(t, node.HasTag("tag:server"), "Node should have tag:server")
assert.True(t, node.HasTag("tag:prod"), "Node should have tag:prod")
assert.False(t, node.HasTag("tag:other"), "Node should not have tag:other")
}
// TestReAuthDoesNotReapplyTags tests that when a node re-authenticates using the
// same PreAuthKey, the tags are NOT re-applied. Tags are only set during initial
// authentication. This is critical for the container restart scenario (#2830).
//
// NOTE: This test verifies that re-authentication preserves the node's current tags
// without testing tag modification via SetNodeTags (which requires ACL policy setup).
func TestReAuthDoesNotReapplyTags(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
initialTags := []string{"tag:server", "tag:dev"}
// Create a tagged PreAuthKey with reusable=true for re-auth
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, initialTags)
require.NoError(t, err)
// Initial registration
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify initial tags
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
require.True(t, node.IsTagged())
require.ElementsMatch(t, initialTags, node.Tags().AsSlice())
// Re-authenticate with the SAME PreAuthKey (container restart scenario)
// Key behavior: Tags should NOT be re-applied during re-auth
reAuthReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Same key
},
NodeKey: nodeKey.Public(), // Same node key
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
reAuthResp, err := app.handleRegisterWithAuthKey(reAuthReq, machineKey.Public())
require.NoError(t, err)
require.True(t, reAuthResp.MachineAuthorized)
// CRITICAL: Tags should remain unchanged after re-auth
// They should match the original tags, proving they weren't re-applied
nodeAfterReauth, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, nodeAfterReauth.IsTagged(), "Node should still be tagged")
assert.ElementsMatch(t, initialTags, nodeAfterReauth.Tags().AsSlice(), "Tags should remain unchanged on re-auth")
// Verify only one node was created (no duplicates).
// Tagged nodes are not indexed by user, so check the global list.
allNodes := app.state.ListNodes()
assert.Equal(t, 1, allNodes.Len(), "Should have exactly one node")
}
// NOTE: TestSetTagsOnUserOwnedNode functionality is covered by gRPC tests in grpcv1_test.go
// which properly handle ACL policy setup. The test verifies that SetTags can convert
// user-owned nodes to tagged nodes while preserving UserID.
// TestCannotRemoveAllTags tests that attempting to remove all tags from a
// tagged node fails with ErrCannotRemoveAllTags. Once a node is tagged,
// it must always have at least one tag (Tailscale requirement).
func TestCannotRemoveAllTags(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server"}
// Create a tagged node
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify node is tagged
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
require.True(t, node.IsTagged())
// Attempt to remove all tags by setting empty array
_, _, err = app.state.SetNodeTags(node.ID(), []string{})
require.Error(t, err, "Should not be able to remove all tags")
require.ErrorIs(t, err, types.ErrCannotRemoveAllTags, "Error should be ErrCannotRemoveAllTags")
// Verify node still has original tags
nodeAfter, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, nodeAfter.IsTagged(), "Node should still be tagged")
assert.ElementsMatch(t, tags, nodeAfter.Tags().AsSlice(), "Tags should be unchanged")
}
// TestUserOwnedNodeCreatedWithUntaggedPreAuthKey tests that using a PreAuthKey
// without tags creates a user-owned node (no tags, UserID is the owner).
func TestUserOwnedNodeCreatedWithUntaggedPreAuthKey(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("node-owner")
// Create an untagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
require.Empty(t, pak.Tags, "PreAuthKey should not be tagged")
require.Empty(t, pak.Tags, "PreAuthKey should have no tags")
// Register a node
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "user-owned-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify node is user-owned
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
// Critical assertions for user-owned node
assert.False(t, node.IsTagged(), "Node should not be tagged")
assert.False(t, node.IsTagged(), "Node should be user-owned (not tagged)")
assert.Empty(t, node.Tags().AsSlice(), "Node should have no tags")
assert.True(t, node.UserID().Valid(), "Node should have UserID")
assert.Equal(t, user.ID, node.UserID().Get(), "UserID should be the PreAuthKey owner")
}
// TestMultipleNodesWithSameReusableTaggedPreAuthKey tests that a reusable
// PreAuthKey with tags can be used to register multiple nodes, and all nodes
// receive the same tags from the key.
func TestMultipleNodesWithSameReusableTaggedPreAuthKey(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server", "tag:prod"}
// Create a REUSABLE tagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
require.ElementsMatch(t, tags, pak.Tags)
// Register first node
machineKey1 := key.NewMachine()
nodeKey1 := key.NewNode()
regReq1 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node-1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public())
require.NoError(t, err)
require.True(t, resp1.MachineAuthorized)
// Register second node with SAME PreAuthKey
machineKey2 := key.NewMachine()
nodeKey2 := key.NewNode()
regReq2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Same key
},
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node-2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey2.Public())
require.NoError(t, err)
require.True(t, resp2.MachineAuthorized)
// Verify both nodes exist and have the same tags
node1, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found)
node2, found := app.state.GetNodeByNodeKey(nodeKey2.Public())
require.True(t, found)
// Both nodes should be tagged with the same tags
assert.True(t, node1.IsTagged(), "First node should be tagged")
assert.True(t, node2.IsTagged(), "Second node should be tagged")
assert.ElementsMatch(t, tags, node1.Tags().AsSlice(), "First node should have PreAuthKey tags")
assert.ElementsMatch(t, tags, node2.Tags().AsSlice(), "Second node should have PreAuthKey tags")
// Tagged nodes should not have UserID set.
assert.False(t, node1.UserID().Valid(), "First node should not have UserID")
assert.False(t, node2.UserID().Valid(), "Second node should not have UserID")
// Verify we have exactly 2 nodes.
allNodes := app.state.ListNodes()
assert.Equal(t, 2, allNodes.Len(), "Should have exactly two nodes")
}
// TestNonReusableTaggedPreAuthKey tests that a non-reusable PreAuthKey with tags
// can only be used once. The second attempt should fail.
func TestNonReusableTaggedPreAuthKey(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server"}
// Create a NON-REUSABLE tagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, tags)
require.NoError(t, err)
require.ElementsMatch(t, tags, pak.Tags)
// Register first node - should succeed
machineKey1 := key.NewMachine()
nodeKey1 := key.NewNode()
regReq1 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node-1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public())
require.NoError(t, err)
require.True(t, resp1.MachineAuthorized)
// Verify first node was created with tags
node1, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found)
assert.True(t, node1.IsTagged())
assert.ElementsMatch(t, tags, node1.Tags().AsSlice())
// Attempt to register second node with SAME non-reusable key - should fail
machineKey2 := key.NewMachine()
nodeKey2 := key.NewNode()
regReq2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Same non-reusable key
},
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node-2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq2, machineKey2.Public())
require.Error(t, err, "Should not be able to reuse non-reusable PreAuthKey")
// Verify only one node was created.
allNodes := app.state.ListNodes()
assert.Equal(t, 1, allNodes.Len(), "Should have exactly one node")
}
// TestExpiredTaggedPreAuthKey tests that an expired PreAuthKey with tags
// cannot be used to register a node.
func TestExpiredTaggedPreAuthKey(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server"}
// Create a PreAuthKey that has already expired
expiration := time.Now().Add(-1 * time.Hour) // Already expired
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, &expiration, tags)
require.NoError(t, err)
require.ElementsMatch(t, tags, pak.Tags)
// Attempt to register with expired key
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.Error(t, err, "Should not be able to use expired PreAuthKey")
// Verify no node was created
_, found := app.state.GetNodeByNodeKey(nodeKey.Public())
assert.False(t, found, "No node should be created with expired key")
}
// TestSingleVsMultipleTags tests that PreAuthKeys work correctly with both
// a single tag and multiple tags.
func TestSingleVsMultipleTags(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
// Test with single tag
singleTag := []string{"tag:server"}
pak1, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, singleTag)
require.NoError(t, err)
machineKey1 := key.NewMachine()
nodeKey1 := key.NewNode()
regReq1 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak1.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "single-tag-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public())
require.NoError(t, err)
require.True(t, resp1.MachineAuthorized)
node1, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found)
assert.True(t, node1.IsTagged())
assert.ElementsMatch(t, singleTag, node1.Tags().AsSlice())
// Test with multiple tags
multipleTags := []string{"tag:server", "tag:prod", "tag:database"}
pak2, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, multipleTags)
require.NoError(t, err)
machineKey2 := key.NewMachine()
nodeKey2 := key.NewNode()
regReq2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak2.Key,
},
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "multi-tag-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey2.Public())
require.NoError(t, err)
require.True(t, resp2.MachineAuthorized)
node2, found := app.state.GetNodeByNodeKey(nodeKey2.Public())
require.True(t, found)
assert.True(t, node2.IsTagged())
assert.ElementsMatch(t, multipleTags, node2.Tags().AsSlice())
// Verify HasTag works for all tags
assert.True(t, node2.HasTag("tag:server"))
assert.True(t, node2.HasTag("tag:prod"))
assert.True(t, node2.HasTag("tag:database"))
assert.False(t, node2.HasTag("tag:other"))
}
// TestTaggedPreAuthKeyDisablesKeyExpiry tests that nodes registered with
// a tagged PreAuthKey have key expiry disabled (expiry is nil).
func TestTaggedPreAuthKeyDisablesKeyExpiry(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server", "tag:prod"}
// Create a tagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
require.ElementsMatch(t, tags, pak.Tags)
// Register a node using the tagged key
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Client requests an expiry time, but for tagged nodes it should be ignored
clientRequestedExpiry := time.Now().Add(24 * time.Hour)
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-expiry-test",
},
Expiry: clientRequestedExpiry,
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify the node has key expiry DISABLED (expiry is nil/zero)
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
// Critical assertion: Tagged nodes should have expiry disabled
assert.True(t, node.IsTagged(), "Node should be tagged")
assert.False(t, node.Expiry().Valid(), "Tagged node should have expiry disabled (nil)")
}
// TestUntaggedPreAuthKeyPreservesKeyExpiry tests that nodes registered with
// an untagged PreAuthKey preserve the client's requested key expiry.
func TestUntaggedPreAuthKeyPreservesKeyExpiry(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("node-owner")
// Create an untagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
require.Empty(t, pak.Tags, "PreAuthKey should not be tagged")
// Register a node
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Client requests an expiry time
clientRequestedExpiry := time.Now().Add(24 * time.Hour)
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "untagged-expiry-test",
},
Expiry: clientRequestedExpiry,
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify the node has the client's requested expiry
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
// Critical assertion: User-owned nodes should preserve client expiry
assert.False(t, node.IsTagged(), "Node should not be tagged")
assert.True(t, node.Expiry().Valid(), "User-owned node should have expiry set")
// Allow some tolerance for test execution time
assert.WithinDuration(t, clientRequestedExpiry, node.Expiry().Get(), 5*time.Second,
"User-owned node should have the client's requested expiry")
}
// TestTaggedNodeReauthPreservesDisabledExpiry tests that when a tagged node
// re-authenticates, the disabled expiry is preserved (not updated from client request).
func TestTaggedNodeReauthPreservesDisabledExpiry(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server"}
// Create a reusable tagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
// Initial registration
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-reauth-test",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify initial registration has expiry disabled
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
require.True(t, node.IsTagged())
require.False(t, node.Expiry().Valid(), "Initial registration should have expiry disabled")
// Re-authenticate with a NEW expiry request (should be ignored for tagged nodes)
newRequestedExpiry := time.Now().Add(48 * time.Hour)
reAuthReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-reauth-test",
},
Expiry: newRequestedExpiry, // Client requests new expiry
}
reAuthResp, err := app.handleRegisterWithAuthKey(reAuthReq, machineKey.Public())
require.NoError(t, err)
require.True(t, reAuthResp.MachineAuthorized)
// Verify expiry is STILL disabled after re-auth
nodeAfterReauth, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
// Critical assertion: Tagged node should preserve disabled expiry on re-auth
assert.True(t, nodeAfterReauth.IsTagged(), "Node should still be tagged")
assert.False(t, nodeAfterReauth.Expiry().Valid(),
"Tagged node should have expiry PRESERVED as disabled after re-auth")
}
// TestExpiryDuringPersonalToTaggedConversion tests that when a personal node
// is converted to tagged via reauth with RequestTags, the expiry is cleared to nil.
// BUG #3048: Previously expiry was NOT cleared because expiry handling ran
// BEFORE processReauthTags.
func TestExpiryDuringPersonalToTaggedConversion(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("expiry-test-user")
// Update policy to allow user to own tags
err := app.state.UpdatePolicyManagerUsersForTest()
require.NoError(t, err)
policy := `{
"tagOwners": {
"tag:server": ["expiry-test-user@"]
},
"acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
}`
_, err = app.state.SetPolicy([]byte(policy))
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 1: Create user-owned node WITH expiry set
clientExpiry := time.Now().Add(24 * time.Hour)
registrationID1 := types.MustAuthID()
regEntry1 := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey1.Public(),
Hostname: "personal-to-tagged",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "personal-to-tagged",
RequestTags: []string{}, // No tags - user-owned
},
Expiry: &clientExpiry,
})
app.state.SetAuthCacheEntry(registrationID1, regEntry1)
node, _, err := app.state.HandleNodeFromAuthPath(
registrationID1, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.False(t, node.IsTagged(), "Node should be user-owned initially")
require.True(t, node.Expiry().Valid(), "User-owned node should have expiry set")
// Step 2: Re-auth with tags (Personal → Tagged conversion)
nodeKey2 := key.NewNode()
registrationID2 := types.MustAuthID()
regEntry2 := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey2.Public(),
Hostname: "personal-to-tagged",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "personal-to-tagged",
RequestTags: []string{"tag:server"}, // Adding tags
},
Expiry: &clientExpiry, // Client still sends expiry
})
app.state.SetAuthCacheEntry(registrationID2, regEntry2)
nodeAfter, _, err := app.state.HandleNodeFromAuthPath(
registrationID2, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.True(t, nodeAfter.IsTagged(), "Node should be tagged after conversion")
// CRITICAL ASSERTION: Tagged nodes should NOT have expiry
assert.False(t, nodeAfter.Expiry().Valid(),
"Tagged node should have expiry cleared to nil")
}
// TestExpiryDuringTaggedToPersonalConversion tests that when a tagged node
// is converted to personal via reauth with empty RequestTags, expiry is set
// from the client request.
// BUG #3048: Previously expiry was NOT set because expiry handling ran
// BEFORE processReauthTags (node was still tagged at check time).
func TestExpiryDuringTaggedToPersonalConversion(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("expiry-test-user2")
// Update policy to allow user to own tags
err := app.state.UpdatePolicyManagerUsersForTest()
require.NoError(t, err)
policy := `{
"tagOwners": {
"tag:server": ["expiry-test-user2@"]
},
"acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
}`
_, err = app.state.SetPolicy([]byte(policy))
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 1: Create tagged node (expiry should be nil)
registrationID1 := types.MustAuthID()
regEntry1 := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey1.Public(),
Hostname: "tagged-to-personal",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-to-personal",
RequestTags: []string{"tag:server"}, // Tagged node
},
})
app.state.SetAuthCacheEntry(registrationID1, regEntry1)
node, _, err := app.state.HandleNodeFromAuthPath(
registrationID1, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.True(t, node.IsTagged(), "Node should be tagged initially")
require.False(t, node.Expiry().Valid(), "Tagged node should have nil expiry")
// Step 2: Re-auth with empty tags (Tagged → Personal conversion)
nodeKey2 := key.NewNode()
clientExpiry := time.Now().Add(48 * time.Hour)
registrationID2 := types.MustAuthID()
regEntry2 := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey2.Public(),
Hostname: "tagged-to-personal",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-to-personal",
RequestTags: []string{}, // Empty tags - convert to user-owned
},
Expiry: &clientExpiry, // Client requests expiry
})
app.state.SetAuthCacheEntry(registrationID2, regEntry2)
nodeAfter, _, err := app.state.HandleNodeFromAuthPath(
registrationID2, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.False(t, nodeAfter.IsTagged(), "Node should be user-owned after conversion")
// CRITICAL ASSERTION: User-owned nodes should have expiry from client
assert.True(t, nodeAfter.Expiry().Valid(),
"User-owned node should have expiry set")
assert.WithinDuration(t, clientExpiry, nodeAfter.Expiry().Get(), 5*time.Second,
"Expiry should match client request")
}
// TestReAuthWithDifferentMachineKey tests the edge case where a node attempts
// to re-authenticate with the same NodeKey but a DIFFERENT MachineKey.
// This scenario should be handled gracefully (currently creates a new node).
func TestReAuthWithDifferentMachineKey(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server"}
// Create a reusable tagged PreAuthKey
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
// Initial registration
machineKey1 := key.NewMachine()
nodeKey := key.NewNode() // Same NodeKey for both attempts
regReq1 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public())
require.NoError(t, err)
require.True(t, resp1.MachineAuthorized)
// Verify initial node
node1, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, node1.IsTagged())
// Re-authenticate with DIFFERENT MachineKey but SAME NodeKey
machineKey2 := key.NewMachine() // Different machine key
regReq2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(), // Same NodeKey
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey2.Public())
require.NoError(t, err)
require.True(t, resp2.MachineAuthorized)
// Verify the node still exists and has tags
// Note: Depending on implementation, this might be the same node or a new node
node2, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, node2.IsTagged())
assert.ElementsMatch(t, tags, node2.Tags().AsSlice())
}
================================================
FILE: hscontrol/auth_test.go
================================================
package hscontrol
import (
"context"
"errors"
"fmt"
"net/url"
"strings"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
// Interactive step type constants.
const (
stepTypeInitialRequest = "initial_request"
stepTypeAuthCompletion = "auth_completion"
stepTypeFollowupRequest = "followup_request"
)
var errNodeNotFoundAfterSetup = errors.New("node not found after setup")
// interactiveStep defines a step in the interactive authentication workflow.
type interactiveStep struct {
stepType string // stepTypeInitialRequest, stepTypeAuthCompletion, or stepTypeFollowupRequest
expectAuthURL bool
expectCacheEntry bool
callAuthPath bool // Real call to HandleNodeFromAuthPath, not mocked
}
//nolint:gocyclo // comprehensive test function with many scenarios
func TestAuthenticationFlows(t *testing.T) {
// Shared test keys for consistent behavior across test cases
machineKey1 := key.NewMachine()
machineKey2 := key.NewMachine()
nodeKey1 := key.NewNode()
nodeKey2 := key.NewNode()
tests := []struct {
name string
setupFunc func(*testing.T, *Headscale) (string, error) // Returns dynamic values like auth keys
request func(dynamicValue string) tailcfg.RegisterRequest
machineKey func() key.MachinePublic
wantAuth bool
wantError bool
wantAuthURL bool
wantExpired bool
validate func(*testing.T, *tailcfg.RegisterResponse, *Headscale)
// Interactive workflow support
requiresInteractiveFlow bool
interactiveSteps []interactiveStep
validateRegistrationCache bool
expectedAuthURLPattern string
simulateAuthCompletion bool
validateCompleteResponse bool
}{
// === PRE-AUTH KEY SCENARIOS ===
// Tests authentication using pre-authorization keys for automated node registration.
// Pre-auth keys allow nodes to join without interactive authentication.
// TEST: Valid pre-auth key registers a new node
// WHAT: Tests successful node registration using a valid pre-auth key
// INPUT: Register request with valid pre-auth key, node key, and hostinfo
// EXPECTED: Node is authorized immediately, registered in database
// WHY: Pre-auth keys enable automated/headless node registration without user interaction
{
name: "preauth_key_valid_new_node",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper // not a test helper, inline closure
user := app.state.CreateUserForTest("preauth-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "preauth-node-1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper // not a test helper, inline closure
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
assert.NotEmpty(t, resp.User.DisplayName)
// Verify node was created in database
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "preauth-node-1", node.Hostname())
},
},
// TEST: Reusable pre-auth key can register multiple nodes
// WHAT: Tests that a reusable pre-auth key can be used for multiple node registrations
// INPUT: Same reusable pre-auth key used to register two different nodes
// EXPECTED: Both nodes successfully register with the same key
// WHY: Reusable keys allow multiple machines to join using one key (useful for fleet deployments)
{
name: "preauth_key_reusable_multiple_nodes",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("reusable-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Use the key for first node
firstReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reusable-node-1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(firstReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available in NodeStore
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reusable-node-2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey2.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify both nodes exist
node1, found1 := app.state.GetNodeByNodeKey(nodeKey1.Public())
node2, found2 := app.state.GetNodeByNodeKey(nodeKey2.Public())
assert.True(t, found1)
assert.True(t, found2)
assert.Equal(t, "reusable-node-1", node1.Hostname())
assert.Equal(t, "reusable-node-2", node2.Hostname())
},
},
// TEST: Single-use pre-auth key cannot be reused
// WHAT: Tests that a single-use pre-auth key fails on second use
// INPUT: Single-use key used for first node (succeeds), then attempted for second node
// EXPECTED: First node registers successfully, second node fails with error
// WHY: Single-use keys provide security by preventing key reuse after initial registration
{
name: "preauth_key_single_use_exhausted",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("single-use-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
if err != nil {
return "", err
}
// Use the key for first node (should work)
firstReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "single-use-node-1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(firstReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available in NodeStore
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "single-use-node-2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey2.Public,
wantError: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// First node should exist, second should not
_, found1 := app.state.GetNodeByNodeKey(nodeKey1.Public())
_, found2 := app.state.GetNodeByNodeKey(nodeKey2.Public())
assert.True(t, found1)
assert.False(t, found2)
},
},
// TEST: Invalid pre-auth key is rejected
// WHAT: Tests that an invalid/non-existent pre-auth key is rejected
// INPUT: Register request with invalid auth key string
// EXPECTED: Registration fails with error
// WHY: Invalid keys must be rejected to prevent unauthorized node registration
{
name: "preauth_key_invalid",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "invalid-key-12345", nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "invalid-key-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Ephemeral pre-auth key creates ephemeral node
// WHAT: Tests that a node registered with ephemeral key is marked as ephemeral
// INPUT: Pre-auth key with ephemeral=true, standard register request
// EXPECTED: Node registers and is marked as ephemeral (will be deleted on logout)
// WHY: Ephemeral nodes auto-cleanup when disconnected, useful for temporary/CI environments
{
name: "preauth_key_ephemeral_node",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("ephemeral-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, true, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "ephemeral-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify ephemeral node was created
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.True(t, node.AuthKey().Valid())
assert.True(t, node.AuthKey().Ephemeral())
},
},
// === INTERACTIVE REGISTRATION SCENARIOS ===
// Tests interactive authentication flow where user completes registration via web UI.
// Interactive flow: node requests registration → receives AuthURL → user authenticates → node gets registered
// TEST: Complete interactive workflow for new node
// WHAT: Tests full interactive registration flow from initial request to completion
// INPUT: Register request with no auth → user completes auth → followup request
// EXPECTED: Initial request returns AuthURL, after auth completion node is registered
// WHY: Interactive flow is the standard user-facing authentication method for new nodes
{
name: "full_interactive_workflow_new_node",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "interactive-flow-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false}, // cleaned up after completion
},
validateCompleteResponse: true,
expectedAuthURLPattern: "/register/",
},
// TEST: Interactive workflow with no Auth struct in request
// WHAT: Tests interactive flow when request has no Auth field (nil)
// INPUT: Register request with Auth field set to nil
// EXPECTED: Node receives AuthURL and can complete registration via interactive flow
// WHY: Validates handling of requests without Auth field, same as empty auth
{
name: "interactive_workflow_no_auth_struct",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
// No Auth field at all
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "interactive-no-auth-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false}, // cleaned up after completion
},
validateCompleteResponse: true,
expectedAuthURLPattern: "/register/",
},
// === EXISTING NODE SCENARIOS ===
// Tests behavior when existing registered nodes send requests (logout, re-auth, expiry, etc.)
// TEST: Existing node logout with past expiry
// WHAT: Tests node logout by sending request with expiry in the past
// INPUT: Previously registered node sends request with Auth=nil and past expiry time
// EXPECTED: Node expiry is updated, NodeKeyExpired=true, MachineAuthorized=true (for compatibility)
// WHY: Nodes signal logout by setting expiry to past time; system updates node state accordingly
{
name: "existing_node_logout",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("logout-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register the node first
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "logout-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
t.Logf("Setup registered node: %+v", resp)
// Wait for node to be available in NodeStore with debug info
var attemptCount int
require.EventuallyWithT(t, func(c *assert.CollectT) {
attemptCount++
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
if assert.True(c, found, "node should be available in NodeStore") {
t.Logf("Node found in NodeStore after %d attempts", attemptCount)
}
}, 1*time.Second, 100*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: nil,
NodeKey: nodeKey1.Public(),
Expiry: time.Now().Add(-1 * time.Hour), // Past expiry = logout
}
},
machineKey: machineKey1.Public,
wantAuth: true,
wantExpired: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.True(t, resp.NodeKeyExpired)
},
},
// TEST: Existing node with different machine key is rejected
// WHAT: Tests that requests for existing node with wrong machine key are rejected
// INPUT: Node key matches existing node, but machine key is different
// EXPECTED: Request fails with unauthorized error (machine key mismatch)
// WHY: Machine key must match to prevent node hijacking/impersonation
{
name: "existing_node_machine_key_mismatch",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("mismatch-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register with machineKey1
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "mismatch-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available in NodeStore
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: nil,
NodeKey: nodeKey1.Public(),
Expiry: time.Now().Add(-1 * time.Hour),
}
},
machineKey: machineKey2.Public, // Different machine key
wantError: true,
},
// TEST: Existing node cannot extend expiry without re-auth
// WHAT: Tests that nodes cannot extend their expiry time without authentication
// INPUT: Existing node sends request with Auth=nil and future expiry (extension attempt)
// EXPECTED: Request fails with error (extending key not allowed)
// WHY: Prevents nodes from extending their own lifetime; must re-authenticate
{
name: "existing_node_key_extension_not_allowed",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("extend-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register the node first
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "extend-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available in NodeStore
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: nil,
NodeKey: nodeKey1.Public(),
Expiry: time.Now().Add(48 * time.Hour), // Future time = extend attempt
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Expired node must re-authenticate
// WHAT: Tests that expired nodes receive NodeKeyExpired=true and must re-auth
// INPUT: Previously expired node sends request with no auth
// EXPECTED: Response has NodeKeyExpired=true, node must re-authenticate
// WHY: Expired nodes must go through authentication again for security
{
name: "existing_node_expired_forces_reauth",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("reauth-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register the node first
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available in NodeStore
var (
node types.NodeView
found bool
)
require.EventuallyWithT(t, func(c *assert.CollectT) {
node, found = app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
if !found {
return "", errNodeNotFoundAfterSetup
}
// Expire the node
expiredTime := time.Now().Add(-1 * time.Hour)
_, _, err = app.state.SetNodeExpiry(node.ID(), &expiredTime)
return "", err
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: nil,
NodeKey: nodeKey1.Public(),
Expiry: time.Now().Add(24 * time.Hour), // Future expiry
}
},
machineKey: machineKey1.Public,
wantExpired: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.NodeKeyExpired)
assert.False(t, resp.MachineAuthorized)
},
},
// TEST: Ephemeral node is deleted on logout
// WHAT: Tests that ephemeral nodes are deleted (not just expired) on logout
// INPUT: Ephemeral node sends logout request (past expiry)
// EXPECTED: Node is completely deleted from database, not just marked expired
// WHY: Ephemeral nodes should not persist after logout; auto-cleanup
{
name: "ephemeral_node_logout_deletion",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("ephemeral-logout-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, true, nil, nil)
if err != nil {
return "", err
}
// Register ephemeral node
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "ephemeral-logout-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available in NodeStore
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: nil,
NodeKey: nodeKey1.Public(),
Expiry: time.Now().Add(-1 * time.Hour), // Logout
}
},
machineKey: machineKey1.Public,
wantExpired: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.NodeKeyExpired)
assert.False(t, resp.MachineAuthorized)
// Ephemeral node should be deleted, not just marked expired
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.False(t, found, "ephemeral node should be deleted on logout")
},
},
// === FOLLOWUP REGISTRATION SCENARIOS ===
// Tests followup request handling after interactive registration is initiated.
// Followup requests are sent by nodes waiting for auth completion.
// TEST: Successful followup registration after auth completion
// WHAT: Tests node successfully completes registration via followup URL
// INPUT: Register request with followup URL after auth completion
// EXPECTED: Node receives successful registration response with user info
// WHY: Followup mechanism allows nodes to poll/wait for auth completion
{
name: "followup_registration_success",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
regID, err := types.NewAuthID()
if err != nil {
return "", err
}
nodeToRegister := types.NewRegisterAuthRequest(types.Node{
Hostname: "followup-success-node",
})
app.state.SetAuthCacheEntry(regID, nodeToRegister)
// Simulate successful registration
// handleRegister will receive the value when it starts waiting
go func() {
user := app.state.CreateUserForTest("followup-user")
node := app.state.CreateNodeForTest(user, "followup-success-node")
nodeToRegister.FinishAuth(types.AuthVerdict{Node: node.View()})
}()
return fmt.Sprintf("http://localhost:8080/register/%s", regID), nil
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
},
},
// TEST: Followup registration times out when auth not completed
// WHAT: Tests that followup request times out if auth is not completed in time
// INPUT: Followup request with short timeout, no auth completion
// EXPECTED: Request times out with unauthorized error
// WHY: Prevents indefinite waiting; nodes must retry if auth takes too long
{
name: "followup_registration_timeout",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
regID, err := types.NewAuthID()
if err != nil {
return "", err
}
nodeToRegister := types.NewRegisterAuthRequest(types.Node{
Hostname: "followup-timeout-node",
})
app.state.SetAuthCacheEntry(regID, nodeToRegister)
// Don't call FinishAuth - the followup request will time out
return fmt.Sprintf("http://localhost:8080/register/%s", regID), nil
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Invalid followup URL is rejected
// WHAT: Tests that malformed/invalid followup URLs are rejected
// INPUT: Register request with invalid URL in Followup field
// EXPECTED: Request fails with error (invalid followup URL)
// WHY: Validates URL format to prevent errors and potential exploits
{
name: "followup_invalid_url",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "invalid://url[malformed", nil
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Non-existent registration ID is rejected
// WHAT: Tests that followup with non-existent registration ID fails
// INPUT: Valid followup URL but registration ID not in cache
// EXPECTED: Request fails with unauthorized error
// WHY: Registration must exist in cache; prevents invalid/expired registrations
{
name: "followup_registration_not_found",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "http://localhost:8080/register/nonexistent-id", nil
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// === EDGE CASES ===
// Tests handling of malformed, invalid, or unusual input data
// TEST: Empty hostname is handled with defensive code
// WHAT: Tests that empty hostname in hostinfo generates a default hostname
// INPUT: Register request with hostinfo containing empty hostname string
// EXPECTED: Node registers successfully with generated hostname (node-MACHINEKEY)
// WHY: Defensive code prevents errors from missing hostnames; generates sensible default
{
name: "empty_hostname",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("empty-hostname-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "", // Empty hostname should be handled gracefully
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
// Node should be created with generated hostname
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.NotEmpty(t, node.Hostname())
},
},
// TEST: Nil hostinfo is handled with defensive code
// WHAT: Tests that nil hostinfo in register request is handled gracefully
// INPUT: Register request with Hostinfo field set to nil
// EXPECTED: Node registers successfully with generated hostname starting with "node-"
// WHY: Defensive code prevents nil pointer panics; creates valid default hostinfo
{
name: "nil_hostinfo",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("nil-hostinfo-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: nil, // Nil hostinfo should be handled with defensive code
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
// Node should be created with generated hostname from defensive code
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.NotEmpty(t, node.Hostname())
// Hostname should start with "node-" (generated from machine key)
assert.True(t, strings.HasPrefix(node.Hostname(), "node-"))
},
},
// === PRE-AUTH KEY WITH EXPIRY SCENARIOS ===
// Tests pre-auth key expiration handling
// TEST: Expired pre-auth key is rejected
// WHAT: Tests that a pre-auth key with past expiration date cannot be used
// INPUT: Pre-auth key with expiry 1 hour in the past
// EXPECTED: Registration fails with error
// WHY: Expired keys must be rejected to maintain security and key lifecycle management
{
name: "preauth_key_expired",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("expired-pak-user")
expiry := time.Now().Add(-1 * time.Hour) // Expired 1 hour ago
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, &expiry, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "expired-pak-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Pre-auth key with ACL tags applies tags to node
// WHAT: Tests that ACL tags from pre-auth key are applied to registered node
// INPUT: Pre-auth key with ACL tags ["tag:server", "tag:database"], register request
// EXPECTED: Node registers with specified ACL tags applied as ForcedTags
// WHY: Pre-auth keys can enforce ACL policies on nodes during registration
{
name: "preauth_key_with_acl_tags",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("tagged-pak-user")
tags := []string{"tag:server", "tag:database"}
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-pak-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify node was created with tags
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "tagged-pak-node", node.Hostname())
if node.AuthKey().Valid() {
assert.NotEmpty(t, node.AuthKey().Tags())
}
},
},
// === ADVERTISE-TAGS (RequestTags) SCENARIOS ===
// Tests for client-provided tags via --advertise-tags flag
// TEST: PreAuthKey registration rejects client-provided RequestTags
// WHAT: Tests that PreAuthKey registrations cannot use client-provided tags
// INPUT: PreAuthKey registration with RequestTags in Hostinfo
// EXPECTED: Registration fails with "requested tags [...] are invalid or not permitted" error
// WHY: PreAuthKey nodes get their tags from the key itself, not from client requests
{
name: "preauth_key_rejects_request_tags",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
t.Helper()
user := app.state.CreateUserForTest("pak-requesttags-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "pak-requesttags-node",
RequestTags: []string{"tag:unauthorized"},
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Tagged PreAuthKey rejects client-provided RequestTags
// WHAT: Tests that a tagged PreAuthKey registration fails when the client supplies its own RequestTags
// INPUT: Tagged PreAuthKey registration with different RequestTags
// EXPECTED: Registration fails because RequestTags are rejected for PreAuthKey
// WHY: Tags-as-identity: PreAuthKey tags are authoritative, client cannot override
{
name: "tagged_preauth_key_rejects_client_request_tags",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
t.Helper()
user := app.state.CreateUserForTest("tagged-pak-clienttags-user")
keyTags := []string{"tag:authorized"}
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, keyTags)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-pak-clienttags-node",
RequestTags: []string{"tag:client-wants-this"}, // Should be rejected
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantError: true, // RequestTags rejected for PreAuthKey registrations
},
// === RE-AUTHENTICATION SCENARIOS ===
// TEST: Existing node re-authenticates with new pre-auth key
// WHAT: Tests that existing node can re-authenticate using new pre-auth key
// INPUT: Existing node sends request with new valid pre-auth key
// EXPECTED: Node successfully re-authenticates, stays authorized
// WHY: Allows nodes to refresh authentication using pre-auth keys
{
name: "existing_node_reauth_with_new_authkey",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("reauth-user")
// First, register with initial auth key
pak1, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak1.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
// Create new auth key for re-authentication
pak2, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak2.Key, nil
},
request: func(newAuthKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: newAuthKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-node-updated",
},
Expiry: time.Now().Add(48 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify node was updated, not duplicated
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "reauth-node-updated", node.Hostname())
},
},
// TEST: Existing node re-authenticates via interactive flow
// WHAT: Tests that an existing registered node can start interactive re-authentication
// INPUT: Registered node sends a request with an empty auth key
// EXPECTED: Node receives AuthURL and can complete re-authentication
// WHY: Allows existing nodes to re-authenticate without pre-auth keys
{
name: "existing_node_reauth_interactive_flow",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("interactive-reauth-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register initially with auth key
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "interactive-reauth-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: "", // Empty auth key triggers interactive flow
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "interactive-reauth-node-updated",
},
Expiry: time.Now().Add(48 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuthURL: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.Contains(t, resp.AuthURL, "register/")
assert.False(t, resp.MachineAuthorized)
},
},
// === NODE KEY ROTATION SCENARIOS ===
// Tests node key rotation where node changes its node key while keeping same machine key
// TEST: Node key rotation with same machine key updates in place
// WHAT: Tests that registering with new node key and same machine key updates existing node
// INPUT: Register node with nodeKey1, then register again with nodeKey2 but same machineKey
// EXPECTED: Node is updated in place; nodeKey2 exists, nodeKey1 no longer exists
// WHY: Same machine key means same physical device; node key rotation updates, doesn't duplicate
{
name: "node_key_rotation_same_machine",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("rotation-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register with initial node key
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "rotation-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
// Create new auth key for rotation
pakRotation, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pakRotation.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey2.Public(), // Different node key, same machine
Hostinfo: &tailcfg.Hostinfo{
Hostname: "rotation-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// When same machine key is used, node is updated in place (not duplicated)
// The old nodeKey1 should no longer exist
_, found1 := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.False(t, found1, "old node key should not exist after rotation")
// The new nodeKey2 should exist with the same machine key
node2, found2 := app.state.GetNodeByNodeKey(nodeKey2.Public())
assert.True(t, found2, "new node key should exist after rotation")
assert.Equal(t, machineKey1.Public(), node2.MachineKey(), "machine key should remain the same")
},
},
// === MALFORMED REQUEST SCENARIOS ===
// Tests handling of requests with malformed or unusual field values
// TEST: Zero-time expiry is handled correctly
// WHAT: Tests registration with expiry set to zero time value
// INPUT: Register request with Expiry set to time.Time{} (zero value)
// EXPECTED: Node registers successfully; zero time treated as no expiry
// WHY: Zero time is valid Go default; should be handled gracefully
{
name: "malformed_expiry_zero_time",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("zero-expiry-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "zero-expiry-node",
},
Expiry: time.Time{}, // Zero time
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
// Node should be created with default expiry handling
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "zero-expiry-node", node.Hostname())
},
},
// TEST: Malformed hostinfo with an over-long hostname is sanitized
// WHAT: Tests that an excessively long hostname with special characters is handled gracefully
// INPUT: Hostinfo with a ~100-character hostname (exceeds the 63-char DNS label limit) plus unusual OS and service fields
// EXPECTED: Node registers successfully with a sanitized, non-empty hostname
// WHY: Defensive code enforces DNS hostname limits (RFC 1123) instead of rejecting the request
{
name: "malformed_hostinfo_invalid_data",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("invalid-hostinfo-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node-with-very-long-hostname-that-might-exceed-normal-limits-and-contain-special-chars-!@#$%",
BackendLogID: "invalid-log-id",
OS: "unknown-os",
OSVersion: "999.999.999",
DeviceModel: "test-device-model",
// Note: RequestTags are not included for PreAuthKey registrations
// since tags come from the key itself, not client requests.
Services: []tailcfg.Service{{Proto: "tcp", Port: 65535}},
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
// Node should be created even with malformed hostinfo
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
// Hostname should be sanitized or handled gracefully
assert.NotEmpty(t, node.Hostname())
},
},
// === REGISTRATION CACHE EDGE CASES ===
// Tests edge cases in registration cache handling during interactive flow
// TEST: Followup registration with nil response (cache expired during auth)
// WHAT: Tests that followup request handles nil node response (cache expired/cleared)
// INPUT: Followup request where auth completion sends nil (cache was cleared)
// EXPECTED: Returns new AuthURL so client can retry authentication
// WHY: Nil response means cache expired - give client new AuthURL instead of error
{
name: "followup_registration_node_nil_response",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
regID, err := types.NewAuthID()
if err != nil {
return "", err
}
nodeToRegister := types.NewRegisterAuthRequest(types.Node{
Hostname: "nil-response-node",
})
app.state.SetAuthCacheEntry(regID, nodeToRegister)
// Simulate registration that returns empty NodeView (cache expired during auth)
go func() {
nodeToRegister.FinishAuth(types.AuthVerdict{Node: types.NodeView{}}) // Empty view indicates cache expiry
}()
return fmt.Sprintf("http://localhost:8080/register/%s", regID), nil
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "nil-response-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: false, // Should not be authorized yet - needs to use new AuthURL
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Should get a new AuthURL, not an error
assert.NotEmpty(t, resp.AuthURL, "should receive new AuthURL when cache returns nil")
assert.Contains(t, resp.AuthURL, "/register/", "AuthURL should contain registration path")
assert.False(t, resp.MachineAuthorized, "machine should not be authorized yet")
},
},
// TEST: Malformed followup path is rejected
// WHAT: Tests that followup URL with malformed path is rejected
// INPUT: Followup URL with path that doesn't match expected format
// EXPECTED: Request fails with error (invalid followup URL)
// WHY: Path validation prevents processing of corrupted/invalid URLs
{
name: "followup_registration_malformed_path",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "http://localhost:8080/register/", nil // Missing registration ID
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// TEST: Wrong followup path format is rejected
// WHAT: Tests that followup URL with incorrect path structure fails
// INPUT: Valid URL but path doesn't start with "/register/"
// EXPECTED: Request fails with error (invalid path format)
// WHY: Strict path validation ensures only valid registration URLs accepted
{
name: "followup_registration_wrong_path_format",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "http://localhost:8080/wrong/path/format", nil
},
request: func(followupURL string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: followupURL,
NodeKey: nodeKey1.Public(),
}
},
machineKey: machineKey1.Public,
wantError: true,
},
// === AUTH PROVIDER EDGE CASES ===
// TEST: Interactive workflow preserves custom hostinfo
// WHAT: Tests that custom hostinfo fields are preserved through interactive flow
// INPUT: Interactive registration with detailed hostinfo (OS, version, model)
// EXPECTED: Node registers with all hostinfo fields preserved
// WHY: Ensures interactive flow doesn't lose custom hostinfo data
// NOTE: RequestTags are NOT tested here because tag authorization via
// advertise-tags requires the user to have existing nodes (for IP-based
// ownership verification). New users registering their first node cannot
// claim tags via RequestTags - they must use a tagged PreAuthKey instead.
{
name: "interactive_workflow_with_custom_hostinfo",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "custom-interactive-node",
OS: "linux",
OSVersion: "20.04",
DeviceModel: "server",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false}, // cleaned up after completion
},
validateCompleteResponse: true,
expectedAuthURLPattern: "/register/",
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Verify custom hostinfo was preserved through interactive workflow
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found, "node should be found after interactive registration")
if found {
assert.Equal(t, "custom-interactive-node", node.Hostname())
assert.Equal(t, "linux", node.Hostinfo().OS())
assert.Equal(t, "20.04", node.Hostinfo().OSVersion())
assert.Equal(t, "server", node.Hostinfo().DeviceModel())
}
},
},
// === PRE-AUTH KEY USAGE TRACKING ===
// Tests accurate tracking of pre-auth key usage counts
// TEST: Pre-auth key usage is tracked correctly
// WHAT: Tests that using a pre-auth key records the key against the registered node
// INPUT: Single-use pre-auth key used to register one node
// EXPECTED: Node registers successfully and is linked to the now-consumed key
// WHY: Usage tracking enables monitoring and auditing of pre-auth key usage
{
name: "preauth_key_usage_count_tracking",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("usage-count-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) // Single use
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "usage-count-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify auth key usage was tracked
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "usage-count-node", node.Hostname())
// Key should now be used up (single use)
if node.AuthKey().Valid() {
assert.False(t, node.AuthKey().Reusable())
}
},
},
// === REGISTRATION ID GENERATION AND ADVANCED EDGE CASES ===
// TEST: Interactive workflow generates valid registration IDs
// WHAT: Tests that interactive flow generates unique, valid registration IDs
// INPUT: Interactive registration request
// EXPECTED: AuthURL contains valid registration ID that can be extracted
// WHY: Registration IDs must be unique and valid for cache lookup
{
name: "interactive_workflow_registration_id_generation",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "registration-id-test-node",
OS: "test-os",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false},
},
validateCompleteResponse: true,
expectedAuthURLPattern: "/register/",
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Verify registration ID was properly generated and used
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found, "node should be registered after interactive workflow")
if found {
assert.Equal(t, "registration-id-test-node", node.Hostname())
assert.Equal(t, "test-os", node.Hostinfo().OS())
}
},
},
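// TEST: Registration with the same node key resolves to a single node
// WHAT: Tests that registering nodeKey1 with a reusable pre-auth key yields one registered node
// INPUT: Register request with a valid reusable pre-auth key and nodeKey1
// EXPECTED: Node is authorized and retrievable by its node key
// WHY: The same node key must map to a single node, even when registration attempts overlap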
{
name: "concurrent_registration_same_node_key",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("concurrent-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "concurrent-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify node was registered
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "concurrent-node", node.Hostname())
},
},
// TEST: Auth key expiry vs request expiry handling
// WHAT: Tests that pre-auth key expiry is independent of request expiry
// INPUT: Pre-auth key expiring in 48 hours, request expiry of 12 hours
// EXPECTED: Node registers successfully; the shorter request expiry applies to the node
// WHY: Node lifetime follows the request expiry, not the pre-auth key expiry
{
name: "auth_key_with_future_expiry_past_request_expiry",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("future-expiry-user")
// Auth key expires in the future
expiry := time.Now().Add(48 * time.Hour)
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, &expiry, nil)
if err != nil {
return "", err
}
return pak.Key, nil
},
request: func(authKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "future-expiry-node",
},
// Request expires before auth key
Expiry: time.Now().Add(12 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Node should be created with request expiry (shorter than auth key expiry)
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.Equal(t, "future-expiry-node", node.Hostname())
},
},
// TEST: Re-authentication with different user's auth key
// WHAT: Tests that re-authenticating with another user's auth key creates a new node
// INPUT: Node registered with user1's auth key, re-authenticates with user2's auth key
// EXPECTED: A new node is created for user2; user1's original node remains
// WHY: The same machine can hold separate node identities per user; re-auth does not transfer the node
{
name: "reauth_existing_node_different_user_auth_key",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
// Create two users
user1 := app.state.CreateUserForTest("user1-context")
user2 := app.state.CreateUserForTest("user2-context")
// Register node with user1's auth key
pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak1.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "context-node-user1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
// Return user2's auth key for re-authentication
pak2, err := app.state.CreatePreAuthKey(user2.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
return pak2.Key, nil
},
request: func(user2AuthKey string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: user2AuthKey,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "context-node-user2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.False(t, resp.NodeKeyExpired)
// Verify NEW node was created for user2
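// Note: the fresh test database assigns user IDs sequentially, so user1 == types.UserID(1) and user2 == types.UserID(2).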
node2, found := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(2))
require.True(t, found, "new node should exist for user2")
assert.Equal(t, uint(2), node2.UserID().Get(), "new node should belong to user2")
user := node2.User()
assert.Equal(t, "user2-context", user.Name(), "new node should show user2 username")
// Verify original node still exists for user1
node1, found := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(1))
require.True(t, found, "original node should still exist for user1")
assert.Equal(t, uint(1), node1.UserID().Get(), "original node should still belong to user1")
// Verify they are different nodes (different IDs)
assert.NotEqual(t, node1.ID(), node2.ID(), "should be different node IDs")
},
},
// TEST: Re-authentication with different user via interactive flow creates new node
// WHAT: Tests new node creation when re-authenticating interactively with a different user
// INPUT: Node registered with user1, re-authenticates interactively as user2 (same machine key, same node key)
// EXPECTED: New node is created for user2, user1's original node remains (no transfer)
// WHY: Same physical machine can have separate node identities per user
{
name: "interactive_reauth_existing_node_different_user_creates_new_node",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
// Create user1 and register a node with auth key
user1 := app.state.CreateUserForTest("interactive-user-1")
pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register node with user1's auth key first
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak1.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "transfer-node-user1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegister(context.Background(), initialReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{}, // Empty auth triggers interactive flow
NodeKey: nodeKey1.Public(), // Same node key as original registration
Hostinfo: &tailcfg.Hostinfo{
Hostname: "transfer-node-user2", // Different hostname
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public, // Same machine key
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false},
},
validateCompleteResponse: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// User1's original node should STILL exist (not transferred)
node1, found1 := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(1))
require.True(t, found1, "user1's original node should still exist")
assert.Equal(t, uint(1), node1.UserID().Get(), "user1's node should still belong to user1")
assert.Equal(t, nodeKey1.Public(), node1.NodeKey(), "user1's node should have original node key")
// User2 should have a NEW node created
node2, found2 := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(2))
require.True(t, found2, "user2 should have new node created")
assert.Equal(t, uint(2), node2.UserID().Get(), "user2's node should belong to user2")
user := node2.User()
assert.Equal(t, "interactive-test-user", user.Name(), "user2's node should show correct username")
// Both nodes should have the same machine key but different IDs
assert.NotEqual(t, node1.ID(), node2.ID(), "should be different nodes (different IDs)")
assert.Equal(t, machineKey1.Public(), node2.MachineKey(), "user2's node should have same machine key")
},
},
// TEST: Followup request after registration cache expiry
// WHAT: Tests that expired followup requests get a new AuthURL instead of error
// INPUT: Followup request for registration ID that has expired/been evicted from cache
// EXPECTED: Returns new AuthURL (not error) so client can retry authentication
// WHY: Validates the new reqToNewRegisterResponse functionality - prevents the client from getting stuck
{
name: "followup_request_after_cache_expiry",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
// Generate a registration ID that doesn't exist in cache
// This simulates an expired/missing cache entry
regID, err := types.NewAuthID()
if err != nil {
return "", err
}
// Don't add it to cache - it's already expired/missing
return regID.String(), nil
},
request: func(regID string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Followup: "http://localhost:8080/register/" + regID,
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "expired-cache-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
wantAuth: false, // Should not be authorized yet - needs to use new AuthURL
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Should get a new AuthURL, not an error
assert.NotEmpty(t, resp.AuthURL, "should receive new AuthURL when registration expired")
assert.Contains(t, resp.AuthURL, "/register/", "AuthURL should contain registration path")
assert.False(t, resp.MachineAuthorized, "machine should not be authorized yet")
// Verify the response contains a valid registration URL
authURL, err := url.Parse(resp.AuthURL)
assert.NoError(t, err, "AuthURL should be a valid URL") //nolint:testifylint // inside closure, uses assert pattern
assert.True(t, strings.HasPrefix(authURL.Path, "/register/"), "AuthURL path should start with /register/")
// Extract and validate the new registration ID exists in cache
newRegIDStr := strings.TrimPrefix(authURL.Path, "/register/")
newRegID, err := types.AuthIDFromString(newRegIDStr)
assert.NoError(t, err, "should be able to parse new registration ID") //nolint:testifylint // inside closure
// Verify new registration entry exists in cache
_, found := app.state.GetAuthCacheEntry(newRegID)
assert.True(t, found, "new registration should exist in cache")
},
},
// TEST: Logout with expiry exactly at current time
// WHAT: Tests logout when expiry is set to exact current time (boundary case)
// INPUT: Existing node sends request with expiry=time.Now() (not past, not future)
// EXPECTED: Node is logged out (treated as expired)
// WHY: Boundary case: an expiry equal to the current time should be treated as expired
{
name: "logout_with_exactly_now_expiry",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
user := app.state.CreateUserForTest("exact-now-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register the node first
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "exact-now-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(regReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: nil,
NodeKey: nodeKey1.Public(),
Expiry: time.Now(), // Exactly now (edge case between past and future)
}
},
machineKey: machineKey1.Public,
wantAuth: true,
wantExpired: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
assert.True(t, resp.MachineAuthorized)
assert.True(t, resp.NodeKeyExpired)
// Node should be marked as expired but still exist
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found)
assert.True(t, node.IsExpired())
},
},
// TEST: Interactive workflow timeout cleans up cache
// WHAT: Tests that timed-out interactive registrations clean up cache entries
// INPUT: Interactive registration that times out without completion
// EXPECTED: Cache entry should be cleaned up (behavior depends on implementation)
// WHY: Prevents cache bloat from abandoned registrations
{
name: "interactive_workflow_timeout_cleanup",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "interactive-timeout-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey2.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
// NOTE: No auth_completion step - simulates timeout scenario
},
validateRegistrationCache: true, // should be cleaned up eventually
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Verify AuthURL was generated but registration not completed
assert.Contains(t, resp.AuthURL, "/register/")
assert.False(t, resp.MachineAuthorized)
},
},
// === COMPREHENSIVE INTERACTIVE WORKFLOW EDGE CASES ===
// TEST: Interactive workflow with existing node from different user creates new node
// WHAT: Tests new node creation when re-authenticating interactively with different user
// INPUT: Node already registered with user1, interactive auth with user2 (same machine key, different node key)
// EXPECTED: New node is created for user2, user1's original node remains (no transfer)
// WHY: Same physical machine can have separate node identities per user
{
name: "interactive_workflow_with_existing_node_different_user_creates_new_node",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
// First create a node under user1
user1 := app.state.CreateUserForTest("existing-user-1")
pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
// Register the node with user1 first
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak1.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "existing-node-user1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegister(context.Background(), initialReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{}, // Empty auth triggers interactive flow
NodeKey: nodeKey2.Public(), // Different node key for different user
Hostinfo: &tailcfg.Hostinfo{
Hostname: "existing-node-user2", // Different hostname
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false},
},
validateCompleteResponse: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// User1's original node with nodeKey1 should STILL exist
node1, found1 := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found1, "user1's original node with nodeKey1 should still exist")
assert.Equal(t, uint(1), node1.UserID().Get(), "user1's node should still belong to user1")
assert.Equal(t, uint64(1), node1.ID().Uint64(), "user1's node should be ID=1")
// User2 should have a NEW node with nodeKey2
node2, found2 := app.state.GetNodeByNodeKey(nodeKey2.Public())
require.True(t, found2, "user2 should have new node with nodeKey2")
assert.Equal(t, "existing-node-user2", node2.Hostname(), "hostname should be from new registration")
user := node2.User()
assert.Equal(t, "interactive-test-user", user.Name(), "user2's node should belong to user2")
assert.Equal(t, machineKey1.Public(), node2.MachineKey(), "machine key should be the same")
// Verify it's a NEW node, not transferred
assert.NotEqual(t, uint64(1), node2.ID().Uint64(), "should be a NEW node (different ID)")
},
},
// TEST: Interactive workflow with malformed followup URL
// WHAT: Tests that malformed followup URLs in interactive flow are rejected
// INPUT: Interactive registration with invalid followup URL format
// EXPECTED: Request fails with error (invalid URL)
// WHY: Followup URLs must be validated so malformed or forged URLs fail cleanly
{
name: "interactive_workflow_malformed_followup_url",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "malformed-followup-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
},
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Test malformed followup URLs after getting initial AuthURL
authURL := resp.AuthURL
assert.Contains(t, authURL, "/register/")
// Test various malformed followup URLs - use completely invalid IDs to avoid blocking
malformedURLs := []string{
"invalid-url",
"/register/",
"/register/invalid-id-that-does-not-exist",
"/register/00000000-0000-0000-0000-000000000000",
"http://malicious-site.com/register/invalid-id",
}
for _, malformedURL := range malformedURLs {
followupReq := tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Followup: malformedURL,
Hostinfo: &tailcfg.Hostinfo{
Hostname: "malformed-followup-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
// These should all fail gracefully
_, err := app.handleRegister(context.Background(), followupReq, machineKey1.Public())
assert.Error(t, err, "malformed followup URL should be rejected: %s", malformedURL)
}
},
},
// TEST: Concurrent interactive workflow registrations
// WHAT: Tests multiple simultaneous interactive registrations
// INPUT: Two nodes initiate interactive registration concurrently
// EXPECTED: Both registrations succeed independently
// WHY: System should handle concurrent interactive flows without conflicts
{
name: "interactive_workflow_concurrent_registrations",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "concurrent-registration-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// This test validates concurrent interactive registration attempts
assert.Contains(t, resp.AuthURL, "/register/")
// Start multiple concurrent followup requests
authURL := resp.AuthURL
numConcurrent := 3
results := make(chan error, numConcurrent)
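// Buffered to numConcurrent so each goroutine can send its result even if
// this test stops receiving after the timeout below (avoids goroutine leaks).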
for i := range numConcurrent {
go func(index int) {
followupReq := tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Followup: authURL,
Hostinfo: &tailcfg.Hostinfo{
Hostname: fmt.Sprintf("concurrent-node-%d", index),
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err := app.handleRegister(context.Background(), followupReq, machineKey1.Public())
results <- err
}(i)
}
// Complete the authentication to signal the waiting goroutines
// The goroutines will receive from the buffered channel when ready
registrationID, err := extractRegistrationIDFromAuthURL(authURL)
require.NoError(t, err)
user := app.state.CreateUserForTest("concurrent-test-user")
_, _, err = app.state.HandleNodeFromAuthPath(
registrationID,
types.UserID(user.ID),
nil,
"concurrent-test-method",
)
require.NoError(t, err)
// Collect results - at least one should succeed
successCount := 0
for range numConcurrent {
select {
case err := <-results:
if err == nil {
successCount++
}
case <-time.After(2 * time.Second):
// Some may time out, which is expected
}
}
// At least one concurrent request should have succeeded
assert.GreaterOrEqual(t, successCount, 1, "at least one concurrent registration should succeed")
},
},
// TEST: Interactive workflow with node key rotation attempt
// WHAT: Tests interactive registration with different node key (appears as rotation)
// INPUT: Node registered with nodeKey1, then interactive registration with nodeKey2
// EXPECTED: Creates new node for different user (not true rotation)
// WHY: Interactive flow creates new nodes with new users; doesn't rotate existing nodes
{
name: "interactive_workflow_node_key_rotation",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
// Register initial node
user := app.state.CreateUserForTest("rotation-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
if err != nil {
return "", err
}
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "rotation-node-initial",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegister(context.Background(), initialReq, machineKey1.Public())
if err != nil {
return "", err
}
// Wait for node to be available
require.EventuallyWithT(t, func(c *assert.CollectT) {
_, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(c, found, "node should be available in NodeStore")
}, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore")
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey2.Public(), // Different node key (rotation scenario)
OldNodeKey: nodeKey1.Public(), // Previous node key
Hostinfo: &tailcfg.Hostinfo{
Hostname: "rotation-node-updated",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false},
},
validateCompleteResponse: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// User1's original node with nodeKey1 should STILL exist
oldNode, foundOld := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, foundOld, "user1's original node with nodeKey1 should still exist")
assert.Equal(t, uint(1), oldNode.UserID().Get(), "user1's node should still belong to user1")
assert.Equal(t, uint64(1), oldNode.ID().Uint64(), "user1's node should be ID=1")
// User2 should have a NEW node with nodeKey2
newNode, found := app.state.GetNodeByNodeKey(nodeKey2.Public())
require.True(t, found, "user2 should have new node with nodeKey2")
assert.Equal(t, "rotation-node-updated", newNode.Hostname())
assert.Equal(t, machineKey1.Public(), newNode.MachineKey())
user := newNode.User()
assert.Equal(t, "interactive-test-user", user.Name(), "user2's node should belong to user2")
// Verify it's a NEW node, not transferred
assert.NotEqual(t, uint64(1), newNode.ID().Uint64(), "should be a NEW node (different ID)")
},
},
// TEST: Interactive workflow with nil hostinfo
// WHAT: Tests interactive registration when request has nil hostinfo
// INPUT: Interactive registration request with Hostinfo=nil
// EXPECTED: Node registers successfully with generated default hostname
// WHY: Defensive code handles nil hostinfo in interactive flow
{
name: "interactive_workflow_with_nil_hostinfo",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: nil, // Nil hostinfo should be handled gracefully
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
requiresInteractiveFlow: true,
interactiveSteps: []interactiveStep{
{stepType: stepTypeInitialRequest, expectAuthURL: true, expectCacheEntry: true},
{stepType: stepTypeAuthCompletion, callAuthPath: true, expectCacheEntry: false},
},
validateCompleteResponse: true,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Should handle nil hostinfo gracefully
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found, "node should be registered despite nil hostinfo")
if found {
// Should have some default hostname or handle nil gracefully
hostname := node.Hostname()
assert.NotEmpty(t, hostname, "should have some hostname even with nil hostinfo")
}
},
},
// TEST: Registration cache handling on authentication error
// WHAT: Tests cache entry state when auth completion fails
// INPUT: Interactive registration that fails during auth completion (invalid user ID)
// EXPECTED: Cache entry is kept after the error so the client can retry
// WHY: A failed completion should not discard the pending registration
{
name: "interactive_workflow_registration_cache_cleanup_on_error",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "cache-cleanup-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Get initial AuthURL and extract registration ID
authURL := resp.AuthURL
assert.Contains(t, authURL, "/register/")
registrationID, err := extractRegistrationIDFromAuthURL(authURL)
require.NoError(t, err)
// Verify cache entry exists
cacheEntry, found := app.state.GetAuthCacheEntry(registrationID)
assert.True(t, found, "registration cache entry should exist initially")
assert.NotNil(t, cacheEntry)
// Try to complete authentication with invalid user ID (should cause error)
invalidUserID := types.UserID(99999) // Non-existent user
_, _, err = app.state.HandleNodeFromAuthPath(
registrationID,
invalidUserID,
nil,
"error-test-method",
)
assert.Error(t, err, "should fail with invalid user ID") //nolint:testifylint // inside closure, uses assert pattern
// Cache entry should still exist after auth error (for retry scenarios)
_, stillFound := app.state.GetAuthCacheEntry(registrationID)
assert.True(t, stillFound, "registration cache entry should still exist after auth error for potential retry")
},
},
// TEST: Interactive workflow with multiple registration attempts for same node
// WHAT: Tests that multiple interactive registrations can be created for same node
// INPUT: Start two interactive registrations, verify both cache entries exist
// EXPECTED: Both registrations get different IDs and can coexist
// WHY: Validates that multiple pending registrations don't interfere with each other
{
name: "interactive_workflow_multiple_steps_same_node",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "multi-step-node",
OS: "linux",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
// Test multiple interactive registration attempts for the same node can coexist
authURL1 := resp.AuthURL
assert.Contains(t, authURL1, "/register/")
// Start a second interactive registration for the same node
secondReq := tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "multi-step-node-updated",
OS: "linux-updated",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegister(context.Background(), secondReq, machineKey1.Public())
require.NoError(t, err)
authURL2 := resp2.AuthURL
assert.Contains(t, authURL2, "/register/")
// Both should have different registration IDs
regID1, err1 := extractRegistrationIDFromAuthURL(authURL1)
regID2, err2 := extractRegistrationIDFromAuthURL(authURL2)
require.NoError(t, err1)
require.NoError(t, err2)
assert.NotEqual(t, regID1, regID2, "different registration attempts should have different IDs")
// Both cache entries should exist simultaneously
_, found1 := app.state.GetAuthCacheEntry(regID1)
_, found2 := app.state.GetAuthCacheEntry(regID2)
assert.True(t, found1, "first registration cache entry should exist")
assert.True(t, found2, "second registration cache entry should exist")
// This validates that multiple pending registrations can coexist
// without interfering with each other
},
},
// TEST: Complete one of multiple pending registrations
// WHAT: Tests completing the second of two pending registrations for same node
// INPUT: Create two pending registrations, complete the second one
// EXPECTED: Second registration completes successfully, node is created
// WHY: Validates that you can complete any pending registration, not just the first
{
name: "interactive_workflow_complete_second_of_multiple_pending",
setupFunc: func(t *testing.T, app *Headscale) (string, error) { //nolint:thelper
return "", nil
},
request: func(_ string) tailcfg.RegisterRequest {
return tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "pending-node-1",
},
Expiry: time.Now().Add(24 * time.Hour),
}
},
machineKey: machineKey1.Public,
validate: func(t *testing.T, resp *tailcfg.RegisterResponse, app *Headscale) { //nolint:thelper
authURL1 := resp.AuthURL
regID1, err := extractRegistrationIDFromAuthURL(authURL1)
require.NoError(t, err)
// Start a second interactive registration for the same node
secondReq := tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "pending-node-2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegister(context.Background(), secondReq, machineKey1.Public())
require.NoError(t, err)
authURL2 := resp2.AuthURL
regID2, err := extractRegistrationIDFromAuthURL(authURL2)
require.NoError(t, err)
// Verify both exist
_, found1 := app.state.GetAuthCacheEntry(regID1)
_, found2 := app.state.GetAuthCacheEntry(regID2)
assert.True(t, found1, "first cache entry should exist")
assert.True(t, found2, "second cache entry should exist")
// Complete the SECOND registration (not the first)
user := app.state.CreateUserForTest("second-registration-user")
// Start followup request in goroutine (it will wait for auth completion)
responseChan := make(chan *tailcfg.RegisterResponse, 1)
errorChan := make(chan error, 1)
followupReq := tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Followup: authURL2,
Hostinfo: &tailcfg.Hostinfo{
Hostname: "pending-node-2",
},
Expiry: time.Now().Add(24 * time.Hour),
}
go func() {
resp, err := app.handleRegister(context.Background(), followupReq, machineKey1.Public())
if err != nil {
errorChan <- err
return
}
responseChan <- resp
}()
// Complete authentication for second registration
// The goroutine will receive the node from the buffered channel
_, _, err = app.state.HandleNodeFromAuthPath(
regID2,
types.UserID(user.ID),
nil,
"second-registration-method",
)
require.NoError(t, err)
// Wait for followup to complete
select {
case err := <-errorChan:
t.Fatalf("followup request failed: %v", err)
case finalResp := <-responseChan:
require.NotNil(t, finalResp)
assert.True(t, finalResp.MachineAuthorized, "machine should be authorized")
case <-time.After(2 * time.Second):
t.Fatal("followup request timed out")
}
// Verify the node was created with the second registration's data
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
assert.True(t, found, "node should be registered")
if found {
assert.Equal(t, "pending-node-2", node.Hostname())
assert.Equal(t, "second-registration-user", node.User().Name())
}
// First registration should still be in cache (not completed)
_, stillFound := app.state.GetAuthCacheEntry(regID1)
assert.True(t, stillFound, "first registration should still be pending")
},
},
}
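// Drive the table: each case gets a fresh Headscale app from createTestApp;
// interactive cases are delegated to runInteractiveWorkflowTest, all others
// issue a single handleRegister call and check the expected flags.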
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create test app
app := createTestApp(t)
// Run setup function
dynamicValue, err := tt.setupFunc(t, app)
require.NoError(t, err, "setup should not fail")
// Check if this test requires interactive workflow
if tt.requiresInteractiveFlow {
runInteractiveWorkflowTest(t, tt, app, dynamicValue)
return
}
// Build request
req := tt.request(dynamicValue)
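// tt.machineKey is a method value (e.g. machineKey1.Public); calling it
// yields the machine public key for this case.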
machineKey := tt.machineKey()
// Set up context with timeout for followup tests
ctx := context.Background()
if req.Followup != "" {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
}
// Debug: check node availability before test execution
if req.Auth == nil {
if node, found := app.state.GetNodeByNodeKey(req.NodeKey); found {
t.Logf("Node found before handleRegister: hostname=%s, expired=%t", node.Hostname(), node.IsExpired())
} else {
t.Logf("Node NOT found before handleRegister for key %s", req.NodeKey.ShortString())
}
}
// Execute the test
resp, err := app.handleRegister(ctx, req, machineKey)
// Validate error expectations
if tt.wantError {
assert.Error(t, err, "expected error but got none")
return
}
require.NoError(t, err, "unexpected error: %v", err)
require.NotNil(t, resp, "response should not be nil")
// Validate basic response properties
if tt.wantAuth {
assert.True(t, resp.MachineAuthorized, "machine should be authorized")
} else {
assert.False(t, resp.MachineAuthorized, "machine should not be authorized")
}
if tt.wantAuthURL {
assert.NotEmpty(t, resp.AuthURL, "should have AuthURL")
assert.Contains(t, resp.AuthURL, "register/", "AuthURL should contain registration path")
}
if tt.wantExpired {
assert.True(t, resp.NodeKeyExpired, "node key should be expired")
} else {
assert.False(t, resp.NodeKeyExpired, "node key should not be expired")
}
// Run custom validation if provided
if tt.validate != nil {
tt.validate(t, resp, app)
}
})
}
}
// runInteractiveWorkflowTest executes a multi-step interactive authentication workflow.
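// The tt parameter restates the anonymous struct type of the test table
// because Go only allows passing entries of an unnamed struct type to a
// helper when the literal types match exactly.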
func runInteractiveWorkflowTest(t *testing.T, tt struct {
name string
setupFunc func(*testing.T, *Headscale) (string, error)
request func(dynamicValue string) tailcfg.RegisterRequest
machineKey func() key.MachinePublic
wantAuth bool
wantError bool
wantAuthURL bool
wantExpired bool
validate func(*testing.T, *tailcfg.RegisterResponse, *Headscale)
requiresInteractiveFlow bool
interactiveSteps []interactiveStep
validateRegistrationCache bool
expectedAuthURLPattern string
simulateAuthCompletion bool
validateCompleteResponse bool
}, app *Headscale, dynamicValue string,
) {
t.Helper()
// Build initial request
req := tt.request(dynamicValue)
machineKey := tt.machineKey()
ctx := context.Background()
// Execute interactive workflow steps
var (
initialResp *tailcfg.RegisterResponse
authURL string
registrationID types.AuthID
finalResp *tailcfg.RegisterResponse
err error
)
// Execute the steps in the correct sequence for interactive workflow
for i, step := range tt.interactiveSteps {
t.Logf("Executing interactive step %d: %s", i+1, step.stepType)
switch step.stepType {
case stepTypeInitialRequest:
// Step 1: Initial request should get AuthURL back
initialResp, err = app.handleRegister(ctx, req, machineKey)
require.NoError(t, err, "initial request should not fail")
require.NotNil(t, initialResp, "initial response should not be nil")
if step.expectAuthURL {
require.NotEmpty(t, initialResp.AuthURL, "should have AuthURL")
require.Contains(t, initialResp.AuthURL, "/register/", "AuthURL should contain registration path")
authURL = initialResp.AuthURL
// Extract registration ID from AuthURL
registrationID, err = extractRegistrationIDFromAuthURL(authURL)
require.NoError(t, err, "should be able to extract registration ID from AuthURL")
}
if step.expectCacheEntry {
// Verify registration cache entry was created
cacheEntry, found := app.state.GetAuthCacheEntry(registrationID)
require.True(t, found, "registration cache entry should exist")
require.NotNil(t, cacheEntry, "cache entry should not be nil")
require.Equal(t, req.NodeKey, cacheEntry.Node().NodeKey(), "cache entry should have correct node key")
}
case stepTypeAuthCompletion:
// Step 2: Start followup request that will wait, then complete authentication
if step.callAuthPath {
require.NotEmpty(t, registrationID, "registration ID should be available from previous step")
// Prepare followup request
followupReq := tt.request(dynamicValue)
followupReq.Followup = authURL
// Start the followup request in a goroutine - it will wait for channel signal
responseChan := make(chan *tailcfg.RegisterResponse, 1)
errorChan := make(chan error, 1)
go func() {
resp, err := app.handleRegister(context.Background(), followupReq, machineKey)
if err != nil {
errorChan <- err
return
}
responseChan <- resp
}()
// Complete the authentication - the goroutine will receive from the buffered channel
user := app.state.CreateUserForTest("interactive-test-user")
_, _, err = app.state.HandleNodeFromAuthPath(
registrationID,
types.UserID(user.ID),
nil, // no custom expiry
"test-method",
)
require.NoError(t, err, "HandleNodeFromAuthPath should succeed")
// Wait for the followup request to complete
select {
case err := <-errorChan:
require.NoError(t, err, "followup request should not fail")
case finalResp = <-responseChan:
require.NotNil(t, finalResp, "final response should not be nil")
// Verify machine is now authorized
require.True(t, finalResp.MachineAuthorized, "machine should be authorized after followup")
case <-time.After(5 * time.Second):
t.Fatal("followup request timed out waiting for authentication completion")
}
}
case stepTypeFollowupRequest:
// This step is deprecated - followup is now handled within auth_completion step
t.Logf("followup_request step is deprecated - use expectCacheEntry in auth_completion instead")
default:
t.Fatalf("unknown interactive step type: %s", step.stepType)
}
// Check cache cleanup expectation for this step
if !step.expectCacheEntry && registrationID != "" {
// Verify cache entry was cleaned up
_, found := app.state.GetAuthCacheEntry(registrationID)
require.False(t, found, "registration cache entry should be cleaned up after step: %s", step.stepType)
}
}
// Validate final response if requested
if tt.validateCompleteResponse && finalResp != nil {
validateCompleteRegistrationResponse(t, finalResp, req)
}
// Run custom validation if provided
if tt.validate != nil {
responseToValidate := finalResp
if responseToValidate == nil {
responseToValidate = initialResp
}
tt.validate(t, responseToValidate, app)
}
}
// extractRegistrationIDFromAuthURL extracts the registration ID from an AuthURL.
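// For example, "http://localhost:8080/register/abc123" yields the AuthID
// parsed from "abc123"; URLs without a "/register/" segment return an error.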
func extractRegistrationIDFromAuthURL(authURL string) (types.AuthID, error) {
// AuthURL format: "http://localhost/register/abc123"
const registerPrefix = "/register/"
idx := strings.LastIndex(authURL, registerPrefix)
if idx == -1 {
return "", fmt.Errorf("invalid AuthURL format: %s", authURL) //nolint:err113
}
idStr := authURL[idx+len(registerPrefix):]
return types.AuthIDFromString(idStr)
}
// validateCompleteRegistrationResponse performs comprehensive validation of a registration response.
func validateCompleteRegistrationResponse(t *testing.T, resp *tailcfg.RegisterResponse, _ tailcfg.RegisterRequest) {
t.Helper()
// Basic response validation
require.NotNil(t, resp, "response should not be nil")
require.True(t, resp.MachineAuthorized, "machine should be authorized")
require.False(t, resp.NodeKeyExpired, "node key should not be expired")
require.NotEmpty(t, resp.User.DisplayName, "user should have display name")
// Additional validation can be added here as needed
// Note: NodeKey field may not be present in all response types
}
// Simple test to validate basic node creation and lookup.
func TestNodeStoreLookup(t *testing.T) {
app := createTestApp(t)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
user := app.state.CreateUserForTest("test-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
// Register a node
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.NotNil(t, resp)
require.True(t, resp.MachineAuthorized)
t.Logf("Registered node successfully: %+v", resp)
// Wait for node to be available in NodeStore
var node types.NodeView
require.EventuallyWithT(t, func(c *assert.CollectT) {
var found bool
node, found = app.state.GetNodeByNodeKey(nodeKey.Public())
assert.True(c, found, "Node should be found in NodeStore")
}, 1*time.Second, 100*time.Millisecond, "waiting for node to be available in NodeStore")
require.Equal(t, "test-node", node.Hostname())
t.Logf("Found node: hostname=%s, id=%d", node.Hostname(), node.ID().Uint64())
}
// TestPreAuthKeyLogoutAndReloginDifferentUser tests the scenario where:
// 1. Multiple nodes register with different users using pre-auth keys
// 2. All nodes log out
// 3. All nodes log back in using a different user's pre-auth key
// EXPECTED BEHAVIOR: Should create NEW nodes for the new user, leaving old nodes with the old user.
// This matches the integration test expectation and web flow behavior.
func TestPreAuthKeyLogoutAndReloginDifferentUser(t *testing.T) {
app := createTestApp(t)
// Create two users
user1 := app.state.CreateUserForTest("user1")
user2 := app.state.CreateUserForTest("user2")
// Create pre-auth keys for both users
pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil)
require.NoError(t, err)
pak2, err := app.state.CreatePreAuthKey(user2.TypedID(), true, false, nil, nil)
require.NoError(t, err)
// Create machine and node keys for 4 nodes (2 per user)
type nodeInfo struct {
machineKey key.MachinePrivate
nodeKey key.NodePrivate
hostname string
nodeID types.NodeID
}
nodes := []nodeInfo{
{machineKey: key.NewMachine(), nodeKey: key.NewNode(), hostname: "user1-node1"},
{machineKey: key.NewMachine(), nodeKey: key.NewNode(), hostname: "user1-node2"},
{machineKey: key.NewMachine(), nodeKey: key.NewNode(), hostname: "user2-node1"},
{machineKey: key.NewMachine(), nodeKey: key.NewNode(), hostname: "user2-node2"},
}
// Register nodes: first 2 to user1, last 2 to user2
for i, node := range nodes {
authKey := pak1.Key
if i >= 2 {
authKey = pak2.Key
}
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: authKey,
},
NodeKey: node.nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: node.hostname,
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, node.machineKey.Public())
require.NoError(t, err)
require.NotNil(t, resp)
require.True(t, resp.MachineAuthorized)
// Get the node ID
var registeredNode types.NodeView
require.EventuallyWithT(t, func(c *assert.CollectT) {
var found bool
registeredNode, found = app.state.GetNodeByNodeKey(node.nodeKey.Public())
assert.True(c, found, "Node should be found in NodeStore")
}, 1*time.Second, 100*time.Millisecond, "waiting for node to be available")
nodes[i].nodeID = registeredNode.ID()
t.Logf("Registered node %s with ID %d to user%d", node.hostname, registeredNode.ID().Uint64(), i/2+1)
}
// Verify initial state: user1 has 2 nodes, user2 has 2 nodes
user1Nodes := app.state.ListNodesByUser(types.UserID(user1.ID))
user2Nodes := app.state.ListNodesByUser(types.UserID(user2.ID))
require.Equal(t, 2, user1Nodes.Len(), "user1 should have 2 nodes initially")
require.Equal(t, 2, user2Nodes.Len(), "user2 should have 2 nodes initially")
t.Logf("Initial state verified: user1=%d nodes, user2=%d nodes", user1Nodes.Len(), user2Nodes.Len())
// Simulate logout for all nodes
for _, node := range nodes {
logoutReq := tailcfg.RegisterRequest{
Auth: nil, // nil Auth indicates logout
NodeKey: node.nodeKey.Public(),
}
resp, err := app.handleRegister(context.Background(), logoutReq, node.machineKey.Public())
require.NoError(t, err)
t.Logf("Logout response for %s: %+v", node.hostname, resp)
}
t.Logf("All nodes logged out")
// Create a new pre-auth key for user1 (reusable for all nodes)
newPak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil)
require.NoError(t, err)
// Re-login all nodes using user1's new pre-auth key
for i, node := range nodes {
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: newPak1.Key,
},
NodeKey: node.nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: node.hostname,
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, node.machineKey.Public())
require.NoError(t, err)
require.NotNil(t, resp)
require.True(t, resp.MachineAuthorized)
t.Logf("Re-registered node %s (originally user%d) with user1's pre-auth key", node.hostname, i/2+1)
}
// Verify final state after re-login
// EXPECTED: New nodes created for user1, old nodes remain with original users
user1NodesAfter := app.state.ListNodesByUser(types.UserID(user1.ID))
user2NodesAfter := app.state.ListNodesByUser(types.UserID(user2.ID))
t.Logf("Final state: user1=%d nodes, user2=%d nodes", user1NodesAfter.Len(), user2NodesAfter.Len())
// CORRECT BEHAVIOR: When re-authenticating with a DIFFERENT user's pre-auth key,
// new nodes should be created (not transferred). This matches:
// 1. The integration test expectation
// 2. The web flow behavior (creates new nodes)
// 3. The principle that each user owns distinct node entries
require.Equal(t, 4, user1NodesAfter.Len(), "user1 should have 4 nodes total (2 original + 2 new from user2's machines)")
require.Equal(t, 2, user2NodesAfter.Len(), "user2 should still have 2 nodes (old nodes from original registration)")
// Verify original nodes still exist with original users
for i := range 2 {
node := nodes[i]
// User1's original nodes should still be owned by user1
registeredNode, found := app.state.GetNodeByMachineKey(node.machineKey.Public(), types.UserID(user1.ID))
require.True(t, found, "User1's original node %s should still exist", node.hostname)
require.Equal(t, user1.ID, registeredNode.UserID().Get(), "Node %s should still belong to user1", node.hostname)
t.Logf("✓ User1's original node %s (ID=%d) still owned by user1", node.hostname, registeredNode.ID().Uint64())
}
for i := 2; i < 4; i++ {
node := nodes[i]
// User2's original nodes should still be owned by user2
registeredNode, found := app.state.GetNodeByMachineKey(node.machineKey.Public(), types.UserID(user2.ID))
require.True(t, found, "User2's original node %s should still exist", node.hostname)
require.Equal(t, user2.ID, registeredNode.UserID().Get(), "Node %s should still belong to user2", node.hostname)
t.Logf("✓ User2's original node %s (ID=%d) still owned by user2", node.hostname, registeredNode.ID().Uint64())
}
// Verify new nodes were created for user1 with the same machine keys
t.Logf("Verifying new nodes created for user1 from user2's machine keys...")
for i := 2; i < 4; i++ {
node := nodes[i]
// Should be able to find a node with user1 and this machine key (the new one)
newNode, found := app.state.GetNodeByMachineKey(node.machineKey.Public(), types.UserID(user1.ID))
require.True(t, found, "Should have created new node for user1 with machine key from %s", node.hostname)
require.Equal(t, user1.ID, newNode.UserID().Get(), "New node should belong to user1")
t.Logf("✓ New node created for user1 with machine key from %s (ID=%d)", node.hostname, newNode.ID().Uint64())
}
}
// TestWebFlowReauthDifferentUser validates CLI registration behavior when switching users.
// This test replicates the TestAuthWebFlowLogoutAndReloginNewUser integration test scenario.
//
// IMPORTANT: CLI registration creates NEW nodes (different from interactive flow which transfers).
//
// Scenario:
// 1. Node registers with user1 via pre-auth key
// 2. Node logs out (expires)
// 3. Admin runs: headscale auth register --auth-id --user user2
//
// Expected behavior:
// - User1's original node should STILL EXIST (expired)
// - User2 should get a NEW node created (NOT transfer)
// - Both nodes share the same machine key (same physical device).
func TestWebFlowReauthDifferentUser(t *testing.T) {
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
nodeKey2 := key.NewNode() // Node key rotates on re-auth
app := createTestApp(t)
// Step 1: Register node for user1 via pre-auth key (simulating initial web flow registration)
user1 := app.state.CreateUserForTest("user1")
pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil)
require.NoError(t, err)
regReq1 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak1.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-machine",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey.Public())
require.NoError(t, err)
require.True(t, resp1.MachineAuthorized, "Should be authorized via pre-auth key")
// Verify node exists for user1
user1Node, found := app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user1.ID))
require.True(t, found, "Node should exist for user1")
require.Equal(t, user1.ID, user1Node.UserID().Get(), "Node should belong to user1")
user1NodeID := user1Node.ID()
t.Logf("✓ User1 node created with ID: %d", user1NodeID)
// Step 2: Simulate logout by expiring the node
pastTime := time.Now().Add(-1 * time.Hour)
logoutReq := tailcfg.RegisterRequest{
NodeKey: nodeKey1.Public(),
Expiry: pastTime, // Expired = logout
}
_, err = app.handleRegister(context.Background(), logoutReq, machineKey.Public())
require.NoError(t, err)
// Verify node is expired
user1Node, found = app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user1.ID))
require.True(t, found, "Node should still exist after logout")
require.True(t, user1Node.IsExpired(), "Node should be expired after logout")
t.Logf("✓ User1 node expired (logged out)")
// Step 3: Start interactive re-authentication (simulates "tailscale up")
user2 := app.state.CreateUserForTest("user2")
reAuthReq := tailcfg.RegisterRequest{
// No Auth field - triggers interactive flow
NodeKey: nodeKey2.Public(), // New node key (rotated on re-auth)
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-machine",
},
Expiry: time.Now().Add(24 * time.Hour),
}
// Initial request should return AuthURL
initialResp, err := app.handleRegister(context.Background(), reAuthReq, machineKey.Public())
require.NoError(t, err)
require.NotEmpty(t, initialResp.AuthURL, "Should receive AuthURL for interactive flow")
t.Logf("✓ Interactive flow started, AuthURL: %s", initialResp.AuthURL)
// Extract registration ID from AuthURL
regID, err := extractRegistrationIDFromAuthURL(initialResp.AuthURL)
require.NoError(t, err, "Should extract registration ID from AuthURL")
require.NotEmpty(t, regID, "Should have valid registration ID")
// Step 4: Admin completes authentication via CLI
// This simulates: headscale auth register --auth-id <ID> --user user2
node, _, err := app.state.HandleNodeFromAuthPath(
regID,
types.UserID(user2.ID), // Register to user2, not user1!
nil, // No custom expiry
"cli", // Registration method (CLI register command)
)
require.NoError(t, err, "HandleNodeFromAuthPath should succeed")
t.Logf("✓ Admin registered node to user2 via CLI (node ID: %d)", node.ID())
t.Run("user1_original_node_still_exists", func(t *testing.T) {
// User1's original node should STILL exist (not transferred to user2)
user1NodeAfter, found1 := app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user1.ID))
assert.True(t, found1, "User1's original node should still exist (not transferred)")
if !found1 {
t.Fatal("User1's node was transferred or deleted - this breaks the integration test!")
}
assert.Equal(t, user1.ID, user1NodeAfter.UserID().Get(), "User1's node should still belong to user1")
assert.Equal(t, user1NodeID, user1NodeAfter.ID(), "Should be the same node (same ID)")
assert.True(t, user1NodeAfter.IsExpired(), "User1's node should still be expired")
t.Logf("✓ User1's original node still exists (ID: %d, expired: %v)", user1NodeAfter.ID(), user1NodeAfter.IsExpired())
})
t.Run("user2_has_new_node_created", func(t *testing.T) {
// User2 should have a NEW node created (not transfer from user1)
user2Node, found2 := app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user2.ID))
assert.True(t, found2, "User2 should have a new node created")
if !found2 {
t.Fatal("User2 doesn't have a node - registration failed!")
}
assert.Equal(t, user2.ID, user2Node.UserID().Get(), "User2's node should belong to user2")
assert.NotEqual(t, user1NodeID, user2Node.ID(), "Should be a NEW node (different ID), not transfer!")
assert.Equal(t, machineKey.Public(), user2Node.MachineKey(), "Should have same machine key")
assert.Equal(t, nodeKey2.Public(), user2Node.NodeKey(), "Should have new node key")
assert.False(t, user2Node.IsExpired(), "User2's node should NOT be expired (active)")
t.Logf("✓ User2's new node created (ID: %d, active)", user2Node.ID())
})
t.Run("returned_node_is_user2_new_node", func(t *testing.T) {
// The node returned from HandleNodeFromAuthPath should be user2's NEW node
assert.Equal(t, user2.ID, node.UserID().Get(), "Returned node should belong to user2")
assert.NotEqual(t, user1NodeID, node.ID(), "Returned node should be NEW, not transferred from user1")
t.Logf("✓ HandleNodeFromAuthPath returned user2's new node (ID: %d)", node.ID())
})
t.Run("both_nodes_share_machine_key", func(t *testing.T) {
// Both nodes should have the same machine key (same physical device)
user1NodeFinal, found1 := app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user1.ID))
user2NodeFinal, found2 := app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user2.ID))
require.True(t, found1, "User1 node should exist")
require.True(t, found2, "User2 node should exist")
assert.Equal(t, machineKey.Public(), user1NodeFinal.MachineKey(), "User1 node should have correct machine key")
assert.Equal(t, machineKey.Public(), user2NodeFinal.MachineKey(), "User2 node should have same machine key")
t.Logf("✓ Both nodes share machine key: %s", machineKey.Public().ShortString())
})
t.Run("total_node_count", func(t *testing.T) {
// We should have exactly 2 nodes total: one for user1 (expired), one for user2 (active)
allNodesSlice := app.state.ListNodes()
assert.Equal(t, 2, allNodesSlice.Len(), "Should have exactly 2 nodes total")
// Count nodes per user
user1Nodes := 0
user2Nodes := 0
for i := range allNodesSlice.Len() {
n := allNodesSlice.At(i)
if n.UserID().Get() == user1.ID {
user1Nodes++
}
if n.UserID().Get() == user2.ID {
user2Nodes++
}
}
assert.Equal(t, 1, user1Nodes, "User1 should have 1 node")
assert.Equal(t, 1, user2Nodes, "User2 should have 1 node")
t.Logf("✓ Total: 2 nodes (user1: 1 expired, user2: 1 active)")
})
}
// createTestApp builds a minimal Headscale instance for unit tests: a
// temporary SQLite database, DB-mode policy, and a running map batcher that
// is closed via t.Cleanup.
func createTestApp(t *testing.T) *Headscale {
t.Helper()
tmpDir := t.TempDir()
cfg := types.Config{
ServerURL: "http://localhost:8080",
NoisePrivateKeyPath: tmpDir + "/noise_private.key",
Database: types.DatabaseConfig{
Type: "sqlite3",
Sqlite: types.SqliteConfig{
Path: tmpDir + "/headscale_test.db",
},
},
OIDC: types.OIDCConfig{},
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
Tuning: types.Tuning{
BatchChangeDelay: 100 * time.Millisecond,
BatcherWorkers: 1,
},
}
app, err := NewHeadscale(&cfg)
require.NoError(t, err)
// Initialize and start the mapBatcher to handle Change() calls
app.mapBatcher = mapper.NewBatcherAndMapper(&cfg, app.state)
app.mapBatcher.Start()
// Clean up the batcher when the test finishes
t.Cleanup(func() {
if app.mapBatcher != nil {
app.mapBatcher.Close()
}
})
return app
}
// TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey tests the scenario reported in
// https://github.com/juanfont/headscale/issues/2830
//
// Scenario:
// 1. Node registers successfully with a single-use pre-auth key
// 2. Node is running fine
// 3. Node restarts (e.g., after headscale upgrade or tailscale container restart)
// 4. Node sends RegisterRequest with the same pre-auth key
// 5. BUG: Headscale rejects the request with "authkey expired" or "authkey already used"
//
// Expected behavior:
// When an existing node (identified by matching NodeKey + MachineKey) re-registers
// with a pre-auth key that it previously used, the registration should succeed.
// The node is not creating a new registration - it's re-authenticating the same device.
func TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create user and single-use pre-auth key
user := app.state.CreateUserForTest("test-user")
pakNew, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) // reusable=false
require.NoError(t, err)
// Fetch the full pre-auth key to check Reusable field
pak, err := app.state.GetPreAuthKey(pakNew.Key)
require.NoError(t, err)
require.False(t, pak.Reusable, "key should be single-use for this test")
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// STEP 1: Initial registration with pre-auth key (simulates fresh node joining)
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pakNew.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
t.Log("Step 1: Initial registration with pre-auth key")
initialResp, err := app.handleRegister(context.Background(), initialReq, machineKey.Public())
require.NoError(t, err, "initial registration should succeed")
require.NotNil(t, initialResp)
assert.True(t, initialResp.MachineAuthorized, "node should be authorized")
assert.False(t, initialResp.NodeKeyExpired, "node key should not be expired")
// Verify node was created in database
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found, "node should exist after initial registration")
assert.Equal(t, "test-node", node.Hostname())
assert.Equal(t, nodeKey.Public(), node.NodeKey())
assert.Equal(t, machineKey.Public(), node.MachineKey())
// Verify pre-auth key is now marked as used
usedPak, err := app.state.GetPreAuthKey(pakNew.Key)
require.NoError(t, err)
assert.True(t, usedPak.Used, "pre-auth key should be marked as used after initial registration")
// STEP 2: Simulate node restart - node sends RegisterRequest again with same pre-auth key
// This happens when:
// - Tailscale container restarts
// - Tailscaled service restarts
// - System reboots
// The Tailscale client persists the pre-auth key in its state and sends it on every registration
t.Log("Step 2: Node restart - re-registration with same (now used) pre-auth key")
restartReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pakNew.Key, // Same key, now marked as Used=true
},
NodeKey: nodeKey.Public(), // Same node key
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
// BUG: This fails with "authkey already used" or "authkey expired"
// EXPECTED: Should succeed because it's the same node re-registering
restartResp, err := app.handleRegister(context.Background(), restartReq, machineKey.Public())
// This is the assertion that currently FAILS in v0.27.0
assert.NoError(t, err, "BUG: existing node re-registration with its own used pre-auth key should succeed") //nolint:testifylint // intentionally uses assert to show bug
if err != nil {
t.Logf("Error received (this is the bug): %v", err)
t.Logf("Expected behavior: Node should be able to re-register with the same pre-auth key it used initially")
return // Stop here to show the bug clearly
}
require.NotNil(t, restartResp)
assert.True(t, restartResp.MachineAuthorized, "node should remain authorized after restart")
assert.False(t, restartResp.NodeKeyExpired, "node key should not be expired after restart")
// Verify it's the same node (not a duplicate)
nodeAfterRestart, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found, "node should still exist after restart")
assert.Equal(t, node.ID(), nodeAfterRestart.ID(), "should be the same node, not a new one")
assert.Equal(t, "test-node", nodeAfterRestart.Hostname())
}
// TestNodeReregistrationWithReusablePreAuthKey tests that reusable keys work correctly
// for node re-registration.
func TestNodeReregistrationWithReusablePreAuthKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("test-user")
pakNew, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) // reusable=true
require.NoError(t, err)
// Fetch the full pre-auth key to check Reusable field
pak, err := app.state.GetPreAuthKey(pakNew.Key)
require.NoError(t, err)
require.True(t, pak.Reusable)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Initial registration
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pakNew.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reusable-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
initialResp, err := app.handleRegister(context.Background(), initialReq, machineKey.Public())
require.NoError(t, err)
require.NotNil(t, initialResp)
assert.True(t, initialResp.MachineAuthorized)
// Node restart - re-registration with reusable key
restartReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pakNew.Key, // Reusable key
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reusable-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
restartResp, err := app.handleRegister(context.Background(), restartReq, machineKey.Public())
require.NoError(t, err, "reusable key should allow re-registration")
require.NotNil(t, restartResp)
assert.True(t, restartResp.MachineAuthorized)
assert.False(t, restartResp.NodeKeyExpired)
}
// TestNodeReregistrationWithExpiredPreAuthKey tests that truly expired keys
// are still rejected even for existing nodes.
func TestNodeReregistrationWithExpiredPreAuthKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("test-user")
expiry := time.Now().Add(-1 * time.Hour) // Already expired
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, &expiry, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Try to register with expired key
req := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "expired-key-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegister(context.Background(), req, machineKey.Public())
require.Error(t, err, "expired pre-auth key should be rejected")
assert.Contains(t, err.Error(), "authkey expired", "error should mention key expiration")
}
// TestIssue2830_ExistingNodeReregistersWithExpiredKey tests the fix for issue #2830.
// When a node is already registered and the pre-auth key expires, the node should
// still be able to re-register (e.g., after a container restart) using the same
// expired key. The key was only needed for initial authentication.
func TestIssue2830_ExistingNodeReregistersWithExpiredKey(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("test-user")
// Create a valid key (will expire it later)
expiry := time.Now().Add(1 * time.Hour)
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, &expiry, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Register the node initially (key is still valid)
req := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue2830-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegister(context.Background(), req, machineKey.Public())
require.NoError(t, err, "initial registration should succeed")
require.NotNil(t, resp)
require.True(t, resp.MachineAuthorized, "node should be authorized after initial registration")
// Verify node was created
allNodes := app.state.ListNodes()
require.Equal(t, 1, allNodes.Len())
initialNodeID := allNodes.At(0).ID()
// Now expire the key by updating it in the database to have an expiry in the past.
// This simulates the real-world scenario where a key expires after initial registration.
pastExpiry := time.Now().Add(-1 * time.Hour)
err = app.state.DB().DB.Model(&types.PreAuthKey{}).
Where("id = ?", pak.ID).
Update("expiration", pastExpiry).Error
require.NoError(t, err, "should be able to update key expiration")
// Reload the key to verify it's now expired
expiredPak, err := app.state.GetPreAuthKey(pak.Key)
require.NoError(t, err)
require.NotNil(t, expiredPak.Expiration)
require.True(t, expiredPak.Expiration.Before(time.Now()), "key should be expired")
// Verify the expired key would fail validation
err = expiredPak.Validate()
require.Error(t, err, "key should fail validation when expired")
require.Contains(t, err.Error(), "authkey expired")
// Attempt to re-register with the SAME key (now expired).
// This should SUCCEED because:
// - The node already exists with the same MachineKey and User
// - The fix allows existing nodes to re-register even with expired keys
// - The key was only needed for initial authentication
req2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key, // Same key as initial registration (now expired)
},
NodeKey: nodeKey.Public(), // Same NodeKey as initial registration
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue2830-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp2, err := app.handleRegister(context.Background(), req2, machineKey.Public())
require.NoError(t, err, "re-registration should succeed even with expired key for existing node")
assert.NotNil(t, resp2)
assert.True(t, resp2.MachineAuthorized, "node should remain authorized after re-registration")
// Verify we still have only one node (re-registered, not created new)
allNodes = app.state.ListNodes()
require.Equal(t, 1, allNodes.Len(), "should have exactly one node (re-registered)")
assert.Equal(t, initialNodeID, allNodes.At(0).ID(), "node ID should not change on re-registration")
}
// TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey tests that an existing node
// can re-register using a pre-auth key that's already marked as Used=true, as long as:
// 1. The node is re-registering with the same MachineKey it originally used
// 2. The node is using the same pre-auth key it was originally registered with (AuthKeyID matches)
//
// This is the fix for GitHub issue #2830: https://github.com/juanfont/headscale/issues/2830
//
// Background: When Docker/Kubernetes containers restart, they keep their persistent state
// (including the MachineKey), but container entrypoints unconditionally run:
//
// tailscale up --authkey=$TS_AUTHKEY
//
// This caused nodes to be rejected after restart because the pre-auth key was already
// marked as Used=true from the initial registration. The fix allows re-registration of
// existing nodes with their own used keys.
func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing.T) {
app := createTestApp(t)
// Create a user
user := app.state.CreateUserForTest("testuser")
// Create a SINGLE-USE pre-auth key (reusable=false)
// This is the type of key that triggers the bug in issue #2830
preAuthKeyNew, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
// Fetch the full pre-auth key to check Reusable and Used fields
preAuthKey, err := app.state.GetPreAuthKey(preAuthKeyNew.Key)
require.NoError(t, err)
require.False(t, preAuthKey.Reusable, "Pre-auth key must be single-use to test issue #2830")
require.False(t, preAuthKey.Used, "Pre-auth key should not be used yet")
// Generate node keys for the client
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Step 1: Initial registration with the pre-auth key
// This simulates the first time the container starts and runs 'tailscale up --authkey=...'
initialReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: preAuthKeyNew.Key, // Use the full key from creation
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue-2830-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
initialResp, err := app.handleRegisterWithAuthKey(initialReq, machineKey.Public())
require.NoError(t, err, "Initial registration should succeed")
require.True(t, initialResp.MachineAuthorized, "Node should be authorized after initial registration")
require.NotNil(t, initialResp.User, "User should be set in response")
require.Equal(t, "testuser", initialResp.User.DisplayName, "User should match the pre-auth key's user")
// Verify the pre-auth key is now marked as Used
updatedKey, err := app.state.GetPreAuthKey(preAuthKeyNew.Key)
require.NoError(t, err)
require.True(t, updatedKey.Used, "Pre-auth key should be marked as Used after initial registration")
// Step 2: Container restart scenario
// The container keeps its MachineKey (persistent state), but the entrypoint script
// unconditionally runs 'tailscale up --authkey=$TS_AUTHKEY' again
//
// WITHOUT THE FIX: This would fail with "authkey already used" error
// WITH THE FIX: This succeeds because it's the same node re-registering with its own key
// Simulate sending the same RegisterRequest again (same MachineKey, same AuthKey)
// This is exactly what happens when a container restarts
reregisterReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: preAuthKeyNew.Key, // Same key, now marked as Used=true
},
NodeKey: nodeKey.Public(), // Same NodeKey
Hostinfo: &tailcfg.Hostinfo{
Hostname: "issue-2830-test-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
reregisterResp, err := app.handleRegisterWithAuthKey(reregisterReq, machineKey.Public()) // Same MachineKey
require.NoError(t, err, "Re-registration with same MachineKey and used pre-auth key should succeed (fixes #2830)")
require.True(t, reregisterResp.MachineAuthorized, "Node should remain authorized after re-registration")
require.NotNil(t, reregisterResp.User, "User should be set in re-registration response")
require.Equal(t, "testuser", reregisterResp.User.DisplayName, "User should remain the same")
// Verify that only ONE node was created (not a duplicate)
nodes := app.state.ListNodesByUser(types.UserID(user.ID))
require.Equal(t, 1, nodes.Len(), "Should have exactly one node (no duplicates created)")
require.Equal(t, "issue-2830-test-node", nodes.At(0).Hostname(), "Node hostname should match")
// Step 3: Verify that a DIFFERENT machine cannot use the same used key
// This ensures we didn't break the security model - only the original node can re-register
differentMachineKey := key.NewMachine()
differentNodeKey := key.NewNode()
attackReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: preAuthKeyNew.Key, // Try to use the same key
},
NodeKey: differentNodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "attacker-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
_, err = app.handleRegisterWithAuthKey(attackReq, differentMachineKey.Public())
require.Error(t, err, "Different machine should NOT be able to use the same used pre-auth key")
require.Contains(t, err.Error(), "already used", "Error should indicate key is already used")
// Verify still only one node (the original one)
nodesAfterAttack := app.state.ListNodesByUser(types.UserID(user.ID))
require.Equal(t, 1, nodesAfterAttack.Len(), "Should still have exactly one node (attack prevented)")
}
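// canReuseUsedPreAuthKey is a minimal illustrative sketch of the rule the two
// tests above exercise; it is a hypothetical helper, not the actual
// handleRegister implementation. A single-use pre-auth key that is already
// marked Used may only be accepted again when the request comes from the same
// machine and references the same key the node was originally registered with.
func canReuseUsedPreAuthKey(sameMachineKey bool, nodeAuthKeyID, requestKeyID uint64) bool {
	// Any other machine presenting the used key must still be rejected,
	// which is what the "attacker-node" step above verifies.
	return sameMachineKey && nodeAuthKeyID == requestKeyID
}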
// TestWebAuthRejectsUnauthorizedRequestTags tests that web auth registrations
// validate RequestTags against policy and reject unauthorized tags.
func TestWebAuthRejectsUnauthorizedRequestTags(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create a user that will authenticate via web auth
user := app.state.CreateUserForTest("webauth-tags-user")
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Simulate a registration cache entry (as would be created during web auth)
registrationID := types.MustAuthID()
regEntry := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey.Public(),
Hostname: "webauth-tags-node",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "webauth-tags-node",
RequestTags: []string{"tag:unauthorized"}, // This tag is not in policy
},
})
app.state.SetAuthCacheEntry(registrationID, regEntry)
// Complete the web auth - should fail because tag is unauthorized
_, _, err := app.state.HandleNodeFromAuthPath(
registrationID,
types.UserID(user.ID),
nil, // no expiry
"webauth",
)
// Expect error due to unauthorized tags
require.Error(t, err, "HandleNodeFromAuthPath should reject unauthorized RequestTags")
require.Contains(t, err.Error(), "requested tags",
"Error should indicate requested tags are invalid or not permitted")
require.Contains(t, err.Error(), "tag:unauthorized",
"Error should mention the rejected tag")
// Verify no node was created
_, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.False(t, found, "Node should not be created when tags are unauthorized")
}
// TestWebAuthReauthWithEmptyTagsRemovesAllTags tests that when an existing tagged node
// reauths with empty RequestTags, all tags are removed and ownership returns to user.
// This is the fix for issue #2979.
func TestWebAuthReauthWithEmptyTagsRemovesAllTags(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create a user
user := app.state.CreateUserForTest("reauth-untag-user")
// Update policy manager to recognize the new user
// This is necessary because CreateUserForTest doesn't update the policy manager
err := app.state.UpdatePolicyManagerUsersForTest()
require.NoError(t, err, "Failed to update policy manager users")
// Set up policy that allows the user to own these tags
policy := `{
"tagOwners": {
"tag:valid-owned": ["reauth-untag-user@"],
"tag:second": ["reauth-untag-user@"]
},
"acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
}`
_, err = app.state.SetPolicy([]byte(policy))
require.NoError(t, err, "Failed to set policy")
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 1: Initial registration with tags
registrationID1 := types.MustAuthID()
regEntry1 := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey1.Public(),
Hostname: "reauth-untag-node",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-untag-node",
RequestTags: []string{"tag:valid-owned", "tag:second"},
},
})
app.state.SetAuthCacheEntry(registrationID1, regEntry1)
// Complete initial registration with tags
node, _, err := app.state.HandleNodeFromAuthPath(
registrationID1,
types.UserID(user.ID),
nil,
"webauth",
)
require.NoError(t, err, "Initial registration should succeed")
require.True(t, node.IsTagged(), "Node should be tagged after initial registration")
require.ElementsMatch(t, []string{"tag:valid-owned", "tag:second"}, node.Tags().AsSlice())
t.Logf("Initial registration complete - Node ID: %d, Tags: %v, IsTagged: %t",
node.ID().Uint64(), node.Tags().AsSlice(), node.IsTagged())
// Step 2: Reauth with EMPTY tags to untag
nodeKey2 := key.NewNode() // New node key for reauth
registrationID2 := types.MustAuthID()
regEntry2 := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(), // Same machine key
NodeKey: nodeKey2.Public(), // Different node key (rotation)
Hostname: "reauth-untag-node",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reauth-untag-node",
RequestTags: []string{}, // EMPTY - should untag
},
})
app.state.SetAuthCacheEntry(registrationID2, regEntry2)
// Complete reauth with empty tags
nodeAfterReauth, _, err := app.state.HandleNodeFromAuthPath(
registrationID2,
types.UserID(user.ID),
nil,
"webauth",
)
require.NoError(t, err, "Reauth should succeed")
// Verify tags were removed
require.False(t, nodeAfterReauth.IsTagged(), "Node should NOT be tagged after reauth with empty tags")
require.Empty(t, nodeAfterReauth.Tags().AsSlice(), "Node should have no tags")
// Verify ownership returned to user
require.True(t, nodeAfterReauth.UserID().Valid(), "Node should have a user ID")
require.Equal(t, user.ID, nodeAfterReauth.UserID().Get(), "Node should be owned by the user again")
// Verify it's the same node (not a new one)
require.Equal(t, node.ID(), nodeAfterReauth.ID(), "Should be the same node after reauth")
t.Logf("Reauth complete - Node ID: %d, Tags: %v, IsTagged: %t, UserID: %d",
nodeAfterReauth.ID().Uint64(), nodeAfterReauth.Tags().AsSlice(),
nodeAfterReauth.IsTagged(), nodeAfterReauth.UserID().Get())
}
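// reauthClearsTags is a minimal illustrative sketch of the behaviour verified
// above; it is a hypothetical helper, not the actual HandleNodeFromAuthPath
// logic. A web-auth reauth that arrives with an empty RequestTags list removes
// all tags and returns ownership to the authenticating user (issue #2979).
func reauthClearsTags(requestTags []string) bool {
	// An empty tag list on reauth means "stop being a tagged node".
	return len(requestTags) == 0
}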
// TestAuthKeyTaggedToUserOwnedViaReauth tests that a node originally registered
// with a tagged pre-auth key can transition to user-owned by re-authenticating
// via web auth with empty RequestTags. This ensures authkey-tagged nodes are
// not permanently locked to being tagged.
func TestAuthKeyTaggedToUserOwnedViaReauth(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create a user
user := app.state.CreateUserForTest("authkey-to-user")
// Create a tagged pre-auth key
authKeyTags := []string{"tag:server", "tag:prod"}
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, authKeyTags)
require.NoError(t, err, "Failed to create tagged pre-auth key")
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 1: Initial registration with tagged pre-auth key
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "authkey-tagged-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err, "Initial registration should succeed")
require.True(t, resp.MachineAuthorized, "Node should be authorized")
// Verify initial state: node is tagged via authkey
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found, "Node should be found")
require.True(t, node.IsTagged(), "Node should be tagged after authkey registration")
require.ElementsMatch(t, authKeyTags, node.Tags().AsSlice(), "Node should have authkey tags")
require.NotNil(t, node.AuthKey(), "Node should have AuthKey reference")
require.Positive(t, node.AuthKey().Tags().Len(), "AuthKey should have tags")
t.Logf("Initial registration complete - Node ID: %d, Tags: %v, IsTagged: %t, AuthKey.Tags.Len: %d",
node.ID().Uint64(), node.Tags().AsSlice(), node.IsTagged(), node.AuthKey().Tags().Len())
// Step 2: Reauth via web auth with EMPTY tags to transition to user-owned
nodeKey2 := key.NewNode() // New node key for reauth
registrationID := types.MustAuthID()
regEntry := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(), // Same machine key
NodeKey: nodeKey2.Public(), // Different node key (rotation)
Hostname: "authkey-tagged-node",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "authkey-tagged-node",
RequestTags: []string{}, // EMPTY - should untag
},
})
app.state.SetAuthCacheEntry(registrationID, regEntry)
// Complete reauth with empty tags
nodeAfterReauth, _, err := app.state.HandleNodeFromAuthPath(
registrationID,
types.UserID(user.ID),
nil,
"webauth",
)
require.NoError(t, err, "Reauth should succeed")
// Verify tags were removed (authkey-tagged → user-owned transition)
require.False(t, nodeAfterReauth.IsTagged(), "Node should NOT be tagged after reauth with empty tags")
require.Empty(t, nodeAfterReauth.Tags().AsSlice(), "Node should have no tags")
// Verify ownership returned to user
require.True(t, nodeAfterReauth.UserID().Valid(), "Node should have a user ID")
require.Equal(t, user.ID, nodeAfterReauth.UserID().Get(), "Node should be owned by the user")
// Verify it's the same node (not a new one)
require.Equal(t, node.ID(), nodeAfterReauth.ID(), "Should be the same node after reauth")
// AuthKey reference should still exist (for audit purposes)
require.NotNil(t, nodeAfterReauth.AuthKey(), "AuthKey reference should be preserved")
t.Logf("Reauth complete - Node ID: %d, Tags: %v, IsTagged: %t, UserID: %d",
nodeAfterReauth.ID().Uint64(), nodeAfterReauth.Tags().AsSlice(),
nodeAfterReauth.IsTagged(), nodeAfterReauth.UserID().Get())
}
// TestDeletedPreAuthKeyNotRecreatedOnNodeUpdate tests that when a PreAuthKey is deleted,
// subsequent node updates (like those triggered by MapRequests) do not recreate the key.
//
// This reproduces the bug where:
// 1. Create a tagged preauthkey and register a node
// 2. Delete the preauthkey (confirmed gone from pre_auth_keys DB table)
// 3. Node sends MapRequest (e.g., after tailscaled restart)
// 4. BUG: The preauthkey reappears because GORM's Updates() upserts the stale AuthKey
// data that still exists in the NodeStore's in-memory cache.
//
// The fix is to use Omit("AuthKey") on all node Updates() calls to prevent GORM
// from touching the AuthKey association; a minimal sketch follows this test.
func TestDeletedPreAuthKeyNotRecreatedOnNodeUpdate(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create user and tagged pre-auth key
user := app.state.CreateUserForTest("test-user")
pakNew, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:test"})
require.NoError(t, err)
pakID := pakNew.ID
t.Logf("Created PreAuthKey ID: %d", pakID)
// Register a node with the pre-auth key
machineKey := key.NewMachine()
nodeKey := key.NewNode()
registerReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pakNew.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
},
}
resp, err := app.handleRegister(context.Background(), registerReq, machineKey.Public())
require.NoError(t, err, "registration should succeed")
require.True(t, resp.MachineAuthorized, "node should be authorized")
// Verify node exists and has AuthKey reference
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found, "node should exist")
require.True(t, node.AuthKeyID().Valid(), "node should have AuthKeyID set")
require.Equal(t, pakID, node.AuthKeyID().Get(), "node should reference the correct PreAuthKey")
t.Logf("Node ID: %d, AuthKeyID: %d", node.ID().Uint64(), node.AuthKeyID().Get())
// Verify the PreAuthKey exists in the database
var pakCount int64
err = app.state.DB().DB.Model(&types.PreAuthKey{}).Where("id = ?", pakID).Count(&pakCount).Error
require.NoError(t, err)
require.Equal(t, int64(1), pakCount, "PreAuthKey should exist in database")
// Delete the PreAuthKey
t.Log("Deleting PreAuthKey...")
err = app.state.DeletePreAuthKey(pakID)
require.NoError(t, err, "deleting PreAuthKey should succeed")
// Verify the PreAuthKey is gone from the database
err = app.state.DB().DB.Model(&types.PreAuthKey{}).Where("id = ?", pakID).Count(&pakCount).Error
require.NoError(t, err)
require.Equal(t, int64(0), pakCount, "PreAuthKey should be deleted from database")
t.Log("PreAuthKey deleted from database")
// Verify the node's auth_key_id is now NULL in the database
var dbNode types.Node
err = app.state.DB().DB.First(&dbNode, node.ID().Uint64()).Error
require.NoError(t, err)
require.Nil(t, dbNode.AuthKeyID, "node's AuthKeyID should be NULL after PreAuthKey deletion")
t.Log("Node's AuthKeyID is NULL in database")
// The NodeStore may still have stale AuthKey data in memory.
// Now simulate what happens when the node sends a MapRequest after a tailscaled restart.
// This triggers persistNodeToDB which calls GORM's Updates().
// Simulate a MapRequest by updating the node through the state layer
// This mimics what poll.go does when processing MapRequests
mapReq := tailcfg.MapRequest{
NodeKey: nodeKey.Public(),
DiscoKey: node.DiscoKey(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "test-node",
GoVersion: "go1.21", // Some change to trigger an update
},
}
// Process the MapRequest-like update
// This calls UpdateNodeFromMapRequest which eventually calls persistNodeToDB
_, err = app.state.UpdateNodeFromMapRequest(node.ID(), mapReq)
require.NoError(t, err, "UpdateNodeFromMapRequest should succeed")
t.Log("Simulated MapRequest update completed")
// THE CRITICAL CHECK: Verify the PreAuthKey was NOT recreated
err = app.state.DB().DB.Model(&types.PreAuthKey{}).Where("id = ?", pakID).Count(&pakCount).Error
require.NoError(t, err)
require.Equal(t, int64(0), pakCount,
"BUG: PreAuthKey was recreated! The deleted PreAuthKey should NOT reappear after node update")
t.Log("SUCCESS: PreAuthKey remained deleted after node update")
}
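// The fix described above, as a minimal sketch (paraphrased and hypothetical;
// not the actual persistNodeToDB body, and the variable names are
// illustrative):
//
//	// Omit("AuthKey") keeps GORM's Updates() from upserting the association
//	// out of the NodeStore's stale in-memory copy, so a deleted pre-auth
//	// key cannot reappear when the node record is persisted.
//	err := tx.Omit("AuthKey").Updates(&node).Error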
// TestTaggedNodeWithoutUserToDifferentUser tests that a node registered with a
// tags-only PreAuthKey (no user) can be re-registered to a different user
// without panicking. This reproduces the issue reported in #3038.
func TestTaggedNodeWithoutUserToDifferentUser(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Step 1: Create a tags-only PreAuthKey (no user, only tags)
// This is valid for tagged nodes where ownership is defined by tags, not users
tags := []string{"tag:server", "tag:prod"}
pak, err := app.state.CreatePreAuthKey(nil, true, false, nil, tags)
require.NoError(t, err, "Failed to create tags-only pre-auth key")
require.Nil(t, pak.User, "Tags-only PAK should have nil User")
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 2: Register node with tags-only PreAuthKey
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-orphan-node",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err, "Initial registration should succeed")
require.True(t, resp.MachineAuthorized, "Node should be authorized")
// Verify initial state: node is tagged with no UserID
node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found, "Node should be found")
require.True(t, node.IsTagged(), "Node should be tagged")
require.ElementsMatch(t, tags, node.Tags().AsSlice(), "Node should have tags from PAK")
require.False(t, node.UserID().Valid(), "Node should NOT have a UserID (tags-only PAK)")
require.False(t, node.User().Valid(), "Node should NOT have a User (tags-only PAK)")
t.Logf("Initial registration complete - Node ID: %d, Tags: %v, IsTagged: %t, UserID valid: %t",
node.ID().Uint64(), node.Tags().AsSlice(), node.IsTagged(), node.UserID().Valid())
// Step 3: Create a new user (alice) to re-register the node to
alice := app.state.CreateUserForTest("alice")
require.NotNil(t, alice, "Alice user should be created")
// Step 4: Re-register the node to alice via HandleNodeFromAuthPath
// This is what happens when running: headscale auth register --auth-id --user alice
nodeKey2 := key.NewNode()
registrationID := types.MustAuthID()
regEntry := types.NewRegisterAuthRequest(types.Node{
MachineKey: machineKey.Public(), // Same machine key as the tagged node
NodeKey: nodeKey2.Public(),
Hostname: "tagged-orphan-node",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-orphan-node",
RequestTags: []string{}, // Empty - transition to user-owned
},
})
app.state.SetAuthCacheEntry(registrationID, regEntry)
// This should NOT panic - before the fix, this would panic with:
// panic: runtime error: invalid memory address or nil pointer dereference
// at UserView.Name() because the existing node has no User
nodeAfterReauth, _, err := app.state.HandleNodeFromAuthPath(
registrationID,
types.UserID(alice.ID),
nil,
"cli",
)
require.NoError(t, err, "Re-registration to alice should succeed without panic")
// Verify the existing tagged node was converted to be owned by alice (same node ID)
require.True(t, nodeAfterReauth.Valid(), "Node should be valid")
require.True(t, nodeAfterReauth.UserID().Valid(), "Node should have a UserID")
require.Equal(t, alice.ID, nodeAfterReauth.UserID().Get(), "Node should be owned by alice")
require.Equal(t, node.ID(), nodeAfterReauth.ID(), "Should be the same node (converted, not new)")
require.False(t, nodeAfterReauth.IsTagged(), "Node should no longer be tagged")
require.Empty(t, nodeAfterReauth.Tags().AsSlice(), "Node should have no tags")
// Verify Owner() works without panicking - this is what the mapper's
// generateUserProfiles calls, and it would panic with a nil pointer
// dereference if node.User was not set during the tag→user conversion.
owner := nodeAfterReauth.Owner()
require.True(t, owner.Valid(), "Owner should be valid after conversion (mapper would panic if nil)")
require.Equal(t, alice.ID, owner.Model().ID, "Owner should be alice")
t.Logf("Re-registration complete - Node ID: %d, Tags: %v, IsTagged: %t, UserID: %d",
nodeAfterReauth.ID().Uint64(), nodeAfterReauth.Tags().AsSlice(),
nodeAfterReauth.IsTagged(), nodeAfterReauth.UserID().Get())
}
================================================
FILE: hscontrol/capver/capver.go
================================================
package capver
//go:generate go run ../../tools/capver/main.go
import (
"slices"
"sort"
"strings"
xmaps "golang.org/x/exp/maps"
"tailscale.com/tailcfg"
"tailscale.com/util/set"
)
const (
// minVersionParts is the minimum number of version parts needed for major.minor.
minVersionParts = 2
// legacyDERPCapVer is the capability version when LegacyDERP can be cleaned up.
legacyDERPCapVer = 111
)
// CanOldCodeBeCleanedUp is intended to be called on startup to see if
// there is old code that can be cleaned up; entries should contain
// a CapVer at which something can be cleaned up, and panic if it can.
// This is only intended to catch things in tests.
//
// All uses of Capability version checks should be listed here.
func CanOldCodeBeCleanedUp() {
if MinSupportedCapabilityVersion >= legacyDERPCapVer {
panic("LegacyDERP can be cleaned up in tail.go")
}
}
func tailscaleVersSorted() []string {
vers := xmaps.Keys(tailscaleToCapVer)
sort.Strings(vers)
return vers
}
func capVersSorted() []tailcfg.CapabilityVersion {
capVers := xmaps.Keys(capVerToTailscaleVer)
slices.Sort(capVers)
return capVers
}
// TailscaleVersion returns the Tailscale version for the given CapabilityVersion.
func TailscaleVersion(ver tailcfg.CapabilityVersion) string {
return capVerToTailscaleVer[ver]
}
// CapabilityVersion returns the CapabilityVersion for the given Tailscale version.
// It accepts both full versions (v1.90.1) and minor versions (v1.90).
func CapabilityVersion(ver string) tailcfg.CapabilityVersion {
if !strings.HasPrefix(ver, "v") {
ver = "v" + ver
}
// Try direct lookup first (works for minor versions like v1.90)
if cv, ok := tailscaleToCapVer[ver]; ok {
return cv
}
// Try extracting minor version from full version (v1.90.1 -> v1.90)
parts := strings.Split(strings.TrimPrefix(ver, "v"), ".")
if len(parts) >= minVersionParts {
minor := "v" + parts[0] + "." + parts[1]
return tailscaleToCapVer[minor]
}
return 0
}
// TailscaleLatest returns the n latest Tailscale versions.
func TailscaleLatest(n int) []string {
if n <= 0 {
return nil
}
tsSorted := tailscaleVersSorted()
if n > len(tsSorted) {
return tsSorted
}
return tsSorted[len(tsSorted)-n:]
}
// TailscaleLatestMajorMinor returns the n latest Tailscale major.minor versions (e.g. 1.80).
func TailscaleLatestMajorMinor(n int, stripV bool) []string {
if n <= 0 {
return nil
}
majors := set.Set[string]{}
for _, vers := range tailscaleVersSorted() {
if stripV {
vers = strings.TrimPrefix(vers, "v")
}
v := strings.Split(vers, ".")
majors.Add(v[0] + "." + v[1])
}
majorSl := majors.Slice()
sort.Strings(majorSl)
if n > len(majorSl) {
return majorSl
}
return majorSl[len(majorSl)-n:]
}
// CapVerLatest returns the n latest CapabilityVersions.
func CapVerLatest(n int) []tailcfg.CapabilityVersion {
if n <= 0 {
return nil
}
s := capVersSorted()
if n > len(s) {
return s
}
return s[len(s)-n:]
}
================================================
FILE: hscontrol/capver/capver_generated.go
================================================
package capver
// Generated DO NOT EDIT
import "tailscale.com/tailcfg"
var tailscaleToCapVer = map[string]tailcfg.CapabilityVersion{
"v1.24": 32,
"v1.26": 32,
"v1.28": 32,
"v1.30": 41,
"v1.32": 46,
"v1.34": 51,
"v1.36": 56,
"v1.38": 58,
"v1.40": 61,
"v1.42": 62,
"v1.44": 63,
"v1.46": 65,
"v1.48": 68,
"v1.50": 74,
"v1.52": 79,
"v1.54": 79,
"v1.56": 82,
"v1.58": 85,
"v1.60": 87,
"v1.62": 88,
"v1.64": 90,
"v1.66": 95,
"v1.68": 97,
"v1.70": 102,
"v1.72": 104,
"v1.74": 106,
"v1.76": 106,
"v1.78": 109,
"v1.80": 113,
"v1.82": 115,
"v1.84": 116,
"v1.86": 123,
"v1.88": 125,
"v1.90": 130,
"v1.92": 131,
"v1.94": 131,
}
var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{
32: "v1.24",
41: "v1.30",
46: "v1.32",
51: "v1.34",
56: "v1.36",
58: "v1.38",
61: "v1.40",
62: "v1.42",
63: "v1.44",
65: "v1.46",
68: "v1.48",
74: "v1.50",
79: "v1.52",
82: "v1.56",
85: "v1.58",
87: "v1.60",
88: "v1.62",
90: "v1.64",
95: "v1.66",
97: "v1.68",
102: "v1.70",
104: "v1.72",
106: "v1.74",
109: "v1.78",
113: "v1.80",
115: "v1.82",
116: "v1.84",
123: "v1.86",
125: "v1.88",
130: "v1.90",
131: "v1.92",
}
// SupportedMajorMinorVersions is the number of major.minor Tailscale versions supported.
const SupportedMajorMinorVersions = 10
// MinSupportedCapabilityVersion represents the minimum capability version
// supported by this Headscale instance (latest 10 minor versions)
const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 106
================================================
FILE: hscontrol/capver/capver_test.go
================================================
package capver
import (
"testing"
"github.com/google/go-cmp/cmp"
)
func TestTailscaleLatestMajorMinor(t *testing.T) {
for _, test := range tailscaleLatestMajorMinorTests {
t.Run("", func(t *testing.T) {
output := TailscaleLatestMajorMinor(test.n, test.stripV)
			if diff := cmp.Diff(test.expected, output); diff != "" {
t.Errorf("TailscaleLatestMajorMinor(%d, %v) mismatch (-want +got):\n%s", test.n, test.stripV, diff)
}
})
}
}
func TestCapVerMinimumTailscaleVersion(t *testing.T) {
for _, test := range capVerMinimumTailscaleVersionTests {
t.Run("", func(t *testing.T) {
output := TailscaleVersion(test.input)
if output != test.expected {
t.Errorf("CapVerFromTailscaleVersion(%d) = %s; want %s", test.input, output, test.expected)
}
})
}
}
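// TestCapabilityVersionLookup is an illustrative addition (not part of the
// original suite): it exercises the documented behaviour of CapabilityVersion,
// which accepts both minor (v1.90) and full (v1.90.1) version strings and
// returns 0 for unknown versions. Expected values come from the generated
// table in capver_generated.go.
func TestCapabilityVersionLookup(t *testing.T) {
	if got := CapabilityVersion("v1.90"); got != 130 {
		t.Errorf("CapabilityVersion(v1.90) = %d; want 130", got)
	}
	if got := CapabilityVersion("1.90.1"); got != 130 {
		t.Errorf("CapabilityVersion(1.90.1) = %d; want 130", got)
	}
	if got := CapabilityVersion("bogus"); got != 0 {
		t.Errorf("CapabilityVersion(bogus) = %d; want 0", got)
	}
}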
================================================
FILE: hscontrol/capver/capver_test_data.go
================================================
package capver
// Generated DO NOT EDIT
import "tailscale.com/tailcfg"
var tailscaleLatestMajorMinorTests = []struct {
n int
stripV bool
expected []string
}{
{3, false, []string{"v1.90", "v1.92", "v1.94"}},
{2, true, []string{"1.92", "1.94"}},
{10, true, []string{
"1.76",
"1.78",
"1.80",
"1.82",
"1.84",
"1.86",
"1.88",
"1.90",
"1.92",
"1.94",
}},
{0, false, nil},
}
var capVerMinimumTailscaleVersionTests = []struct {
input tailcfg.CapabilityVersion
expected string
}{
{106, "v1.74"},
{32, "v1.24"},
{41, "v1.30"},
{46, "v1.32"},
{51, "v1.34"},
{9001, ""}, // Test case for a version higher than any in the map
{60, ""}, // Test case for a version lower than any in the map
}
================================================
FILE: hscontrol/db/api_key.go
================================================
package db
import (
"errors"
"fmt"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"golang.org/x/crypto/bcrypt"
"gorm.io/gorm"
)
const (
apiKeyPrefix = "hskey-api-" //nolint:gosec // This is a prefix, not a credential
apiKeyPrefixLength = 12
apiKeyHashLength = 64
// Legacy format constants.
legacyAPIPrefixLength = 7
legacyAPIKeyLength = 32
)
var (
ErrAPIKeyFailedToParse = errors.New("failed to parse ApiKey")
ErrAPIKeyGenerationFailed = errors.New("failed to generate API key")
ErrAPIKeyInvalidGeneration = errors.New("generated API key failed validation")
)
// CreateAPIKey creates a new ApiKey and returns it.
func (hsdb *HSDatabase) CreateAPIKey(
expiration *time.Time,
) (string, *types.APIKey, error) {
// Generate public prefix (12 chars)
prefix, err := util.GenerateRandomStringURLSafe(apiKeyPrefixLength)
if err != nil {
return "", nil, err
}
// Validate prefix
if len(prefix) != apiKeyPrefixLength {
return "", nil, fmt.Errorf("%w: generated prefix has invalid length: expected %d, got %d", ErrAPIKeyInvalidGeneration, apiKeyPrefixLength, len(prefix))
}
if !isValidBase64URLSafe(prefix) {
return "", nil, fmt.Errorf("%w: generated prefix contains invalid characters", ErrAPIKeyInvalidGeneration)
}
// Generate secret (64 chars)
secret, err := util.GenerateRandomStringURLSafe(apiKeyHashLength)
if err != nil {
return "", nil, err
}
// Validate secret
if len(secret) != apiKeyHashLength {
return "", nil, fmt.Errorf("%w: generated secret has invalid length: expected %d, got %d", ErrAPIKeyInvalidGeneration, apiKeyHashLength, len(secret))
}
if !isValidBase64URLSafe(secret) {
return "", nil, fmt.Errorf("%w: generated secret contains invalid characters", ErrAPIKeyInvalidGeneration)
}
// Full key string (shown ONCE to user)
keyStr := apiKeyPrefix + prefix + "-" + secret
// bcrypt hash of secret
hash, err := bcrypt.GenerateFromPassword([]byte(secret), bcrypt.DefaultCost)
if err != nil {
return "", nil, err
}
key := types.APIKey{
Prefix: prefix,
Hash: hash,
Expiration: expiration,
}
if err := hsdb.DB.Save(&key).Error; err != nil { //nolint:noinlineerr
return "", nil, fmt.Errorf("saving API key to database: %w", err)
}
return keyStr, &key, nil
}
// ListAPIKeys returns the list of ApiKeys.
func (hsdb *HSDatabase) ListAPIKeys() ([]types.APIKey, error) {
keys := []types.APIKey{}
err := hsdb.DB.Find(&keys).Error
if err != nil {
return nil, err
}
return keys, nil
}
// GetAPIKey returns an ApiKey for a given prefix.
func (hsdb *HSDatabase) GetAPIKey(prefix string) (*types.APIKey, error) {
key := types.APIKey{}
if result := hsdb.DB.First(&key, "prefix = ?", prefix); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// GetAPIKeyByID returns an ApiKey for a given id.
func (hsdb *HSDatabase) GetAPIKeyByID(id uint64) (*types.APIKey, error) {
	key := types.APIKey{}
	if result := hsdb.DB.First(&key, "id = ?", id); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// DestroyAPIKey destroys an ApiKey. Returns an error if the ApiKey
// does not exist.
func (hsdb *HSDatabase) DestroyAPIKey(key types.APIKey) error {
if result := hsdb.DB.Unscoped().Delete(key); result.Error != nil {
return result.Error
}
return nil
}
// ExpireAPIKey marks an ApiKey as expired.
func (hsdb *HSDatabase) ExpireAPIKey(key *types.APIKey) error {
err := hsdb.DB.Model(&key).Update("Expiration", time.Now()).Error
if err != nil {
return err
}
return nil
}
// ValidateAPIKey checks that an API key matches its stored hash and has not expired.
func (hsdb *HSDatabase) ValidateAPIKey(keyStr string) (bool, error) {
key, err := validateAPIKey(hsdb.DB, keyStr)
if err != nil {
return false, err
}
if key.Expiration != nil && key.Expiration.Before(time.Now()) {
return false, nil
}
return true, nil
}
// ParseAPIKeyPrefix extracts the database prefix from a display prefix.
// Handles formats: "hskey-api-{12chars}-***", "hskey-api-{12chars}", or just "{12chars}".
// Returns the 12-character prefix suitable for database lookup.
func ParseAPIKeyPrefix(displayPrefix string) (string, error) {
// If it's already just the 12-character prefix, return it
if len(displayPrefix) == apiKeyPrefixLength && isValidBase64URLSafe(displayPrefix) {
return displayPrefix, nil
}
// If it starts with the API key prefix, parse it
if strings.HasPrefix(displayPrefix, apiKeyPrefix) {
// Remove the "hskey-api-" prefix
_, remainder, found := strings.Cut(displayPrefix, apiKeyPrefix)
if !found {
return "", fmt.Errorf("%w: invalid display prefix format", ErrAPIKeyFailedToParse)
}
// Extract just the first 12 characters (the actual prefix)
if len(remainder) < apiKeyPrefixLength {
return "", fmt.Errorf("%w: prefix too short", ErrAPIKeyFailedToParse)
}
prefix := remainder[:apiKeyPrefixLength]
// Validate it's base64 URL-safe
if !isValidBase64URLSafe(prefix) {
return "", fmt.Errorf("%w: prefix contains invalid characters", ErrAPIKeyFailedToParse)
}
return prefix, nil
}
// For legacy 7-character prefixes or other formats, return as-is
return displayPrefix, nil
}
// validateAPIKey validates an API key and returns the key if valid.
// Handles both new (hskey-api-{prefix}-{secret}) and legacy (prefix.secret) formats.
func validateAPIKey(db *gorm.DB, keyStr string) (*types.APIKey, error) {
// Validate input is not empty
if keyStr == "" {
return nil, ErrAPIKeyFailedToParse
}
// Check for new format: hskey-api-{prefix}-{secret}
_, prefixAndSecret, found := strings.Cut(keyStr, apiKeyPrefix)
if !found {
// Legacy format: prefix.secret
return validateLegacyAPIKey(db, keyStr)
}
// New format: parse and verify
const expectedMinLength = apiKeyPrefixLength + 1 + apiKeyHashLength
if len(prefixAndSecret) < expectedMinLength {
return nil, fmt.Errorf(
"%w: key too short, expected at least %d chars after prefix, got %d",
ErrAPIKeyFailedToParse,
expectedMinLength,
len(prefixAndSecret),
)
}
// Use fixed-length parsing
prefix := prefixAndSecret[:apiKeyPrefixLength]
// Validate separator at expected position
if prefixAndSecret[apiKeyPrefixLength] != '-' {
return nil, fmt.Errorf(
"%w: expected separator '-' at position %d, got '%c'",
ErrAPIKeyFailedToParse,
apiKeyPrefixLength,
prefixAndSecret[apiKeyPrefixLength],
)
}
secret := prefixAndSecret[apiKeyPrefixLength+1:]
// Validate secret length
if len(secret) != apiKeyHashLength {
return nil, fmt.Errorf(
"%w: secret length mismatch, expected %d chars, got %d",
ErrAPIKeyFailedToParse,
apiKeyHashLength,
len(secret),
)
}
// Validate prefix contains only base64 URL-safe characters
if !isValidBase64URLSafe(prefix) {
return nil, fmt.Errorf(
"%w: prefix contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)",
ErrAPIKeyFailedToParse,
)
}
// Validate secret contains only base64 URL-safe characters
if !isValidBase64URLSafe(secret) {
return nil, fmt.Errorf(
"%w: secret contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)",
ErrAPIKeyFailedToParse,
)
}
// Look up by prefix (indexed)
var key types.APIKey
err := db.First(&key, "prefix = ?", prefix).Error
if err != nil {
return nil, fmt.Errorf("API key not found: %w", err)
}
// Verify bcrypt hash
err = bcrypt.CompareHashAndPassword(key.Hash, []byte(secret))
if err != nil {
return nil, fmt.Errorf("invalid API key: %w", err)
}
return &key, nil
}
// validateLegacyAPIKey validates a legacy format API key (prefix.secret).
func validateLegacyAPIKey(db *gorm.DB, keyStr string) (*types.APIKey, error) {
// Legacy format uses "." as separator
prefix, secret, found := strings.Cut(keyStr, ".")
if !found {
return nil, ErrAPIKeyFailedToParse
}
// Legacy prefix is 7 chars
if len(prefix) != legacyAPIPrefixLength {
return nil, fmt.Errorf("%w: legacy prefix length mismatch", ErrAPIKeyFailedToParse)
}
var key types.APIKey
err := db.First(&key, "prefix = ?", prefix).Error
if err != nil {
return nil, fmt.Errorf("API key not found: %w", err)
}
// Verify bcrypt (key.Hash stores bcrypt of full secret)
err = bcrypt.CompareHashAndPassword(key.Hash, []byte(secret))
if err != nil {
return nil, fmt.Errorf("invalid API key: %w", err)
}
return &key, nil
}
================================================
FILE: hscontrol/db/api_key_test.go
================================================
package db
import (
"strings"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
)
func TestCreateAPIKey(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
apiKeyStr, apiKey, err := db.CreateAPIKey(nil)
require.NoError(t, err)
require.NotNil(t, apiKey)
// Did we get a valid key?
assert.NotNil(t, apiKey.Prefix)
assert.NotNil(t, apiKey.Hash)
assert.NotEmpty(t, apiKeyStr)
	keys, err := db.ListAPIKeys()
	require.NoError(t, err)
	assert.Len(t, keys, 1)
}
func TestAPIKeyDoesNotExist(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
key, err := db.GetAPIKey("does-not-exist")
require.Error(t, err)
assert.Nil(t, key)
}
func TestValidateAPIKeyOk(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
nowPlus2 := time.Now().Add(2 * time.Hour)
apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
require.NoError(t, err)
require.NotNil(t, apiKey)
valid, err := db.ValidateAPIKey(apiKeyStr)
require.NoError(t, err)
assert.True(t, valid)
}
func TestValidateAPIKeyNotOk(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
	nowMinus2 := time.Now().Add(-2 * time.Hour)
apiKeyStr, apiKey, err := db.CreateAPIKey(&nowMinus2)
require.NoError(t, err)
require.NotNil(t, apiKey)
valid, err := db.ValidateAPIKey(apiKeyStr)
require.NoError(t, err)
assert.False(t, valid)
now := time.Now()
apiKeyStrNow, apiKey, err := db.CreateAPIKey(&now)
require.NoError(t, err)
require.NotNil(t, apiKey)
validNow, err := db.ValidateAPIKey(apiKeyStrNow)
require.NoError(t, err)
assert.False(t, validNow)
validSilly, err := db.ValidateAPIKey("nota.validkey")
require.Error(t, err)
assert.False(t, validSilly)
validWithErr, err := db.ValidateAPIKey("produceerrorkey")
require.Error(t, err)
assert.False(t, validWithErr)
}
func TestExpireAPIKey(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
nowPlus2 := time.Now().Add(2 * time.Hour)
apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
require.NoError(t, err)
require.NotNil(t, apiKey)
valid, err := db.ValidateAPIKey(apiKeyStr)
require.NoError(t, err)
assert.True(t, valid)
err = db.ExpireAPIKey(apiKey)
require.NoError(t, err)
assert.NotNil(t, apiKey.Expiration)
notValid, err := db.ValidateAPIKey(apiKeyStr)
require.NoError(t, err)
assert.False(t, notValid)
}
func TestAPIKeyWithPrefix(t *testing.T) {
tests := []struct {
name string
test func(*testing.T, *HSDatabase)
}{
{
name: "new_key_with_prefix",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
keyStr, apiKey, err := db.CreateAPIKey(nil)
require.NoError(t, err)
// Verify format: hskey-api-{12-char-prefix}-{64-char-secret}
assert.True(t, strings.HasPrefix(keyStr, "hskey-api-"))
_, prefixAndSecret, found := strings.Cut(keyStr, "hskey-api-")
assert.True(t, found)
assert.GreaterOrEqual(t, len(prefixAndSecret), 12+1+64)
prefix := prefixAndSecret[:12]
assert.Len(t, prefix, 12)
assert.Equal(t, byte('-'), prefixAndSecret[12])
secret := prefixAndSecret[13:]
assert.Len(t, secret, 64)
// Verify stored fields
assert.Len(t, apiKey.Prefix, types.NewAPIKeyPrefixLength)
assert.NotNil(t, apiKey.Hash)
},
},
{
name: "new_key_can_be_retrieved",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
keyStr, createdKey, err := db.CreateAPIKey(nil)
require.NoError(t, err)
// Validate the created key
valid, err := db.ValidateAPIKey(keyStr)
require.NoError(t, err)
assert.True(t, valid)
// Verify prefix is correct length
assert.Len(t, createdKey.Prefix, types.NewAPIKeyPrefixLength)
},
},
{
name: "invalid_key_format_rejected",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
invalidKeys := []string{
"",
"hskey-api-short",
"hskey-api-ABCDEFGHIJKL-tooshort",
"hskey-api-ABC$EFGHIJKL-" + strings.Repeat("a", 64),
"hskey-api-ABCDEFGHIJKL" + strings.Repeat("a", 64), // missing separator
}
for _, invalidKey := range invalidKeys {
valid, err := db.ValidateAPIKey(invalidKey)
require.Error(t, err, "key should be rejected: %s", invalidKey)
assert.False(t, valid)
}
},
},
{
name: "legacy_key_still_works",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
// Insert legacy API key directly (7-char prefix + 32-char secret)
legacyPrefix := "abcdefg"
legacySecret := strings.Repeat("x", 32)
legacyKey := legacyPrefix + "." + legacySecret
hash, err := bcrypt.GenerateFromPassword([]byte(legacySecret), bcrypt.DefaultCost)
require.NoError(t, err)
now := time.Now()
err = db.DB.Exec(`
INSERT INTO api_keys (prefix, hash, created_at)
VALUES (?, ?, ?)
`, legacyPrefix, hash, now).Error
require.NoError(t, err)
// Validate legacy key
valid, err := db.ValidateAPIKey(legacyKey)
require.NoError(t, err)
assert.True(t, valid)
},
},
{
name: "wrong_secret_rejected",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
keyStr, _, err := db.CreateAPIKey(nil)
require.NoError(t, err)
// Tamper with the secret
_, prefixAndSecret, _ := strings.Cut(keyStr, "hskey-api-")
prefix := prefixAndSecret[:12]
tamperedKey := "hskey-api-" + prefix + "-" + strings.Repeat("x", 64)
valid, err := db.ValidateAPIKey(tamperedKey)
require.Error(t, err)
assert.False(t, valid)
},
},
{
name: "expired_key_rejected",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
// Create expired key
expired := time.Now().Add(-1 * time.Hour)
keyStr, _, err := db.CreateAPIKey(&expired)
require.NoError(t, err)
// Should fail validation
valid, err := db.ValidateAPIKey(keyStr)
require.NoError(t, err)
assert.False(t, valid)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
tt.test(t, db)
})
}
}
func TestGetAPIKeyByID(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
// Create an API key
_, apiKey, err := db.CreateAPIKey(nil)
require.NoError(t, err)
require.NotNil(t, apiKey)
// Retrieve by ID
retrievedKey, err := db.GetAPIKeyByID(apiKey.ID)
require.NoError(t, err)
require.NotNil(t, retrievedKey)
assert.Equal(t, apiKey.ID, retrievedKey.ID)
assert.Equal(t, apiKey.Prefix, retrievedKey.Prefix)
}
func TestGetAPIKeyByIDNotFound(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
// Try to get a non-existent key by ID
key, err := db.GetAPIKeyByID(99999)
require.Error(t, err)
assert.Nil(t, key)
}
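// TestParseAPIKeyPrefixFormats is an illustrative addition (not part of the
// original suite): it exercises the three display formats documented on
// ParseAPIKeyPrefix. The 12-character prefix value is arbitrary.
func TestParseAPIKeyPrefixFormats(t *testing.T) {
	const prefix = "AbCdEfGhIjKl" // 12 base64 URL-safe characters
	for _, input := range []string{
		prefix,                         // bare 12-character prefix
		"hskey-api-" + prefix,          // full display prefix
		"hskey-api-" + prefix + "-***", // redacted display form
	} {
		got, err := ParseAPIKeyPrefix(input)
		require.NoError(t, err, "input %q should parse", input)
		assert.Equal(t, prefix, got)
	}
}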
================================================
FILE: hscontrol/db/db.go
================================================
package db
import (
"context"
_ "embed"
"encoding/json"
"errors"
"fmt"
"net/netip"
"path/filepath"
"slices"
"strconv"
"time"
"github.com/glebarez/sqlite"
"github.com/go-gormigrate/gormigrate/v2"
"github.com/juanfont/headscale/hscontrol/db/sqliteconfig"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/tailscale/squibble"
"gorm.io/driver/postgres"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"gorm.io/gorm/schema"
"zgo.at/zcache/v2"
)
//go:embed schema.sql
var dbSchema string
func init() {
schema.RegisterSerializer("text", TextSerialiser{})
}
var errDatabaseNotSupported = errors.New("database type not supported")
var errForeignKeyConstraintsViolated = errors.New("foreign key constraints violated")
const (
maxIdleConns = 100
maxOpenConns = 100
contextTimeoutSecs = 10
)
type HSDatabase struct {
DB *gorm.DB
cfg *types.Config
regCache *zcache.Cache[types.AuthID, types.AuthRequest]
}
// NewHeadscaleDatabase creates a new database connection and runs migrations.
// It accepts the full configuration to allow migrations access to policy settings.
//
//nolint:gocyclo // complex database initialization with many migrations
func NewHeadscaleDatabase(
cfg *types.Config,
regCache *zcache.Cache[types.AuthID, types.AuthRequest],
) (*HSDatabase, error) {
dbConn, err := openDB(cfg.Database)
if err != nil {
return nil, err
}
err = checkVersionUpgradePath(dbConn)
if err != nil {
return nil, fmt.Errorf("version check: %w", err)
}
migrations := gormigrate.New(
dbConn,
gormigrate.DefaultOptions,
[]*gormigrate.Migration{
// New migrations must be added as transactions at the end of this list.
// Migrations start from v0.25.0. If upgrading from v0.24.x or earlier,
// you must first upgrade to v0.25.1 before upgrading to this version.
// v0.25.0
{
// Add a constraint to routes ensuring they cannot exist without a node.
ID: "202501221827",
Migrate: func(tx *gorm.DB) error {
// Remove any invalid routes associated with a node that does not exist.
if tx.Migrator().HasTable(&types.Route{}) && tx.Migrator().HasTable(&types.Node{}) { //nolint:staticcheck // SA1019: Route kept for migrations
err := tx.Exec("delete from routes where node_id not in (select id from nodes)").Error
if err != nil {
return err
}
}
// Remove any invalid routes without a node_id.
if tx.Migrator().HasTable(&types.Route{}) { //nolint:staticcheck // SA1019: Route kept for migrations
err := tx.Exec("delete from routes where node_id is null").Error
if err != nil {
return err
}
}
err := tx.AutoMigrate(&types.Route{}) //nolint:staticcheck // SA1019: Route kept for migrations
if err != nil {
return fmt.Errorf("automigrating types.Route: %w", err)
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
			// Add back the constraint so preauth keys that are still
			// used by a node cannot be deleted.
{
ID: "202501311657",
Migrate: func(tx *gorm.DB) error {
err := tx.AutoMigrate(&types.PreAuthKey{})
if err != nil {
return fmt.Errorf("automigrating types.PreAuthKey: %w", err)
}
err = tx.AutoMigrate(&types.Node{})
if err != nil {
return fmt.Errorf("automigrating types.Node: %w", err)
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
// Ensure there are no nodes referring to a deleted preauthkey.
{
ID: "202502070949",
Migrate: func(tx *gorm.DB) error {
if tx.Migrator().HasTable(&types.PreAuthKey{}) {
err := tx.Exec(`
UPDATE nodes
SET auth_key_id = NULL
WHERE auth_key_id IS NOT NULL
AND auth_key_id NOT IN (
SELECT id FROM pre_auth_keys
);
`).Error
if err != nil {
return fmt.Errorf("setting auth_key to null on nodes with non-existing keys: %w", err)
}
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
// v0.26.0
// Migrate all routes from the Route table to the new field ApprovedRoutes
// in the Node table. Then drop the Route table.
{
ID: "202502131714",
Migrate: func(tx *gorm.DB) error {
if !tx.Migrator().HasColumn(&types.Node{}, "approved_routes") {
err := tx.Migrator().AddColumn(&types.Node{}, "approved_routes")
if err != nil {
return fmt.Errorf("adding column types.Node: %w", err)
}
}
nodeRoutes := map[uint64][]netip.Prefix{}
var routes []types.Route //nolint:staticcheck // SA1019: Route kept for migrations
				err := tx.Find(&routes).Error
if err != nil {
return fmt.Errorf("fetching routes: %w", err)
}
for _, route := range routes {
if route.Enabled {
nodeRoutes[route.NodeID] = append(nodeRoutes[route.NodeID], route.Prefix)
}
}
for nodeID, routes := range nodeRoutes {
slices.SortFunc(routes, netip.Prefix.Compare)
routes = slices.Compact(routes)
data, _ := json.Marshal(routes)
err = tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("approved_routes", data).Error
if err != nil {
return fmt.Errorf("saving approved routes to new column: %w", err)
}
}
// Drop the old table.
_ = tx.Migrator().DropTable(&types.Route{}) //nolint:staticcheck // SA1019: Route kept for migrations
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
{
ID: "202502171819",
Migrate: func(tx *gorm.DB) error {
// This migration originally removed the last_seen column
// from the node table, but it was added back in
// 202505091439.
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
// Add back last_seen column to node table.
{
ID: "202505091439",
Migrate: func(tx *gorm.DB) error {
// Add back last_seen column to node table if it does not exist.
// This is a workaround for the fact that the last_seen column
// was removed in the 202502171819 migration, but only for some
// beta testers.
if !tx.Migrator().HasColumn(&types.Node{}, "last_seen") {
_ = tx.Migrator().AddColumn(&types.Node{}, "last_seen")
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
// Fix the provider identifier for users that have a double slash in the
// provider identifier.
{
ID: "202505141324",
Migrate: func(tx *gorm.DB) error {
users, err := ListUsers(tx)
if err != nil {
return fmt.Errorf("listing users: %w", err)
}
for _, user := range users {
user.ProviderIdentifier.String = types.CleanIdentifier(user.ProviderIdentifier.String)
err := tx.Save(user).Error
if err != nil {
return fmt.Errorf("saving user: %w", err)
}
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
// v0.27.0
// Schema migration to ensure all tables match the expected schema.
// This migration recreates all tables to match the exact structure in schema.sql,
// preserving all data during the process.
			// Only SQLite databases are migrated this way, for consistency.
{
ID: "202507021200",
Migrate: func(tx *gorm.DB) error {
// Only run on SQLite
if cfg.Database.Type != types.DatabaseSqlite {
log.Info().Msg("skipping schema migration on non-SQLite database")
return nil
}
log.Info().Msg("starting schema recreation with table renaming")
// Rename existing tables to _old versions
tablesToRename := []string{"users", "pre_auth_keys", "api_keys", "nodes", "policies"}
// Check if routes table exists and drop it (should have been migrated already)
var routesExists bool
err := tx.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='routes'").Row().Scan(&routesExists)
if err == nil && routesExists {
log.Info().Msg("dropping leftover routes table")
err := tx.Exec("DROP TABLE routes").Error
if err != nil {
return fmt.Errorf("dropping routes table: %w", err)
}
}
// Drop all indexes first to avoid conflicts
indexesToDrop := []string{
"idx_users_deleted_at",
"idx_provider_identifier",
"idx_name_provider_identifier",
"idx_name_no_provider_identifier",
"idx_api_keys_prefix",
"idx_policies_deleted_at",
}
for _, index := range indexesToDrop {
_ = tx.Exec("DROP INDEX IF EXISTS " + index).Error
}
for _, table := range tablesToRename {
// Check if table exists before renaming
var exists bool
err := tx.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name=?", table).Row().Scan(&exists)
if err != nil {
return fmt.Errorf("checking if table %s exists: %w", table, err)
}
if exists {
// Drop old table if it exists from previous failed migration
_ = tx.Exec("DROP TABLE IF EXISTS " + table + "_old").Error
// Rename current table to _old
err := tx.Exec("ALTER TABLE " + table + " RENAME TO " + table + "_old").Error
if err != nil {
return fmt.Errorf("renaming table %s to %s_old: %w", table, table, err)
}
}
}
// Create new tables with correct schema
tableCreationSQL := []string{
`CREATE TABLE users(
id integer PRIMARY KEY AUTOINCREMENT,
name text,
display_name text,
email text,
provider_identifier text,
provider text,
profile_pic_url text,
created_at datetime,
updated_at datetime,
deleted_at datetime
)`,
`CREATE TABLE pre_auth_keys(
id integer PRIMARY KEY AUTOINCREMENT,
key text,
user_id integer,
reusable numeric,
ephemeral numeric DEFAULT false,
used numeric DEFAULT false,
tags text,
expiration datetime,
created_at datetime,
CONSTRAINT fk_pre_auth_keys_user FOREIGN KEY(user_id) REFERENCES users(id) ON DELETE SET NULL
)`,
`CREATE TABLE api_keys(
id integer PRIMARY KEY AUTOINCREMENT,
prefix text,
hash blob,
expiration datetime,
last_seen datetime,
created_at datetime
)`,
`CREATE TABLE nodes(
id integer PRIMARY KEY AUTOINCREMENT,
machine_key text,
node_key text,
disco_key text,
endpoints text,
host_info text,
ipv4 text,
ipv6 text,
hostname text,
given_name varchar(63),
user_id integer,
register_method text,
forced_tags text,
auth_key_id integer,
last_seen datetime,
expiry datetime,
approved_routes text,
created_at datetime,
updated_at datetime,
deleted_at datetime,
CONSTRAINT fk_nodes_user FOREIGN KEY(user_id) REFERENCES users(id) ON DELETE CASCADE,
CONSTRAINT fk_nodes_auth_key FOREIGN KEY(auth_key_id) REFERENCES pre_auth_keys(id)
)`,
`CREATE TABLE policies(
id integer PRIMARY KEY AUTOINCREMENT,
data text,
created_at datetime,
updated_at datetime,
deleted_at datetime
)`,
}
for _, createSQL := range tableCreationSQL {
err := tx.Exec(createSQL).Error
if err != nil {
return fmt.Errorf("creating new table: %w", err)
}
}
// Copy data directly using SQL
dataCopySQL := []string{
`INSERT INTO users (id, name, display_name, email, provider_identifier, provider, profile_pic_url, created_at, updated_at, deleted_at)
SELECT id, name, display_name, email, provider_identifier, provider, profile_pic_url, created_at, updated_at, deleted_at
FROM users_old`,
`INSERT INTO pre_auth_keys (id, key, user_id, reusable, ephemeral, used, tags, expiration, created_at)
SELECT id, key, user_id, reusable, ephemeral, used, tags, expiration, created_at
FROM pre_auth_keys_old`,
`INSERT INTO api_keys (id, prefix, hash, expiration, last_seen, created_at)
SELECT id, prefix, hash, expiration, last_seen, created_at
FROM api_keys_old`,
`INSERT INTO nodes (id, machine_key, node_key, disco_key, endpoints, host_info, ipv4, ipv6, hostname, given_name, user_id, register_method, forced_tags, auth_key_id, last_seen, expiry, approved_routes, created_at, updated_at, deleted_at)
SELECT id, machine_key, node_key, disco_key, endpoints, host_info, ipv4, ipv6, hostname, given_name, user_id, register_method, forced_tags, auth_key_id, last_seen, expiry, approved_routes, created_at, updated_at, deleted_at
FROM nodes_old`,
`INSERT INTO policies (id, data, created_at, updated_at, deleted_at)
SELECT id, data, created_at, updated_at, deleted_at
FROM policies_old`,
}
for _, copySQL := range dataCopySQL {
err := tx.Exec(copySQL).Error
if err != nil {
return fmt.Errorf("copying data: %w", err)
}
}
// Create indexes
indexes := []string{
"CREATE INDEX idx_users_deleted_at ON users(deleted_at)",
`CREATE UNIQUE INDEX idx_provider_identifier ON users(
provider_identifier
) WHERE provider_identifier IS NOT NULL`,
`CREATE UNIQUE INDEX idx_name_provider_identifier ON users(
name,
provider_identifier
)`,
`CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(
name
) WHERE provider_identifier IS NULL`,
"CREATE UNIQUE INDEX idx_api_keys_prefix ON api_keys(prefix)",
"CREATE INDEX idx_policies_deleted_at ON policies(deleted_at)",
}
for _, indexSQL := range indexes {
err := tx.Exec(indexSQL).Error
if err != nil {
return fmt.Errorf("creating index: %w", err)
}
}
// Drop old tables only after everything succeeds
for _, table := range tablesToRename {
err := tx.Exec("DROP TABLE IF EXISTS " + table + "_old").Error
if err != nil {
log.Warn().Str("table", table+"_old").Err(err).Msg("failed to drop old table, but migration succeeded")
}
}
log.Info().Msg("schema recreation completed successfully")
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
// v0.27.1
{
// Drop all tables that are no longer in use but may still exist.
// They are potentially still present from broken migrations in the past.
ID: "202510311551",
Migrate: func(tx *gorm.DB) error {
for _, oldTable := range []string{"namespaces", "machines", "shared_machines", "kvs", "pre_auth_key_acl_tags", "routes"} {
err := tx.Migrator().DropTable(oldTable)
if err != nil {
log.Trace().Str("table", oldTable).
Err(err).
Msg("Error dropping old table, continuing...")
}
}
return nil
},
Rollback: func(tx *gorm.DB) error {
return nil
},
},
{
// Drop all indices that are no longer in use but may still exist.
// They are potentially still present from broken migrations in the past.
// They should all have been cleaned up by the db engine, but we are a bit
// conservative to ensure all our previous mess is cleaned up.
ID: "202511101554-drop-old-idx",
Migrate: func(tx *gorm.DB) error {
for _, oldIdx := range []struct{ name, table string }{
{"idx_namespaces_deleted_at", "namespaces"},
{"idx_routes_deleted_at", "routes"},
{"idx_shared_machines_deleted_at", "shared_machines"},
} {
err := tx.Migrator().DropIndex(oldIdx.table, oldIdx.name)
if err != nil {
log.Trace().
Str("index", oldIdx.name).
Str("table", oldIdx.table).
Err(err).
Msg("Error dropping old index, continuing...")
}
}
return nil
},
Rollback: func(tx *gorm.DB) error {
return nil
},
},
// Migrations **above** this point will be REMOVED in version **0.29.0**.
// This is to clean up a lot of old migrations that are seldom used
// and carry a lot of technical debt.
// Any new migrations should be added after the comment below and follow
// the rules it sets out.
// From this point, the following rules must be followed:
// - NEVER use gorm.AutoMigrate; write the exact migration steps needed.
//   AutoMigrate depends on the struct staying exactly the same, which it won't over time.
// - Never write migrations that require foreign keys to be disabled.
// - ALL errors in migrations must be handled properly.
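// An illustrative sketch (hypothetical ID and column, not a real migration)
// of the shape a compliant migration takes:
//
//	{
//	    ID: "202601011200-add-example-column",
//	    Migrate: func(tx *gorm.DB) error {
//	        err := tx.Exec("ALTER TABLE nodes ADD COLUMN example text").Error
//	        if err != nil {
//	            return fmt.Errorf("adding example column: %w", err)
//	        }
//	        return nil
//	    },
//	    Rollback: func(db *gorm.DB) error { return nil },
//	},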
{
// Add columns for prefix and hash for pre auth keys, implementing
// them with the same security model as api keys.
ID: "202511011637-preauthkey-bcrypt",
Migrate: func(tx *gorm.DB) error {
// Check and add prefix column if it doesn't exist
if !tx.Migrator().HasColumn(&types.PreAuthKey{}, "prefix") {
err := tx.Migrator().AddColumn(&types.PreAuthKey{}, "prefix")
if err != nil {
return fmt.Errorf("adding prefix column: %w", err)
}
}
// Check and add hash column if it doesn't exist
if !tx.Migrator().HasColumn(&types.PreAuthKey{}, "hash") {
err := tx.Migrator().AddColumn(&types.PreAuthKey{}, "hash")
if err != nil {
return fmt.Errorf("adding hash column: %w", err)
}
}
// Create partial unique index to allow multiple legacy keys (NULL/empty prefix)
// while enforcing uniqueness for new bcrypt-based keys
err := tx.Exec("CREATE UNIQUE INDEX IF NOT EXISTS idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != ''").Error
if err != nil {
return fmt.Errorf("creating prefix index: %w", err)
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
{
ID: "202511122344-remove-newline-index",
Migrate: func(tx *gorm.DB) error {
// Reformat multi-line indexes to single-line for consistency
// This migration drops and recreates the three user identity indexes
// to match the single-line format expected by schema validation
// Drop existing multi-line indexes
dropIndexes := []string{
`DROP INDEX IF EXISTS idx_provider_identifier`,
`DROP INDEX IF EXISTS idx_name_provider_identifier`,
`DROP INDEX IF EXISTS idx_name_no_provider_identifier`,
}
for _, dropSQL := range dropIndexes {
err := tx.Exec(dropSQL).Error
if err != nil {
return fmt.Errorf("dropping index: %w", err)
}
}
// Recreate indexes in single-line format
createIndexes := []string{
`CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL`,
`CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier)`,
`CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL`,
}
for _, createSQL := range createIndexes {
err := tx.Exec(createSQL).Error
if err != nil {
return fmt.Errorf("creating index: %w", err)
}
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
{
// Rename forced_tags column to tags in nodes table.
// This must run after migration 202507021200, which creates the tables with forced_tags.
ID: "202511131445-node-forced-tags-to-tags",
Migrate: func(tx *gorm.DB) error {
// Rename the column from forced_tags to tags
err := tx.Migrator().RenameColumn(&types.Node{}, "forced_tags", "tags")
if err != nil {
return fmt.Errorf("renaming forced_tags to tags: %w", err)
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
{
// Migrate RequestTags from host_info JSON to tags column.
// In 0.27.x, tags from --advertise-tags (ValidTags) were stored only in
// host_info.RequestTags, not in the tags column (formerly forced_tags).
// This migration validates RequestTags against the policy's tagOwners
// and merges validated tags into the tags column.
// Fixes: https://github.com/juanfont/headscale/issues/3006
ID: "202601121700-migrate-hostinfo-request-tags",
Migrate: func(tx *gorm.DB) error {
// 1. Load policy from file or database based on configuration
policyData, err := PolicyBytes(tx, cfg)
if err != nil {
log.Warn().Err(err).Msg("failed to load policy, skipping RequestTags migration (tags will be validated on node reconnect)")
return nil
}
if len(policyData) == 0 {
log.Info().Msg("no policy found, skipping RequestTags migration (tags will be validated on node reconnect)")
return nil
}
// 2. Load users and nodes to create PolicyManager
users, err := ListUsers(tx)
if err != nil {
return fmt.Errorf("loading users for RequestTags migration: %w", err)
}
nodes, err := ListNodes(tx)
if err != nil {
return fmt.Errorf("loading nodes for RequestTags migration: %w", err)
}
// 3. Create PolicyManager (handles HuJSON parsing, groups, nested tags, etc.)
polMan, err := policy.NewPolicyManager(policyData, users, nodes.ViewSlice())
if err != nil {
log.Warn().Err(err).Msg("failed to parse policy, skipping RequestTags migration (tags will be validated on node reconnect)")
return nil
}
// 4. Process each node
for _, node := range nodes {
if node.Hostinfo == nil {
continue
}
requestTags := node.Hostinfo.RequestTags
if len(requestTags) == 0 {
continue
}
existingTags := node.Tags
var validatedTags, rejectedTags []string
nodeView := node.View()
for _, tag := range requestTags {
if polMan.NodeCanHaveTag(nodeView, tag) {
if !slices.Contains(existingTags, tag) {
validatedTags = append(validatedTags, tag)
}
} else {
rejectedTags = append(rejectedTags, tag)
}
}
if len(validatedTags) == 0 {
if len(rejectedTags) > 0 {
log.Debug().
EmbedObject(node).
Strs("rejected_tags", rejectedTags).
Msg("RequestTags rejected during migration (not authorized)")
}
continue
}
mergedTags := append(existingTags, validatedTags...)
slices.Sort(mergedTags)
mergedTags = slices.Compact(mergedTags)
tagsJSON, err := json.Marshal(mergedTags)
if err != nil {
return fmt.Errorf("serializing merged tags for node %d: %w", node.ID, err)
}
err = tx.Exec("UPDATE nodes SET tags = ? WHERE id = ?", string(tagsJSON), node.ID).Error
if err != nil {
return fmt.Errorf("updating tags for node %d: %w", node.ID, err)
}
log.Info().
EmbedObject(node).
Strs("validated_tags", validatedTags).
Strs("rejected_tags", rejectedTags).
Strs("existing_tags", existingTags).
Strs("merged_tags", mergedTags).
Msg("Migrated validated RequestTags from host_info to tags column")
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
{
// Clear user_id on tagged nodes.
// Tagged nodes are owned by their tags, not a user.
// Previously user_id was kept as "created by" tracking,
// but this prevents deleting users whose nodes have been
// tagged, and the ON DELETE CASCADE FK would destroy the
// tagged nodes if the user were deleted.
// Fixes: https://github.com/juanfont/headscale/issues/3077
ID: "202602201200-clear-tagged-node-user-id",
Migrate: func(tx *gorm.DB) error {
err := tx.Exec(`
UPDATE nodes
SET user_id = NULL
WHERE tags IS NOT NULL AND tags != '[]' AND tags != '';
`).Error
if err != nil {
return fmt.Errorf("clearing user_id on tagged nodes: %w", err)
}
return nil
},
Rollback: func(db *gorm.DB) error { return nil },
},
},
)
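// InitSchema is only executed by gormigrate when no migration has run
// before (i.e. a fresh database); existing databases are upgraded through
// the incremental migrations registered above instead.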
migrations.InitSchema(func(tx *gorm.DB) error {
// Create all tables using AutoMigrate
err := tx.AutoMigrate(
&types.User{},
&types.PreAuthKey{},
&types.APIKey{},
&types.Node{},
&types.Policy{},
)
if err != nil {
return err
}
// Drop all indexes (both GORM-created and potentially pre-existing ones)
// to ensure we can recreate them in the correct format
dropIndexes := []string{
`DROP INDEX IF EXISTS "idx_users_deleted_at"`,
`DROP INDEX IF EXISTS "idx_api_keys_prefix"`,
`DROP INDEX IF EXISTS "idx_policies_deleted_at"`,
`DROP INDEX IF EXISTS "idx_provider_identifier"`,
`DROP INDEX IF EXISTS "idx_name_provider_identifier"`,
`DROP INDEX IF EXISTS "idx_name_no_provider_identifier"`,
`DROP INDEX IF EXISTS "idx_pre_auth_keys_prefix"`,
}
for _, dropSQL := range dropIndexes {
err := tx.Exec(dropSQL).Error
if err != nil {
return err
}
}
// Recreate indexes without backticks to match schema.sql format
indexes := []string{
`CREATE INDEX idx_users_deleted_at ON users(deleted_at)`,
`CREATE UNIQUE INDEX idx_api_keys_prefix ON api_keys(prefix)`,
`CREATE INDEX idx_policies_deleted_at ON policies(deleted_at)`,
`CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL`,
`CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier)`,
`CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL`,
`CREATE UNIQUE INDEX idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != ''`,
}
for _, indexSQL := range indexes {
err := tx.Exec(indexSQL).Error
if err != nil {
return err
}
}
return nil
})
err = runMigrations(cfg.Database, dbConn, migrations)
if err != nil {
return nil, fmt.Errorf("migration failed: %w", err)
}
// Store the current version in the database after migrations succeed.
// Dev builds skip this to preserve the stored version for the next
// real versioned binary.
currentVersion := types.GetVersionInfo().Version
if !isDev(currentVersion) {
err = setDatabaseVersion(dbConn, currentVersion)
if err != nil {
return nil, fmt.Errorf(
"storing database version: %w",
err,
)
}
}
// Validate that the schema ends up in the expected state.
// This is currently only done on sqlite as squibble does not
// support Postgres and we use our sqlite schema as our source of
// truth.
if cfg.Database.Type == types.DatabaseSqlite {
sqlConn, err := dbConn.DB()
if err != nil {
return nil, fmt.Errorf("getting DB from gorm: %w", err)
}
// Raise the connection limits temporarily, or else the schema
// validation below can block on the single-connection pool.
sqlConn.SetMaxIdleConns(maxIdleConns)
sqlConn.SetMaxOpenConns(maxOpenConns)
defer sqlConn.SetMaxIdleConns(1)
defer sqlConn.SetMaxOpenConns(1)
ctx, cancel := context.WithTimeout(context.Background(), contextTimeoutSecs*time.Second)
defer cancel()
opts := squibble.DigestOptions{
IgnoreTables: []string{
// Litestream tables, these are inserted by
// litestream and not part of our schema
// https://litestream.io/how-it-works
"_litestream_lock",
"_litestream_seq",
},
}
if err := squibble.Validate(ctx, sqlConn, dbSchema, &opts); err != nil { //nolint:noinlineerr
return nil, fmt.Errorf("validating schema: %w", err)
}
}
db := HSDatabase{
DB: dbConn,
cfg: cfg,
regCache: regCache,
}
return &db, err
}
func openDB(cfg types.DatabaseConfig) (*gorm.DB, error) {
// TODO(kradalby): Integrate this with zerolog
var dbLogger logger.Interface
if cfg.Debug {
dbLogger = util.NewDBLogWrapper(&log.Logger, cfg.Gorm.SlowThreshold, cfg.Gorm.SkipErrRecordNotFound, cfg.Gorm.ParameterizedQueries)
} else {
dbLogger = logger.Default.LogMode(logger.Silent)
}
switch cfg.Type {
case types.DatabaseSqlite:
dir := filepath.Dir(cfg.Sqlite.Path)
err := util.EnsureDir(dir)
if err != nil {
return nil, fmt.Errorf("creating directory for sqlite: %w", err)
}
log.Info().
Str("database", types.DatabaseSqlite).
Str("path", cfg.Sqlite.Path).
Msg("Opening database")
// Build SQLite configuration with pragmas set at connection time
sqliteConfig := sqliteconfig.Default(cfg.Sqlite.Path)
if cfg.Sqlite.WriteAheadLog {
sqliteConfig.JournalMode = sqliteconfig.JournalModeWAL
sqliteConfig.WALAutocheckpoint = cfg.Sqlite.WALAutoCheckPoint
}
connectionURL, err := sqliteConfig.ToURL()
if err != nil {
return nil, fmt.Errorf("building sqlite connection URL: %w", err)
}
db, err := gorm.Open(
sqlite.Open(connectionURL),
&gorm.Config{
PrepareStmt: cfg.Gorm.PrepareStmt,
Logger: dbLogger,
},
)
if err != nil {
return nil, fmt.Errorf("opening sqlite database: %w", err)
}
// The pure Go SQLite library does not handle locking in
// the same way as the C based one and we can't use the gorm
// connection pool as of 2022/02/23.
sqlDB, err := db.DB()
if err != nil {
return nil, fmt.Errorf("getting sql.DB from gorm: %w", err)
}
sqlDB.SetMaxIdleConns(1)
sqlDB.SetMaxOpenConns(1)
sqlDB.SetConnMaxIdleTime(time.Hour)
return db, nil
case types.DatabasePostgres:
dbString := fmt.Sprintf(
"host=%s dbname=%s user=%s",
cfg.Postgres.Host,
cfg.Postgres.Name,
cfg.Postgres.User,
)
log.Info().
Str("database", types.DatabasePostgres).
Str("path", dbString).
Msg("Opening database")
if sslEnabled, err := strconv.ParseBool(cfg.Postgres.Ssl); err == nil { //nolint:noinlineerr
if !sslEnabled {
dbString += " sslmode=disable"
}
} else {
dbString += " sslmode=" + cfg.Postgres.Ssl
}
if cfg.Postgres.Port != 0 {
dbString += fmt.Sprintf(" port=%d", cfg.Postgres.Port)
}
if cfg.Postgres.Pass != "" {
dbString += " password=" + cfg.Postgres.Pass
}
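// With every branch above applied, the assembled DSN looks like, for
// example (illustrative values only):
//
//	host=localhost dbname=headscale user=headscale sslmode=disable port=5432 password=secret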
db, err := gorm.Open(postgres.Open(dbString), &gorm.Config{
Logger: dbLogger,
})
if err != nil {
return nil, err
}
sqlDB, err := db.DB()
if err != nil {
return nil, fmt.Errorf("getting sql.DB from gorm: %w", err)
}
sqlDB.SetMaxIdleConns(cfg.Postgres.MaxIdleConnections)
sqlDB.SetMaxOpenConns(cfg.Postgres.MaxOpenConnections)
sqlDB.SetConnMaxIdleTime(
time.Duration(cfg.Postgres.ConnMaxIdleTimeSecs) * time.Second,
)
return db, nil
}
return nil, fmt.Errorf(
"database of type %s is not supported: %w",
cfg.Type,
errDatabaseNotSupported,
)
}
func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormigrate.Gormigrate) error {
if cfg.Type == types.DatabaseSqlite {
// SQLite: Run migrations step-by-step, only disabling foreign keys when necessary
// List of migration IDs that require foreign keys to be disabled
// These are migrations that perform complex schema changes that GORM cannot handle safely with FK enabled
// NO NEW MIGRATIONS SHOULD BE ADDED HERE. ALL NEW MIGRATIONS MUST RUN WITH FOREIGN KEYS ENABLED.
migrationsRequiringFKDisabled := map[string]bool{
"202501221827": true, // Route table automigration with FK constraint issues
"202501311657": true, // PreAuthKey table automigration with FK constraint issues
// Add other migration IDs here as they are identified to need FK disabled
}
// Get the current foreign key status
var fkOriginallyEnabled int
if err := dbConn.Raw("PRAGMA foreign_keys").Scan(&fkOriginallyEnabled).Error; err != nil { //nolint:noinlineerr
return fmt.Errorf("checking foreign key status: %w", err)
}
// Get all migration IDs in order from the actual migration definitions.
// Only IDs that are in the migrationsRequiringFKDisabled map will be processed with FK disabled;
// any other new migrations are run afterwards.
migrationIDs := []string{
// v0.25.0
"202501221827",
"202501311657",
"202502070949",
// v0.26.0
"202502131714",
"202502171819",
"202505091439",
"202505141324",
// As of 2025-07-02, no new IDs should be added here.
// They will be run by the migrations.Migrate() call below.
}
for _, migrationID := range migrationIDs {
log.Trace().Caller().Str("migration_id", migrationID).Msg("running migration")
needsFKDisabled := migrationsRequiringFKDisabled[migrationID]
if needsFKDisabled {
// Disable foreign keys for this migration
err := dbConn.Exec("PRAGMA foreign_keys = OFF").Error
if err != nil {
return fmt.Errorf("disabling foreign keys for migration %s: %w", migrationID, err)
}
} else {
// Ensure foreign keys are enabled for this migration
err := dbConn.Exec("PRAGMA foreign_keys = ON").Error
if err != nil {
return fmt.Errorf("enabling foreign keys for migration %s: %w", migrationID, err)
}
}
// Run up to this specific migration (will only run the next pending migration)
err := migrations.MigrateTo(migrationID)
if err != nil {
return fmt.Errorf("running migration %s: %w", migrationID, err)
}
}
if err := dbConn.Exec("PRAGMA foreign_keys = ON").Error; err != nil { //nolint:noinlineerr
return fmt.Errorf("restoring foreign keys: %w", err)
}
// Run the rest of the migrations
if err := migrations.Migrate(); err != nil { //nolint:noinlineerr
return err
}
// Check for constraint violations at the end
type constraintViolation struct {
Table string
RowID int
Parent string
ConstraintIndex int
}
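// Each row returned by "PRAGMA foreign_key_check" scans into this struct:
// the violating table, the rowid of the offending row, the referenced
// parent table, and the index of the violated foreign key constraint.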
var violatedConstraints []constraintViolation
rows, err := dbConn.Raw("PRAGMA foreign_key_check").Rows()
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var violation constraintViolation
err := rows.Scan(&violation.Table, &violation.RowID, &violation.Parent, &violation.ConstraintIndex)
if err != nil {
return err
}
violatedConstraints = append(violatedConstraints, violation)
}
if err := rows.Err(); err != nil { //nolint:noinlineerr
return err
}
if len(violatedConstraints) > 0 {
for _, violation := range violatedConstraints {
log.Error().
Str("table", violation.Table).
Int("row_id", violation.RowID).
Str("parent", violation.Parent).
Msg("Foreign key constraint violated")
}
return errForeignKeyConstraintsViolated
}
} else {
// PostgreSQL can run all migrations in one block - no foreign key issues
err := migrations.Migrate()
if err != nil {
return err
}
}
return nil
}
func (hsdb *HSDatabase) PingDB(ctx context.Context) error {
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
sqlDB, err := hsdb.DB.DB()
if err != nil {
return err
}
return sqlDB.PingContext(ctx)
}
func (hsdb *HSDatabase) Close() error {
db, err := hsdb.DB.DB()
if err != nil {
return err
}
if hsdb.cfg.Database.Type == types.DatabaseSqlite && hsdb.cfg.Database.Sqlite.WriteAheadLog {
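// VACUUM rebuilds the database file, reclaiming unused pages so the
// on-disk file is compact before it is closed.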
db.Exec("VACUUM") //nolint:errcheck,noctx
}
return db.Close()
}
func (hsdb *HSDatabase) Read(fn func(rx *gorm.DB) error) error {
rx := hsdb.DB.Begin()
defer rx.Rollback()
return fn(rx)
}
func Read[T any](db *gorm.DB, fn func(rx *gorm.DB) (T, error)) (T, error) {
rx := db.Begin()
defer rx.Rollback()
ret, err := fn(rx)
if err != nil {
var no T
return no, err
}
return ret, nil
}
func (hsdb *HSDatabase) Write(fn func(tx *gorm.DB) error) error {
tx := hsdb.DB.Begin()
defer tx.Rollback()
err := fn(tx)
if err != nil {
return err
}
return tx.Commit().Error
}
func Write[T any](db *gorm.DB, fn func(tx *gorm.DB) (T, error)) (T, error) {
tx := db.Begin()
defer tx.Rollback()
ret, err := fn(tx)
if err != nil {
var no T
return no, err
}
return ret, tx.Commit().Error
}
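// Usage sketch for the generic transaction helpers (illustrative only),
// mirroring how the tests call them:
//
//	users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) {
//	    return ListUsers(rx)
//	})
//
// Write behaves the same but commits on success instead of rolling back.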
================================================
FILE: hscontrol/db/db_test.go
================================================
package db
import (
"context"
"database/sql"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
"zgo.at/zcache/v2"
)
// TestSQLiteMigrationAndDataValidation tests specific SQLite migration scenarios
// and validates data integrity after migration. All migrations that require data validation
// should be added here.
func TestSQLiteMigrationAndDataValidation(t *testing.T) {
tests := []struct {
dbPath string
wantFunc func(*testing.T, *HSDatabase)
}{
// at 14:15:06 ❯ go run ./cmd/headscale preauthkeys list
// ID | Key | Reusable | Ephemeral | Used | Expiration | Created | Tags
// 1 | 09b28f.. | false | false | false | 2024-09-27 | 2024-09-27 | tag:derp
// 2 | 3112b9.. | false | false | false | 2024-09-27 | 2024-09-27 | tag:derp
{
dbPath: "testdata/sqlite/failing-node-preauth-constraint_dump.sql",
wantFunc: func(t *testing.T, hsdb *HSDatabase) {
t.Helper()
// Comprehensive data preservation validation for node-preauth constraint issue
// Expected data from dump: 1 user, 2 api_keys, 6 nodes
// Verify users data preservation
users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) {
return ListUsers(rx)
})
require.NoError(t, err)
assert.Len(t, users, 1, "should preserve the 1 user from the original schema")
// Verify api_keys data preservation
var apiKeyCount int
err = hsdb.DB.Raw("SELECT COUNT(*) FROM api_keys").Scan(&apiKeyCount).Error
require.NoError(t, err)
assert.Equal(t, 2, apiKeyCount, "should preserve all 2 api_keys from original schema")
// Verify nodes data preservation and field validation
nodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) {
return ListNodes(rx)
})
require.NoError(t, err)
assert.Len(t, nodes, 6, "should preserve all 6 nodes from original schema")
for _, node := range nodes {
assert.Falsef(t, node.MachineKey.IsZero(), "expected non zero machinekey")
assert.Contains(t, node.MachineKey.String(), "mkey:")
assert.Falsef(t, node.NodeKey.IsZero(), "expected non zero nodekey")
assert.Contains(t, node.NodeKey.String(), "nodekey:")
assert.Falsef(t, node.DiscoKey.IsZero(), "expected non zero discokey")
assert.Contains(t, node.DiscoKey.String(), "discokey:")
assert.Nil(t, node.AuthKey)
assert.Nil(t, node.AuthKeyID)
}
},
},
// Test for RequestTags migration (202601121700-migrate-hostinfo-request-tags)
// and forced_tags->tags rename migration (202511131445-node-forced-tags-to-tags)
//
// This test validates that:
// 1. The forced_tags column is renamed to tags
// 2. RequestTags from host_info are validated against policy tagOwners
// 3. Authorized tags are migrated to the tags column
// 4. Unauthorized tags are rejected
// 5. Existing tags are preserved
// 6. Group membership is evaluated for tag authorization
{
dbPath: "testdata/sqlite/request_tags_migration_test.sql",
wantFunc: func(t *testing.T, hsdb *HSDatabase) {
t.Helper()
nodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) {
return ListNodes(rx)
})
require.NoError(t, err)
require.Len(t, nodes, 7, "should have all 7 nodes")
// Helper to find node by hostname
findNode := func(hostname string) *types.Node {
for _, n := range nodes {
if n.Hostname == hostname {
return n
}
}
return nil
}
// Node 1: user1 has RequestTags for tag:server (authorized)
// Expected: tags = ["tag:server"]
node1 := findNode("node1")
require.NotNil(t, node1, "node1 should exist")
assert.Contains(t, node1.Tags, "tag:server", "node1 should have tag:server migrated from RequestTags")
// Node 2: user1 has RequestTags for tag:unauthorized (NOT authorized)
// Expected: tags = [] (unchanged)
node2 := findNode("node2")
require.NotNil(t, node2, "node2 should exist")
assert.Empty(t, node2.Tags, "node2 should have empty tags (unauthorized tag rejected)")
// Node 3: user2 has RequestTags for tag:client (authorized) + existing tag:existing
// Expected: tags = ["tag:client", "tag:existing"]
node3 := findNode("node3")
require.NotNil(t, node3, "node3 should exist")
assert.Contains(t, node3.Tags, "tag:client", "node3 should have tag:client migrated from RequestTags")
assert.Contains(t, node3.Tags, "tag:existing", "node3 should preserve existing tag")
// Node 4: user1 has RequestTags for tag:server which already exists
// Expected: tags = ["tag:server"] (no duplicates)
node4 := findNode("node4")
require.NotNil(t, node4, "node4 should exist")
assert.Equal(t, []string{"tag:server"}, node4.Tags, "node4 should have tag:server without duplicates")
// Node 5: user2 has no RequestTags
// Expected: tags = [] (unchanged)
node5 := findNode("node5")
require.NotNil(t, node5, "node5 should exist")
assert.Empty(t, node5.Tags, "node5 should have empty tags (no RequestTags)")
// Node 6: admin1 has RequestTags for tag:admin (authorized via group:admins)
// Expected: tags = ["tag:admin"]
node6 := findNode("node6")
require.NotNil(t, node6, "node6 should exist")
assert.Contains(t, node6.Tags, "tag:admin", "node6 should have tag:admin migrated via group membership")
// Node 7: user1 has RequestTags for tag:server (authorized) and tag:forbidden (unauthorized)
// Expected: tags = ["tag:server"] (only authorized tag)
node7 := findNode("node7")
require.NotNil(t, node7, "node7 should exist")
assert.Contains(t, node7.Tags, "tag:server", "node7 should have tag:server migrated")
assert.NotContains(t, node7.Tags, "tag:forbidden", "node7 should NOT have tag:forbidden (unauthorized)")
},
},
}
for _, tt := range tests {
t.Run(tt.dbPath, func(t *testing.T) {
if !strings.HasSuffix(tt.dbPath, ".sql") {
t.Fatalf("TestSQLiteMigrationAndDataValidation only supports .sql files, got: %s", tt.dbPath)
}
hsdb := dbForTestWithPath(t, tt.dbPath)
if tt.wantFunc != nil {
tt.wantFunc(t, hsdb)
}
})
}
}
func emptyCache() *zcache.Cache[types.AuthID, types.AuthRequest] {
return zcache.New[types.AuthID, types.AuthRequest](time.Minute, time.Hour)
}
func createSQLiteFromSQLFile(sqlFilePath, dbPath string) error {
db, err := sql.Open("sqlite", dbPath)
if err != nil {
return err
}
defer db.Close()
schemaContent, err := os.ReadFile(sqlFilePath)
if err != nil {
return err
}
_, err = db.ExecContext(context.Background(), string(schemaContent))
return err
}
// requireConstraintFailed checks if the error is a constraint failure with
// either SQLite and PostgreSQL error messages.
func requireConstraintFailed(t *testing.T, err error) {
t.Helper()
require.Error(t, err)
if !strings.Contains(err.Error(), "UNIQUE constraint failed:") && !strings.Contains(err.Error(), "violates unique constraint") {
require.Failf(t, "expected error to contain a constraint failure, got: %s", err.Error())
}
}
func TestConstraints(t *testing.T) {
tests := []struct {
name string
run func(*testing.T, *gorm.DB)
}{
{
name: "no-duplicate-username-if-no-oidc",
run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
_, err := CreateUser(db, types.User{Name: "user1"})
require.NoError(t, err)
_, err = CreateUser(db, types.User{Name: "user1"})
requireConstraintFailed(t, err)
},
},
{
name: "no-oidc-duplicate-username-and-id",
run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
user := types.User{
Model: gorm.Model{ID: 1},
Name: "user1",
}
user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
err := db.Save(&user).Error
require.NoError(t, err)
user = types.User{
Model: gorm.Model{ID: 2},
Name: "user1",
}
user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
err = db.Save(&user).Error
requireConstraintFailed(t, err)
},
},
{
name: "no-oidc-duplicate-id",
run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
user := types.User{
Model: gorm.Model{ID: 1},
Name: "user1",
}
user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
err := db.Save(&user).Error
require.NoError(t, err)
user = types.User{
Model: gorm.Model{ID: 2},
Name: "user1.1",
}
user.ProviderIdentifier = sql.NullString{String: "http://test.com/user1", Valid: true}
err = db.Save(&user).Error
requireConstraintFailed(t, err)
},
},
{
name: "allow-duplicate-username-cli-then-oidc",
run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
_, err := CreateUser(db, types.User{Name: "user1"}) // Create CLI username
require.NoError(t, err)
user := types.User{
Name: "user1",
ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
}
err = db.Save(&user).Error
require.NoError(t, err)
},
},
{
name: "allow-duplicate-username-oidc-then-cli",
run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
user := types.User{
Name: "user1",
ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
}
err := db.Save(&user).Error
require.NoError(t, err)
_, err = CreateUser(db, types.User{Name: "user1"}) // Create CLI username
require.NoError(t, err)
},
},
}
for _, tt := range tests {
t.Run(tt.name+"-postgres", func(t *testing.T) {
db := newPostgresTestDB(t)
tt.run(t, db.DB.Debug())
})
t.Run(tt.name+"-sqlite", func(t *testing.T) {
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating database: %s", err)
}
tt.run(t, db.DB.Debug())
})
}
}
// TestPostgresMigrationAndDataValidation tests specific PostgreSQL migration scenarios
// and validates data integrity after migration. All migrations that require data validation
// should be added here.
//
// TODO(kradalby): Convert to use plain text SQL dumps instead of binary .pssql dumps for consistency
// with SQLite tests and easier version control.
func TestPostgresMigrationAndDataValidation(t *testing.T) {
tests := []struct {
name string
dbPath string
wantFunc func(*testing.T, *HSDatabase)
}{}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
u := newPostgresDBForTest(t)
pgRestorePath, err := exec.LookPath("pg_restore")
if err != nil {
t.Fatal("pg_restore not found in PATH. Please install it and ensure it is accessible.")
}
// Construct the pg_restore command
cmd := exec.CommandContext(context.Background(), pgRestorePath, "--verbose", "--if-exists", "--clean", "--no-owner", "--dbname", u.String(), tt.dbPath)
// Set the output streams
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
// Execute the command
err = cmd.Run()
if err != nil {
t.Fatalf("failed to restore postgres database: %s", err)
}
db := newHeadscaleDBFromPostgresURL(t, u)
if tt.wantFunc != nil {
tt.wantFunc(t, db)
}
})
}
}
func dbForTest(t *testing.T) *HSDatabase {
t.Helper()
return dbForTestWithPath(t, "")
}
func dbForTestWithPath(t *testing.T, sqlFilePath string) *HSDatabase {
t.Helper()
dbPath := t.TempDir() + "/headscale_test.db"
// If SQL file path provided, validate and create database from it
if sqlFilePath != "" {
// Validate that the file is a SQL text file
if !strings.HasSuffix(sqlFilePath, ".sql") {
t.Fatalf("dbForTestWithPath only accepts .sql files, got: %s", sqlFilePath)
}
err := createSQLiteFromSQLFile(sqlFilePath, dbPath)
if err != nil {
t.Fatalf("setting up database from SQL file %s: %s", sqlFilePath, err)
}
}
db, err := NewHeadscaleDatabase(
&types.Config{
Database: types.DatabaseConfig{
Type: "sqlite3",
Sqlite: types.SqliteConfig{
Path: dbPath,
},
},
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
},
emptyCache(),
)
if err != nil {
t.Fatalf("setting up database: %s", err)
}
if sqlFilePath != "" {
t.Logf("database set up from %s at: %s", sqlFilePath, dbPath)
} else {
t.Logf("database set up at: %s", dbPath)
}
return db
}
// TestSQLiteAllTestdataMigrations tests migration compatibility across all SQLite schemas
// in the testdata directory. It verifies they can be successfully migrated to the current
// schema version. This test only validates migration success, not data integrity.
//
// All test database files are SQL dumps (created with `sqlite3 headscale.db .dump`) generated
// with old Headscale binaries on empty databases (no user/node data). These dumps include the
// migration history in the `migrations` table, which allows the migration system to correctly
// skip already-applied migrations and only run new ones.
func TestSQLiteAllTestdataMigrations(t *testing.T) {
t.Parallel()
schemas, err := os.ReadDir("testdata/sqlite")
require.NoError(t, err)
t.Logf("loaded %d schemas", len(schemas))
for _, schema := range schemas {
if schema.IsDir() {
continue
}
t.Logf("validating: %s", schema.Name())
t.Run(schema.Name(), func(t *testing.T) {
t.Parallel()
dbPath := t.TempDir() + "/headscale_test.db"
// Setup a database with the old schema
schemaPath := filepath.Join("testdata/sqlite", schema.Name())
err := createSQLiteFromSQLFile(schemaPath, dbPath)
require.NoError(t, err)
_, err = NewHeadscaleDatabase(
&types.Config{
Database: types.DatabaseConfig{
Type: "sqlite3",
Sqlite: types.SqliteConfig{
Path: dbPath,
},
},
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
},
emptyCache(),
)
require.NoError(t, err)
})
}
}
================================================
FILE: hscontrol/db/ephemeral_garbage_collector_test.go
================================================
package db
import (
"runtime"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
)
const (
fiveHundred = 500 * time.Millisecond
oneHundred = 100 * time.Millisecond
fifty = 50 * time.Millisecond
)
// TestEphemeralGarbageCollectorGoRoutineLeak is a test for a goroutine leak in EphemeralGarbageCollector().
// It creates a new EphemeralGarbageCollector, schedules several nodes for deletion with a short expiry,
// and verifies that the nodes are deleted when the expiry time passes. It then
// checks for any leaked goroutines after the garbage collector is closed.
func TestEphemeralGarbageCollectorGoRoutineLeak(t *testing.T) {
// Count goroutines at the start
initialGoroutines := runtime.NumGoroutine()
t.Logf("Initial number of goroutines: %d", initialGoroutines)
// Basic deletion tracking mechanism
var (
deletedIDs []types.NodeID
deleteMutex sync.Mutex
deletionWg sync.WaitGroup
)
deleteFunc := func(nodeID types.NodeID) {
deleteMutex.Lock()
deletedIDs = append(deletedIDs, nodeID)
deleteMutex.Unlock()
deletionWg.Done()
}
// Start the GC
gc := NewEphemeralGarbageCollector(deleteFunc)
go gc.Start()
// Schedule several nodes for deletion with short expiry
const (
expiry = fifty
numNodes = 100
)
// Set up wait group for expected deletions
deletionWg.Add(numNodes)
for i := 1; i <= numNodes; i++ {
gc.Schedule(types.NodeID(i), expiry) //nolint:gosec // safe conversion in test
}
// Wait for all scheduled deletions to complete
deletionWg.Wait()
// Check nodes are deleted
deleteMutex.Lock()
assert.Len(t, deletedIDs, numNodes, "Not all nodes were deleted")
deleteMutex.Unlock()
// Schedule and immediately cancel to test that part of the code
for i := numNodes + 1; i <= numNodes*2; i++ {
nodeID := types.NodeID(i) //nolint:gosec // safe conversion in test
gc.Schedule(nodeID, time.Hour)
gc.Cancel(nodeID)
}
// Close GC
gc.Close()
// Wait for goroutines to clean up and verify no leaks
assert.EventuallyWithT(t, func(c *assert.CollectT) {
finalGoroutines := runtime.NumGoroutine()
// NB: We have to allow for a small number of extra goroutines because of the test itself
assert.LessOrEqual(c, finalGoroutines, initialGoroutines+5,
"There are significantly more goroutines after GC usage, which suggests a leak")
}, time.Second, 10*time.Millisecond, "goroutines should clean up after GC close")
t.Logf("Final number of goroutines: %d", runtime.NumGoroutine())
}
// TestEphemeralGarbageCollectorReschedule is a test for the rescheduling of nodes in EphemeralGarbageCollector().
// It creates a new EphemeralGarbageCollector, schedules a node for deletion with a longer expiry,
// and then reschedules it with a shorter expiry, and verifies that the node is deleted only once.
func TestEphemeralGarbageCollectorReschedule(t *testing.T) {
// Deletion tracking mechanism
var (
deletedIDs []types.NodeID
deleteMutex sync.Mutex
)
deletionNotifier := make(chan types.NodeID, 1)
deleteFunc := func(nodeID types.NodeID) {
deleteMutex.Lock()
deletedIDs = append(deletedIDs, nodeID)
deleteMutex.Unlock()
deletionNotifier <- nodeID
}
// Start GC
gc := NewEphemeralGarbageCollector(deleteFunc)
go gc.Start()
defer gc.Close()
const (
shortExpiry = fifty
longExpiry = 1 * time.Hour
)
nodeID := types.NodeID(1)
// Schedule node for deletion with long expiry
gc.Schedule(nodeID, longExpiry)
// Reschedule the same node with a shorter expiry
gc.Schedule(nodeID, shortExpiry)
// Wait for deletion notification with timeout
select {
case deletedNodeID := <-deletionNotifier:
assert.Equal(t, nodeID, deletedNodeID, "The correct node should be deleted")
case <-time.After(time.Second):
t.Fatal("Timed out waiting for node deletion")
}
// Verify that the node was deleted exactly once
deleteMutex.Lock()
assert.Len(t, deletedIDs, 1, "Node should be deleted exactly once")
assert.Equal(t, nodeID, deletedIDs[0], "The correct node should be deleted")
deleteMutex.Unlock()
}
// TestEphemeralGarbageCollectorCancelAndReschedule is a test for the cancellation and rescheduling of nodes in EphemeralGarbageCollector().
// It creates a new EphemeralGarbageCollector, schedules a node for deletion, cancels it, and then reschedules it,
// and verifies that the node is deleted only once.
func TestEphemeralGarbageCollectorCancelAndReschedule(t *testing.T) {
// Deletion tracking mechanism
var (
deletedIDs []types.NodeID
deleteMutex sync.Mutex
)
deletionNotifier := make(chan types.NodeID, 1)
deleteFunc := func(nodeID types.NodeID) {
deleteMutex.Lock()
deletedIDs = append(deletedIDs, nodeID)
deleteMutex.Unlock()
deletionNotifier <- nodeID
}
// Start the GC
gc := NewEphemeralGarbageCollector(deleteFunc)
go gc.Start()
defer gc.Close()
nodeID := types.NodeID(1)
const expiry = fifty
// Schedule node for deletion
gc.Schedule(nodeID, expiry)
// Cancel the scheduled deletion
gc.Cancel(nodeID)
// Use a timeout to verify no deletion occurred
select {
case <-deletionNotifier:
t.Fatal("Node was deleted after cancellation")
case <-time.After(expiry * 2): // Still need a timeout for negative test
// This is expected - no deletion should occur
}
deleteMutex.Lock()
assert.Empty(t, deletedIDs, "Node should not be deleted after cancellation")
deleteMutex.Unlock()
// Reschedule the node
gc.Schedule(nodeID, expiry)
// Wait for deletion with timeout
select {
case deletedNodeID := <-deletionNotifier:
// Verify the correct node was deleted
assert.Equal(t, nodeID, deletedNodeID, "The correct node should be deleted")
case <-time.After(time.Second): // Longer timeout as a safety net
t.Fatal("Timed out waiting for node deletion")
}
// Verify final state
deleteMutex.Lock()
assert.Len(t, deletedIDs, 1, "Node should be deleted after rescheduling")
assert.Equal(t, nodeID, deletedIDs[0], "The correct node should be deleted")
deleteMutex.Unlock()
}
// TestEphemeralGarbageCollectorCloseBeforeTimerFires is a test for the closing of the EphemeralGarbageCollector before the timer fires.
// It creates a new EphemeralGarbageCollector, schedules a node for deletion, closes the GC, and verifies that the node is not deleted.
func TestEphemeralGarbageCollectorCloseBeforeTimerFires(t *testing.T) {
// Deletion tracking
var (
deletedIDs []types.NodeID
deleteMutex sync.Mutex
)
deletionNotifier := make(chan types.NodeID, 1)
deleteFunc := func(nodeID types.NodeID) {
deleteMutex.Lock()
deletedIDs = append(deletedIDs, nodeID)
deleteMutex.Unlock()
deletionNotifier <- nodeID
}
// Start the GC
gc := NewEphemeralGarbageCollector(deleteFunc)
go gc.Start()
const (
longExpiry = 1 * time.Hour
shortWait = fifty * 2
)
// Schedule node deletion with a long expiry
gc.Schedule(types.NodeID(1), longExpiry)
// Close the GC before the timer fires
gc.Close()
// Verify that no deletion occurred within a reasonable time
select {
case <-deletionNotifier:
t.Fatal("Node was deleted after GC was closed, which should not happen")
case <-time.After(shortWait):
// Expected: no deletion should occur
}
// Verify that no deletion occurred
deleteMutex.Lock()
assert.Empty(t, deletedIDs, "No node should be deleted when GC is closed before timer fires")
deleteMutex.Unlock()
}
// TestEphemeralGarbageCollectorScheduleAfterClose verifies that calling Schedule after Close
// is a no-op and doesn't cause any panics, goroutine leaks, or other issues.
func TestEphemeralGarbageCollectorScheduleAfterClose(t *testing.T) {
// Count initial goroutines to check for leaks
initialGoroutines := runtime.NumGoroutine()
t.Logf("Initial number of goroutines: %d", initialGoroutines)
// Deletion tracking
var (
deletedIDs []types.NodeID
deleteMutex sync.Mutex
)
nodeDeleted := make(chan struct{})
deleteFunc := func(nodeID types.NodeID) {
deleteMutex.Lock()
deletedIDs = append(deletedIDs, nodeID)
deleteMutex.Unlock()
close(nodeDeleted) // Signal that deletion happened
}
// Start new GC
gc := NewEphemeralGarbageCollector(deleteFunc)
// Use a WaitGroup to ensure the GC has started
var startWg sync.WaitGroup
startWg.Add(1)
go func() {
startWg.Done() // Signal that the goroutine has started
gc.Start()
}()
startWg.Wait() // Wait for the GC to start
// Close GC right away
gc.Close()
// Now try to schedule node for deletion with a very short expiry
// If the Schedule operation incorrectly creates a timer, it would fire quickly
nodeID := types.NodeID(1)
gc.Schedule(nodeID, 1*time.Millisecond)
// Check if any node was deleted (which shouldn't happen)
// Use timeout to wait for potential deletion
select {
case <-nodeDeleted:
t.Fatal("Node was deleted after GC was closed, which should not happen")
case <-time.After(fiveHundred):
// This is the expected path - no deletion should occur
}
// Check no node was deleted
deleteMutex.Lock()
nodesDeleted := len(deletedIDs)
deleteMutex.Unlock()
assert.Equal(t, 0, nodesDeleted, "No nodes should be deleted when Schedule is called after Close")
// Check for goroutine leaks after GC is fully closed
assert.EventuallyWithT(t, func(c *assert.CollectT) {
finalGoroutines := runtime.NumGoroutine()
// Allow for small fluctuations in goroutine count from testing routines, etc.
assert.LessOrEqual(c, finalGoroutines, initialGoroutines+2,
"There should be no significant goroutine leaks when Schedule is called after Close")
}, time.Second, 10*time.Millisecond, "goroutines should clean up after GC close")
t.Logf("Final number of goroutines: %d", runtime.NumGoroutine())
}
// TestEphemeralGarbageCollectorConcurrentScheduleAndClose tests the behavior of the garbage collector
// when Schedule and Close are called concurrently from multiple goroutines.
func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) {
// Count initial goroutines
initialGoroutines := runtime.NumGoroutine()
t.Logf("Initial number of goroutines: %d", initialGoroutines)
// Deletion tracking mechanism
var (
deletedIDs []types.NodeID
deleteMutex sync.Mutex
)
deleteFunc := func(nodeID types.NodeID) {
deleteMutex.Lock()
deletedIDs = append(deletedIDs, nodeID)
deleteMutex.Unlock()
}
// Start the GC
gc := NewEphemeralGarbageCollector(deleteFunc)
go gc.Start()
// Number of concurrent scheduling goroutines
const (
numSchedulers = 10
nodesPerScheduler = 50
)
const closeAfterNodes = 25 // Close GC after this many nodes per scheduler
// Use WaitGroup to wait for all scheduling goroutines to finish
var wg sync.WaitGroup
wg.Add(numSchedulers + 1) // +1 for the closer goroutine
// Create a stopper channel to signal scheduling goroutines to stop
stopScheduling := make(chan struct{})
// Track how many nodes have been scheduled
var scheduledCount int64
// Launch goroutines that continuously schedule nodes
for schedulerIndex := range numSchedulers {
go func(schedulerID int) {
defer wg.Done()
baseNodeID := schedulerID * nodesPerScheduler
// Keep scheduling nodes until signaled to stop
for j := range nodesPerScheduler {
select {
case <-stopScheduling:
return
default:
nodeID := types.NodeID(baseNodeID + j + 1) //nolint:gosec // safe conversion in test
gc.Schedule(nodeID, 1*time.Hour) // Long expiry to ensure it doesn't trigger during test
atomic.AddInt64(&scheduledCount, 1)
// Yield to other goroutines to introduce variability
runtime.Gosched()
}
}
}(schedulerIndex)
}
// Close the garbage collector after some nodes have been scheduled
go func() {
defer wg.Done()
// Wait until enough nodes have been scheduled
for atomic.LoadInt64(&scheduledCount) < int64(numSchedulers*closeAfterNodes) {
runtime.Gosched()
}
// Close GC
gc.Close()
// Signal schedulers to stop
close(stopScheduling)
}()
// Wait for all goroutines to complete
wg.Wait()
// Check for leaks using EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
finalGoroutines := runtime.NumGoroutine()
// Allow for a reasonably small variation in goroutine count due to the test itself
assert.LessOrEqual(c, finalGoroutines, initialGoroutines+5,
"There should be no significant goroutine leaks during concurrent Schedule and Close operations")
}, time.Second, 10*time.Millisecond, "goroutines should clean up")
t.Logf("Final number of goroutines: %d", runtime.NumGoroutine())
}
================================================
FILE: hscontrol/db/ip.go
================================================
package db
import (
"crypto/rand"
"database/sql"
"errors"
"fmt"
"math/big"
"net/netip"
"sync"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"go4.org/netipx"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
)
var (
errGeneratedIPBytesInvalid = errors.New("generated ip bytes are invalid ip")
errGeneratedIPNotInPrefix = errors.New("generated ip not in prefix")
errIPAllocatorNil = errors.New("ip allocator was nil")
)
// IPAllocator is a singleton responsible for allocating
// IP addresses for nodes and making sure the same
// address is not handed out twice. There can only be one
// and it needs to be created before any other database
// writes occur.
type IPAllocator struct {
mu sync.Mutex
prefix4 *netip.Prefix
prefix6 *netip.Prefix
// Previous IPs handed out
prev4 netip.Addr
prev6 netip.Addr
// strategy used for handing out IP addresses.
strategy types.IPAllocationStrategy
// Set of all IPs handed out.
// This might not be in sync with the database,
// but it is more conservative. If saving to the
// database fails, the IP remains allocated here
// until the next restart of Headscale.
usedIPs netipx.IPSetBuilder
}
// NewIPAllocator returns a new IPAllocator singleton which
// can be used to hand out unique IP addresses within the
// provided IPv4 and IPv6 prefix. It needs to be created
// when headscale starts and needs to finish its read
// transaction before any writes to the database occur.
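// A usage sketch (illustrative only), mirroring the tests: construct the
// allocator once at startup, then hand out address pairs as nodes register:
//
//	alloc, err := NewIPAllocator(db, prefix4, prefix6, types.IPAllocationStrategySequential)
//	if err != nil { ... }
//	v4, v6, err := alloc.Next()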
func NewIPAllocator(
db *HSDatabase,
prefix4, prefix6 *netip.Prefix,
strategy types.IPAllocationStrategy,
) (*IPAllocator, error) {
ret := IPAllocator{
prefix4: prefix4,
prefix6: prefix6,
strategy: strategy,
}
var (
v4s []sql.NullString
v6s []sql.NullString
)
if db != nil {
err := db.Read(func(rx *gorm.DB) error {
return rx.Model(&types.Node{}).Pluck("ipv4", &v4s).Error
})
if err != nil {
return nil, fmt.Errorf("reading IPv4 addresses from database: %w", err)
}
err = db.Read(func(rx *gorm.DB) error {
return rx.Model(&types.Node{}).Pluck("ipv6", &v6s).Error
})
if err != nil {
return nil, fmt.Errorf("reading IPv6 addresses from database: %w", err)
}
}
var ips netipx.IPSetBuilder
// Add network and broadcast addrs to used pool so they
// are not handed out to nodes.
if prefix4 != nil {
network4, broadcast4 := util.GetIPPrefixEndpoints(*prefix4)
ips.Add(network4)
ips.Add(broadcast4)
// Use the network address as the starting point; .Next() will be called on it.
// TODO(kradalby): Could potentially take all the IPs loaded from
// the database into account to start at a more "educated" location.
ret.prev4 = network4
}
if prefix6 != nil {
network6, broadcast6 := util.GetIPPrefixEndpoints(*prefix6)
ips.Add(network6)
ips.Add(broadcast6)
ret.prev6 = network6
}
// Fetch all the IP Addresses currently handed out from the Database
// and add them to the used IP set.
for _, addrStr := range append(v4s, v6s...) {
if addrStr.Valid {
addr, err := netip.ParseAddr(addrStr.String)
if err != nil {
return nil, fmt.Errorf("parsing IP address from database: %w", err)
}
ips.Add(addr)
}
}
// Build the initial IPSet to validate that we can use it.
_, err := ips.IPSet()
if err != nil {
return nil, fmt.Errorf(
"building initial IP Set: %w",
err,
)
}
ret.usedIPs = ips
return &ret, nil
}
func (i *IPAllocator) Next() (*netip.Addr, *netip.Addr, error) {
i.mu.Lock()
defer i.mu.Unlock()
var (
err error
ret4 *netip.Addr
ret6 *netip.Addr
)
if i.prefix4 != nil {
ret4, err = i.next(i.prev4, i.prefix4)
if err != nil {
return nil, nil, fmt.Errorf("allocating IPv4 address: %w", err)
}
i.prev4 = *ret4
}
if i.prefix6 != nil {
ret6, err = i.next(i.prev6, i.prefix6)
if err != nil {
return nil, nil, fmt.Errorf("allocating IPv6 address: %w", err)
}
i.prev6 = *ret6
}
return ret4, ret6, nil
}
var ErrCouldNotAllocateIP = errors.New("failed to allocate IP")
func (i *IPAllocator) nextLocked(prev netip.Addr, prefix *netip.Prefix) (*netip.Addr, error) {
i.mu.Lock()
defer i.mu.Unlock()
return i.next(prev, prefix)
}
func (i *IPAllocator) next(prev netip.Addr, prefix *netip.Prefix) (*netip.Addr, error) {
var (
err error
ip netip.Addr
)
switch i.strategy {
case types.IPAllocationStrategySequential:
// Advance sequentially from the previously handed-out IP.
ip = prev.Next()
case types.IPAllocationStrategyRandom:
ip, err = randomNext(*prefix)
if err != nil {
return nil, fmt.Errorf("getting random IP: %w", err)
}
}
// TODO(kradalby): maybe this can be done less often.
set, err := i.usedIPs.IPSet()
if err != nil {
return nil, err
}
for {
if !prefix.Contains(ip) {
return nil, ErrCouldNotAllocateIP
}
// Check if the IP has already been allocated
// or if it is an IP reserved by Tailscale.
if set.Contains(ip) || isTailscaleReservedIP(ip) {
switch i.strategy {
case types.IPAllocationStrategySequential:
ip = ip.Next()
case types.IPAllocationStrategyRandom:
ip, err = randomNext(*prefix)
if err != nil {
return nil, fmt.Errorf("getting random IP: %w", err)
}
}
continue
}
i.usedIPs.Add(ip)
return &ip, nil
}
}
func randomNext(pfx netip.Prefix) (netip.Addr, error) {
rang := netipx.RangeOfPrefix(pfx)
fromIP, toIP := rang.From(), rang.To()
var from, to big.Int
from.SetBytes(fromIP.AsSlice())
to.SetBytes(toIP.AsSlice())
// Compute the size of the range: draw a random value in
// [0, to-from) and then add "from" back afterwards to map
// it into the prefix.
tempMax := big.NewInt(0).Sub(&to, &from)
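// Worked example (illustrative): for 100.64.0.0/10, from is 100.64.0.0 and
// to is 100.127.255.255, so tempMax is 2^22-1. rand.Int below returns a
// value in [0, tempMax), which the addition afterwards maps into [from, to).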
out, err := rand.Int(rand.Reader, tempMax)
if err != nil {
return netip.Addr{}, fmt.Errorf("generating random IP: %w", err)
}
valInRange := big.NewInt(0).Add(&from, out)
ip, ok := netip.AddrFromSlice(valInRange.Bytes())
if !ok {
return netip.Addr{}, errGeneratedIPBytesInvalid
}
if !pfx.Contains(ip) {
return netip.Addr{}, fmt.Errorf(
"%w: ip(%s) not in prefix(%s)",
errGeneratedIPNotInPrefix,
ip.String(),
pfx.String(),
)
}
return ip, nil
}
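// isTailscaleReservedIP reports whether ip is reserved by Tailscale and must
// never be handed out: the ChromeOS VM range (100.115.92.0/23) and the
// MagicDNS service IPs (100.100.100.100 and fd7a:115c:a1e0::53).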
func isTailscaleReservedIP(ip netip.Addr) bool {
return tsaddr.ChromeOSVMRange().Contains(ip) ||
tsaddr.TailscaleServiceIP() == ip ||
tsaddr.TailscaleServiceIPv6() == ip
}
// BackfillNodeIPs will take a database transaction, and
// iterate through all of the current nodes in headscale
// and ensure each has IP addresses according to the current
// configuration.
// This means that if both IPv4 and IPv6 are set in the
// config, and some nodes are missing that type of IP,
// it will be added.
// If a prefix type has been removed (IPv4 or IPv6), it
// will remove the IPs in that family from the node.
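// Usage sketch (illustrative only), typically run once at startup after the
// allocator has been constructed:
//
//	changes, err := db.BackfillNodeIPs(alloc)
//	for _, c := range changes {
//	    log.Info().Msg(c)
//	}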
func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
var (
err error
ret []string
)
err = db.Write(func(tx *gorm.DB) error {
if i == nil {
return fmt.Errorf("backfilling IPs: %w", errIPAllocatorNil)
}
log.Trace().Caller().Msgf("starting to backfill IPs")
nodes, err := ListNodes(tx)
if err != nil {
return fmt.Errorf("listing nodes to backfill IPs: %w", err)
}
for _, node := range nodes {
log.Trace().Caller().EmbedObject(node).Msg("ip backfill check started because node found in database")
changed := false
// IPv4 prefix is set, but node ip is missing, alloc
if i.prefix4 != nil && node.IPv4 == nil {
ret4, err := i.nextLocked(i.prev4, i.prefix4)
if err != nil {
return fmt.Errorf("allocating IPv4 for node(%d): %w", node.ID, err)
}
node.IPv4 = ret4
changed = true
ret = append(ret, fmt.Sprintf("assigned IPv4 %q to Node(%d) %q", ret4.String(), node.ID, node.Hostname))
}
// IPv6 prefix is set, but node ip is missing, alloc
if i.prefix6 != nil && node.IPv6 == nil {
ret6, err := i.nextLocked(i.prev6, i.prefix6)
if err != nil {
return fmt.Errorf("allocating IPv6 for node(%d): %w", node.ID, err)
}
node.IPv6 = ret6
changed = true
ret = append(ret, fmt.Sprintf("assigned IPv6 %q to Node(%d) %q", ret6.String(), node.ID, node.Hostname))
}
// IPv4 prefix is not set, but node has IP, remove
if i.prefix4 == nil && node.IPv4 != nil {
ret = append(ret, fmt.Sprintf("removing IPv4 %q from Node(%d) %q", node.IPv4.String(), node.ID, node.Hostname))
node.IPv4 = nil
changed = true
}
// IPv6 prefix is not set, but node has IP, remove
if i.prefix6 == nil && node.IPv6 != nil {
ret = append(ret, fmt.Sprintf("removing IPv6 %q from Node(%d) %q", node.IPv6.String(), node.ID, node.Hostname))
node.IPv6 = nil
changed = true
}
if changed {
// Use Updates() with Select() to only update IP fields, avoiding overwriting
// other fields like Expiry. We need Select() because Updates() alone skips
// zero values, but we DO want to update IPv4/IPv6 to nil when removing them.
// See issue #2862.
err := tx.Model(node).Select("ipv4", "ipv6").Updates(node).Error
if err != nil {
return fmt.Errorf("saving node(%d) after adding IPs: %w", node.ID, err)
}
}
}
return nil
})
return ret, err
}
func (i *IPAllocator) FreeIPs(ips []netip.Addr) {
i.mu.Lock()
defer i.mu.Unlock()
for _, ip := range ips {
i.usedIPs.Remove(ip)
}
}
================================================
FILE: hscontrol/db/ip_test.go
================================================
package db
import (
"fmt"
"net/netip"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/net/tsaddr"
)
var mpp = func(pref string) *netip.Prefix {
p := netip.MustParsePrefix(pref)
return &p
}
var na = netip.MustParseAddr
var nap = func(pref string) *netip.Addr {
n := na(pref)
return &n
}
func TestIPAllocatorSequential(t *testing.T) {
tests := []struct {
name string
dbFunc func() *HSDatabase
prefix4 *netip.Prefix
prefix6 *netip.Prefix
getCount int
want4 []netip.Addr
want6 []netip.Addr
}{
{
name: "simple",
dbFunc: func() *HSDatabase {
return nil
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 1,
want4: []netip.Addr{
na("100.64.0.1"),
},
want6: []netip.Addr{
na("fd7a:115c:a1e0::1"),
},
},
{
name: "simple-v4",
dbFunc: func() *HSDatabase {
return nil
},
prefix4: mpp("100.64.0.0/10"),
getCount: 1,
want4: []netip.Addr{
na("100.64.0.1"),
},
},
{
name: "simple-v6",
dbFunc: func() *HSDatabase {
return nil
},
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 1,
want6: []netip.Addr{
na("fd7a:115c:a1e0::1"),
},
},
{
name: "simple-with-db",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.1"),
IPv6: nap("fd7a:115c:a1e0::1"),
})
return db
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 1,
want4: []netip.Addr{
na("100.64.0.2"),
},
want6: []netip.Addr{
na("fd7a:115c:a1e0::2"),
},
},
{
name: "before-after-free-middle-in-db",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.2"),
IPv6: nap("fd7a:115c:a1e0::2"),
})
return db
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 2,
want4: []netip.Addr{
na("100.64.0.1"),
na("100.64.0.3"),
},
want6: []netip.Addr{
na("fd7a:115c:a1e0::1"),
na("fd7a:115c:a1e0::3"),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db := tt.dbFunc()
alloc, _ := NewIPAllocator(
db,
tt.prefix4,
tt.prefix6,
types.IPAllocationStrategySequential,
)
var (
got4s []netip.Addr
got6s []netip.Addr
)
for range tt.getCount {
got4, got6, err := alloc.Next()
if err != nil {
t.Fatalf("allocating next IP: %s", err)
}
if got4 != nil {
got4s = append(got4s, *got4)
}
if got6 != nil {
got6s = append(got6s, *got6)
}
}
if diff := cmp.Diff(tt.want4, got4s, util.Comparers...); diff != "" {
t.Errorf("IPAllocator 4s unexpected result (-want +got):\n%s", diff)
}
if diff := cmp.Diff(tt.want6, got6s, util.Comparers...); diff != "" {
t.Errorf("IPAllocator 6s unexpected result (-want +got):\n%s", diff)
}
})
}
}
func TestIPAllocatorRandom(t *testing.T) {
tests := []struct {
name string
dbFunc func() *HSDatabase
getCount int
prefix4 *netip.Prefix
prefix6 *netip.Prefix
want4 bool
want6 bool
}{
{
name: "simple",
dbFunc: func() *HSDatabase {
return nil
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 1,
want4: true,
want6: true,
},
{
name: "simple-v4",
dbFunc: func() *HSDatabase {
return nil
},
prefix4: mpp("100.64.0.0/10"),
getCount: 1,
want4: true,
want6: false,
},
{
name: "simple-v6",
dbFunc: func() *HSDatabase {
return nil
},
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 1,
want4: false,
want6: true,
},
{
name: "generate-lots-of-random",
dbFunc: func() *HSDatabase {
return nil
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
getCount: 1000,
want4: true,
want6: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db := tt.dbFunc()
alloc, _ := NewIPAllocator(db, tt.prefix4, tt.prefix6, types.IPAllocationStrategyRandom)
for range tt.getCount {
got4, got6, err := alloc.Next()
if err != nil {
t.Fatalf("allocating next IP: %s", err)
}
t.Logf("addrs ipv4: %v, ipv6: %v", got4, got6)
if tt.want4 {
if got4 == nil {
t.Fatalf("expected ipv4 addr, got nil")
}
}
if tt.want6 {
if got6 == nil {
t.Fatalf("expected ipv4 addr, got nil")
}
}
}
})
}
}
func TestBackfillIPAddresses(t *testing.T) {
fullNodeP := func(i int) *types.Node {
v4 := fmt.Sprintf("100.64.0.%d", i)
v6 := fmt.Sprintf("fd7a:115c:a1e0::%d", i)
return &types.Node{
IPv4: nap(v4),
IPv6: nap(v6),
}
}
tests := []struct {
name string
dbFunc func() *HSDatabase
prefix4 *netip.Prefix
prefix6 *netip.Prefix
want types.Nodes
}{
{
name: "simple-backfill-ipv6",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.1"),
})
return db
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
want: types.Nodes{
&types.Node{
IPv4: nap("100.64.0.1"),
IPv6: nap("fd7a:115c:a1e0::1"),
},
},
},
{
name: "simple-backfill-ipv4",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv6: nap("fd7a:115c:a1e0::1"),
})
return db
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
want: types.Nodes{
&types.Node{
IPv4: nap("100.64.0.1"),
IPv6: nap("fd7a:115c:a1e0::1"),
},
},
},
{
name: "simple-backfill-remove-ipv6",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.1"),
IPv6: nap("fd7a:115c:a1e0::1"),
})
return db
},
prefix4: mpp("100.64.0.0/10"),
want: types.Nodes{
&types.Node{
IPv4: nap("100.64.0.1"),
},
},
},
{
name: "simple-backfill-remove-ipv4",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.1"),
IPv6: nap("fd7a:115c:a1e0::1"),
})
return db
},
prefix6: mpp("fd7a:115c:a1e0::/48"),
want: types.Nodes{
&types.Node{
IPv6: nap("fd7a:115c:a1e0::1"),
},
},
},
{
name: "multi-backfill-ipv6",
dbFunc: func() *HSDatabase {
db := dbForTest(t)
user := types.User{Name: ""}
db.DB.Save(&user)
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.1"),
})
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.2"),
})
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.3"),
})
db.DB.Save(&types.Node{
User: &user,
IPv4: nap("100.64.0.4"),
})
return db
},
prefix4: mpp("100.64.0.0/10"),
prefix6: mpp("fd7a:115c:a1e0::/48"),
want: types.Nodes{
fullNodeP(1),
fullNodeP(2),
fullNodeP(3),
fullNodeP(4),
},
},
}
comps := append(util.Comparers, cmpopts.IgnoreFields(types.Node{},
"ID",
"User",
"UserID",
"Endpoints",
"Hostinfo",
"CreatedAt",
"UpdatedAt",
))
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db := tt.dbFunc()
alloc, err := NewIPAllocator(
db,
tt.prefix4,
tt.prefix6,
types.IPAllocationStrategySequential,
)
if err != nil {
t.Fatalf("failed to set up ip alloc: %s", err)
}
logs, err := db.BackfillNodeIPs(alloc)
if err != nil {
t.Fatalf("failed to backfill: %s", err)
}
t.Logf("backfill log: \n%s", strings.Join(logs, "\n"))
got, err := db.ListNodes()
if err != nil {
t.Fatalf("failed to get nodes: %s", err)
}
if diff := cmp.Diff(tt.want, got, comps...); diff != "" {
t.Errorf("Backfill unexpected result (-want +got):\n%s", diff)
}
})
}
}
func TestIPAllocatorNextNoReservedIPs(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
defer db.Close()
alloc, err := NewIPAllocator(
db,
ptr.To(tsaddr.CGNATRange()),
ptr.To(tsaddr.TailscaleULARange()),
types.IPAllocationStrategySequential,
)
if err != nil {
t.Fatalf("failed to set up ip alloc: %s", err)
}
// Validate that we do not give out 100.100.100.100
nextQuad100, err := alloc.next(na("100.100.100.99"), ptr.To(tsaddr.CGNATRange()))
require.NoError(t, err)
assert.Equal(t, na("100.100.100.101"), *nextQuad100)
// Validate that we do not give out fd7a:115c:a1e0::53
nextQuad100v6, err := alloc.next(na("fd7a:115c:a1e0::52"), ptr.To(tsaddr.TailscaleULARange()))
require.NoError(t, err)
assert.Equal(t, na("fd7a:115c:a1e0::54"), *nextQuad100v6)
// Validate that we skip over the ChromeOS VM range (100.115.92.0/23)
nextChrome, err := alloc.next(na("100.115.91.255"), ptr.To(tsaddr.CGNATRange()))
t.Logf("chrome: %s", nextChrome.String())
require.NoError(t, err)
assert.Equal(t, na("100.115.94.0"), *nextChrome)
}
================================================
FILE: hscontrol/db/main_test.go
================================================
package db
import (
"os"
"path/filepath"
"runtime"
"testing"
)
// TestMain ensures the working directory is set to the package source directory
// so that relative testdata/ paths resolve correctly when the test binary is
// executed from an arbitrary location (e.g., via "go tool stress").
func TestMain(m *testing.M) {
_, filename, _, ok := runtime.Caller(0)
if !ok {
panic("could not determine test source directory")
}
err := os.Chdir(filepath.Dir(filename))
if err != nil {
panic("could not chdir to test source directory: " + err.Error())
}
os.Exit(m.Run())
}
================================================
FILE: hscontrol/db/node.go
================================================
package db
import (
"encoding/json"
"errors"
"fmt"
"net/netip"
"regexp"
"slices"
"sort"
"strconv"
"strings"
"sync"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
"tailscale.com/types/key"
)
const (
NodeGivenNameHashLength = 8
NodeGivenNameTrimSize = 2
// defaultTestNodePrefix is the default hostname prefix for nodes created in tests.
defaultTestNodePrefix = "testnode"
)
// ErrNodeNameNotUnique is returned when a node name is not unique.
var ErrNodeNameNotUnique = errors.New("node name is not unique")
var invalidDNSRegex = regexp.MustCompile("[^a-z0-9-.]+")
var (
ErrNodeNotFound = errors.New("node not found")
ErrNodeRouteIsNotAvailable = errors.New("route is not available on node")
ErrNodeNotFoundRegistrationCache = errors.New(
"node not found in registration cache",
)
ErrCouldNotConvertNodeInterface = errors.New("failed to convert node interface")
)
// ListPeers returns the peers of a node, regardless of any policy or whether the node is expired.
// If no peer IDs are given, all peers are returned.
// If at least one peer ID is given, only those peer nodes will be returned.
func (hsdb *HSDatabase) ListPeers(nodeID types.NodeID, peerIDs ...types.NodeID) (types.Nodes, error) {
return ListPeers(hsdb.DB, nodeID, peerIDs...)
}
// ListPeers returns the peers of a node, regardless of any policy or whether the node is expired.
// If no peer IDs are given, all peers are returned.
// If at least one peer ID is given, only those peer nodes will be returned.
func ListPeers(tx *gorm.DB, nodeID types.NodeID, peerIDs ...types.NodeID) (types.Nodes, error) {
nodes := types.Nodes{}
err := tx.
Preload("AuthKey").
Preload("AuthKey.User").
Preload("User").
Where("id <> ?", nodeID).
Where(peerIDs).Find(&nodes).Error
if err != nil {
return types.Nodes{}, err
}
sort.Slice(nodes, func(i, j int) bool { return nodes[i].ID < nodes[j].ID })
return nodes, nil
}
// ListNodes queries the database for all nodes if no parameters are given,
// or for the given nodes if at least one node ID is passed as a parameter.
func (hsdb *HSDatabase) ListNodes(nodeIDs ...types.NodeID) (types.Nodes, error) {
return ListNodes(hsdb.DB, nodeIDs...)
}
// ListNodes queries the database for all nodes if no parameters are given,
// or for the given nodes if at least one node ID is passed as a parameter.
func ListNodes(tx *gorm.DB, nodeIDs ...types.NodeID) (types.Nodes, error) {
nodes := types.Nodes{}
err := tx.
Preload("AuthKey").
Preload("AuthKey.User").
Preload("User").
Where(nodeIDs).Find(&nodes).Error
if err != nil {
return nil, err
}
return nodes, nil
}
func (hsdb *HSDatabase) ListEphemeralNodes() (types.Nodes, error) {
return Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) {
nodes := types.Nodes{}
err := rx.Joins("AuthKey").Where(`"AuthKey"."ephemeral" = true`).Find(&nodes).Error
if err != nil {
return nil, err
}
return nodes, nil
})
}
func (hsdb *HSDatabase) getNode(uid types.UserID, name string) (*types.Node, error) {
return Read(hsdb.DB, func(rx *gorm.DB) (*types.Node, error) {
return getNode(rx, uid, name)
})
}
// getNode finds a Node by name and user and returns the Node struct.
func getNode(tx *gorm.DB, uid types.UserID, name string) (*types.Node, error) {
nodes, err := ListNodesByUser(tx, uid)
if err != nil {
return nil, err
}
for _, m := range nodes {
if m.Hostname == name {
return m, nil
}
}
return nil, ErrNodeNotFound
}
func (hsdb *HSDatabase) GetNodeByID(id types.NodeID) (*types.Node, error) {
return GetNodeByID(hsdb.DB, id)
}
// GetNodeByID finds a Node by ID and returns the Node struct.
func GetNodeByID(tx *gorm.DB, id types.NodeID) (*types.Node, error) {
mach := types.Node{}
if result := tx.
Preload("AuthKey").
Preload("AuthKey.User").
Preload("User").
Find(&types.Node{ID: id}).First(&mach); result.Error != nil {
return nil, result.Error
}
return &mach, nil
}
func (hsdb *HSDatabase) GetNodeByMachineKey(machineKey key.MachinePublic) (*types.Node, error) {
return GetNodeByMachineKey(hsdb.DB, machineKey)
}
// GetNodeByMachineKey finds a Node by its MachineKey and returns the Node struct.
func GetNodeByMachineKey(
tx *gorm.DB,
machineKey key.MachinePublic,
) (*types.Node, error) {
mach := types.Node{}
if result := tx.
Preload("AuthKey").
Preload("AuthKey.User").
Preload("User").
First(&mach, "machine_key = ?", machineKey.String()); result.Error != nil {
return nil, result.Error
}
return &mach, nil
}
func (hsdb *HSDatabase) GetNodeByNodeKey(nodeKey key.NodePublic) (*types.Node, error) {
return GetNodeByNodeKey(hsdb.DB, nodeKey)
}
// GetNodeByNodeKey finds a Node by its NodeKey and returns the Node struct.
func GetNodeByNodeKey(
tx *gorm.DB,
nodeKey key.NodePublic,
) (*types.Node, error) {
mach := types.Node{}
if result := tx.
Preload("AuthKey").
Preload("AuthKey.User").
Preload("User").
First(&mach, "node_key = ?", nodeKey.String()); result.Error != nil {
return nil, result.Error
}
return &mach, nil
}
func (hsdb *HSDatabase) SetTags(
nodeID types.NodeID,
tags []string,
) error {
return hsdb.Write(func(tx *gorm.DB) error {
return SetTags(tx, nodeID, tags)
})
}
// SetTags takes a NodeID and updates the forced tags.
// It will overwrite any existing tags with the new list.
func SetTags(
tx *gorm.DB,
nodeID types.NodeID,
tags []string,
) error {
if len(tags) == 0 {
// if no tags are provided, we remove all tags
err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("tags", "[]").Error
if err != nil {
return fmt.Errorf("removing tags: %w", err)
}
return nil
}
slices.Sort(tags)
tags = slices.Compact(tags)
b, err := json.Marshal(tags)
if err != nil {
return err
}
err = tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("tags", string(b)).Error
if err != nil {
return fmt.Errorf("updating tags: %w", err)
}
return nil
}
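// For example, SetTags(tx, id, []string{"tag:b", "tag:a", "tag:a"}) sorts and
// deduplicates the list before storing it, so the column ends up holding the
// JSON string ["tag:a","tag:b"].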
// SetApprovedRoutes takes a node ID and a list of routes and updates the node's approved routes.
func SetApprovedRoutes(
tx *gorm.DB,
nodeID types.NodeID,
routes []netip.Prefix,
) error {
if len(routes) == 0 {
// if no routes are provided, we remove all
err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("approved_routes", "[]").Error
if err != nil {
return fmt.Errorf("removing approved routes: %w", err)
}
return nil
}
// When approving exit routes, ensure both IPv4 and IPv6 are included
// If either 0.0.0.0/0 or ::/0 is being approved, both should be approved
hasIPv4Exit := slices.Contains(routes, tsaddr.AllIPv4())
hasIPv6Exit := slices.Contains(routes, tsaddr.AllIPv6())
if hasIPv4Exit && !hasIPv6Exit {
routes = append(routes, tsaddr.AllIPv6())
} else if hasIPv6Exit && !hasIPv4Exit {
routes = append(routes, tsaddr.AllIPv4())
}
b, err := json.Marshal(routes)
if err != nil {
return err
}
if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("approved_routes", string(b)).Error; err != nil { //nolint:noinlineerr
return fmt.Errorf("updating approved routes: %w", err)
}
return nil
}
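// pairExitRoutes is a standalone sketch of the exit-route pairing rule above
// and is not used elsewhere: Tailscale treats 0.0.0.0/0 and ::/0 as one
// logical "exit node" offer, so approving a single family would leave the
// exit node half-working.
func pairExitRoutes(routes []netip.Prefix) []netip.Prefix {
	hasV4 := slices.Contains(routes, tsaddr.AllIPv4())
	hasV6 := slices.Contains(routes, tsaddr.AllIPv6())
	switch {
	case hasV4 && !hasV6:
		return append(routes, tsaddr.AllIPv6())
	case hasV6 && !hasV4:
		return append(routes, tsaddr.AllIPv4())
	default:
		return routes
	}
}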
// SetLastSeen sets a node's last seen field indicating that we
// have recently communicated with this node.
func (hsdb *HSDatabase) SetLastSeen(nodeID types.NodeID, lastSeen time.Time) error {
return hsdb.Write(func(tx *gorm.DB) error {
return SetLastSeen(tx, nodeID, lastSeen)
})
}
// SetLastSeen sets a node's last seen field indicating that we
// have recently communicated with this node.
func SetLastSeen(tx *gorm.DB, nodeID types.NodeID, lastSeen time.Time) error {
return tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("last_seen", lastSeen).Error
}
// RenameNode takes a node ID and a new GivenName for the node
// and renames it. Validation should be done in the state layer before calling this function.
func RenameNode(tx *gorm.DB,
nodeID types.NodeID, newName string,
) error {
err := util.ValidateHostname(newName)
if err != nil {
return fmt.Errorf("renaming node: %w", err)
}
// Check if the new name is unique
var count int64
err = tx.Model(&types.Node{}).Where("given_name = ? AND id != ?", newName, nodeID).Count(&count).Error
if err != nil {
return fmt.Errorf("checking name uniqueness: %w", err)
}
if count > 0 {
return ErrNodeNameNotUnique
}
if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("given_name", newName).Error; err != nil { //nolint:noinlineerr
return fmt.Errorf("renaming node in database: %w", err)
}
return nil
}
func (hsdb *HSDatabase) NodeSetExpiry(nodeID types.NodeID, expiry *time.Time) error {
return hsdb.Write(func(tx *gorm.DB) error {
return NodeSetExpiry(tx, nodeID, expiry)
})
}
// NodeSetExpiry sets a new expiry time for a node.
// If expiry is nil, the node's expiry is disabled (node will never expire).
func NodeSetExpiry(tx *gorm.DB, nodeID types.NodeID, expiry *time.Time) error {
return tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("expiry", expiry).Error
}
func (hsdb *HSDatabase) DeleteNode(node *types.Node) error {
return hsdb.Write(func(tx *gorm.DB) error {
return DeleteNode(tx, node)
})
}
// DeleteNode deletes a Node from the database.
// The caller is responsible for notifying all parties of the change.
func DeleteNode(tx *gorm.DB,
node *types.Node,
) error {
// Unscoped causes the node to be fully removed from the database.
err := tx.Unscoped().Delete(&types.Node{}, node.ID).Error
if err != nil {
return err
}
return nil
}
// DeleteEphemeralNode deletes a Node from the database. Note that this
// method removes the node immediately, without notifying any changes or
// considering any routes. It is intended for ephemeral nodes.
func (hsdb *HSDatabase) DeleteEphemeralNode(
nodeID types.NodeID,
) error {
return hsdb.Write(func(tx *gorm.DB) error {
err := tx.Unscoped().Delete(&types.Node{}, nodeID).Error
if err != nil {
return err
}
return nil
})
}
// RegisterNodeForTest is used only for testing purposes to register a node directly in the database.
// Production code should use state.HandleNodeFromAuthPath or state.HandleNodeFromPreAuthKey.
func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *netip.Addr) (*types.Node, error) {
if !testing.Testing() {
panic("RegisterNodeForTest can only be called during tests")
}
logEvent := log.Debug().
Str(zf.NodeHostname, node.Hostname).
Str(zf.MachineKey, node.MachineKey.ShortString()).
Str(zf.NodeKey, node.NodeKey.ShortString())
if node.User != nil {
logEvent = logEvent.Str(zf.UserName, node.User.Username())
} else if node.UserID != nil {
logEvent = logEvent.Uint(zf.UserID, *node.UserID)
} else {
logEvent = logEvent.Str(zf.UserName, "none")
}
logEvent.Msg("registering test node")
// If a new node is registered with the same machine key, to the same user,
// update the existing node.
// If the same node is registered again, but to a new user, then that is considered
// a new node.
oldNode, _ := GetNodeByMachineKey(tx, node.MachineKey)
if oldNode != nil && oldNode.UserID == node.UserID {
node.ID = oldNode.ID
node.GivenName = oldNode.GivenName
node.ApprovedRoutes = oldNode.ApprovedRoutes
// Don't overwrite the provided IPs with old ones when they exist
if ipv4 == nil {
ipv4 = oldNode.IPv4
}
if ipv6 == nil {
ipv6 = oldNode.IPv6
}
}
// If the node exists and already has IP(s), we just save it
// so we store the node.Expiry and node.NodeKey that were set when
// adding it to the registrationCache.
if node.IPv4 != nil || node.IPv6 != nil {
err := tx.Save(&node).Error
if err != nil {
return nil, fmt.Errorf("registering existing node in database: %w", err)
}
log.Trace().
Caller().
Str(zf.NodeHostname, node.Hostname).
Str(zf.MachineKey, node.MachineKey.ShortString()).
Str(zf.NodeKey, node.NodeKey.ShortString()).
Str(zf.UserName, node.User.Username()).
Msg("Test node authorized again")
return &node, nil
}
node.IPv4 = ipv4
node.IPv6 = ipv6
var err error
node.Hostname, err = util.NormaliseHostname(node.Hostname)
if err != nil {
newHostname := util.InvalidString()
log.Info().Err(err).Str(zf.InvalidHostname, node.Hostname).Str(zf.NewHostname, newHostname).Msgf("invalid hostname, replacing")
node.Hostname = newHostname
}
if node.GivenName == "" {
givenName, err := EnsureUniqueGivenName(tx, node.Hostname)
if err != nil {
return nil, fmt.Errorf("ensuring unique given name: %w", err)
}
node.GivenName = givenName
}
if err := tx.Save(&node).Error; err != nil { //nolint:noinlineerr
return nil, fmt.Errorf("saving node to database: %w", err)
}
log.Trace().
Caller().
Str(zf.NodeHostname, node.Hostname).
Msg("Test node registered with the database")
return &node, nil
}
// NodeSetNodeKey sets the node key of a node and saves it to the database.
func NodeSetNodeKey(tx *gorm.DB, node *types.Node, nodeKey key.NodePublic) error {
return tx.Model(node).Updates(types.Node{
NodeKey: nodeKey,
}).Error
}
func (hsdb *HSDatabase) NodeSetMachineKey(
node *types.Node,
machineKey key.MachinePublic,
) error {
return hsdb.Write(func(tx *gorm.DB) error {
return NodeSetMachineKey(tx, node, machineKey)
})
}
// NodeSetMachineKey sets the machine key of a node and saves it to the database.
func NodeSetMachineKey(
tx *gorm.DB,
node *types.Node,
machineKey key.MachinePublic,
) error {
return tx.Model(node).Updates(types.Node{
MachineKey: machineKey,
}).Error
}
func generateGivenName(suppliedName string, randomSuffix bool) (string, error) {
// Strip invalid DNS characters for givenName
suppliedName = strings.ToLower(suppliedName)
suppliedName = invalidDNSRegex.ReplaceAllString(suppliedName, "")
if len(suppliedName) > util.LabelHostnameLength {
return "", types.ErrHostnameTooLong
}
if randomSuffix {
// Trim if a hostname will be longer than 63 chars after adding the hash.
trimmedHostnameLength := util.LabelHostnameLength - NodeGivenNameHashLength - NodeGivenNameTrimSize
if len(suppliedName) > trimmedHostnameLength {
suppliedName = suppliedName[:trimmedHostnameLength]
}
suffix, err := util.GenerateRandomStringDNSSafe(NodeGivenNameHashLength)
if err != nil {
return "", err
}
suppliedName += "-" + suffix
}
return suppliedName, nil
}
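// A worked example of the trim arithmetic in generateGivenName above: with a
// DNS label limit of 63, a hash of NodeGivenNameHashLength (8) and a trim
// size of NodeGivenNameTrimSize (2), any name longer than 63-8-2 = 53
// characters is cut to 53, so the final "<name>-<hash>" form is at most
// 53+1+8 = 62 characters and always fits within a DNS label.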
func isUniqueName(tx *gorm.DB, name string) (bool, error) {
nodes := types.Nodes{}
err := tx.
Where("given_name = ?", name).Find(&nodes).Error
if err != nil {
return false, err
}
return len(nodes) == 0, nil
}
// EnsureUniqueGivenName generates a unique given name for a node based on its hostname.
func EnsureUniqueGivenName(
tx *gorm.DB,
name string,
) (string, error) {
givenName, err := generateGivenName(name, false)
if err != nil {
return "", err
}
unique, err := isUniqueName(tx, givenName)
if err != nil {
return "", err
}
if !unique {
postfixedName, err := generateGivenName(name, true)
if err != nil {
return "", err
}
givenName = postfixedName
}
return givenName, nil
}
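// For example, if a node named "test" already exists, registering another
// "test" yields a given name like "test-1a2b3c4d": the plain name is tried
// first, and the random suffix is only appended on collision.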
// EphemeralGarbageCollector is a garbage collector that will delete nodes after
// a certain amount of time.
// It is used to delete ephemeral nodes that have disconnected and should be
// cleaned up.
type EphemeralGarbageCollector struct {
mu sync.Mutex
deleteFunc func(types.NodeID)
toBeDeleted map[types.NodeID]*time.Timer
deleteCh chan types.NodeID
cancelCh chan struct{}
}
// NewEphemeralGarbageCollector creates a new EphemeralGarbageCollector. It takes
// a deleteFunc that will be called when a scheduled node's deletion timer fires.
func NewEphemeralGarbageCollector(deleteFunc func(types.NodeID)) *EphemeralGarbageCollector {
return &EphemeralGarbageCollector{
toBeDeleted: make(map[types.NodeID]*time.Timer),
deleteCh: make(chan types.NodeID, 10),
cancelCh: make(chan struct{}),
deleteFunc: deleteFunc,
}
}
// Close stops the garbage collector.
func (e *EphemeralGarbageCollector) Close() {
e.mu.Lock()
defer e.mu.Unlock()
// Stop all timers
for _, timer := range e.toBeDeleted {
timer.Stop()
}
// Close the cancel channel to signal all goroutines to exit
close(e.cancelCh)
}
// Schedule schedules a node for deletion after the expiry duration.
// If the garbage collector is already closed, this is a no-op.
func (e *EphemeralGarbageCollector) Schedule(nodeID types.NodeID, expiry time.Duration) {
e.mu.Lock()
defer e.mu.Unlock()
// Don't schedule new timers if the garbage collector is already closed
select {
case <-e.cancelCh:
// The cancel channel is closed, meaning the GC is shutting down
// or already shut down, so we shouldn't schedule anything new
return
default:
// Continue with scheduling
}
// If a timer already exists for this node, stop it first
if oldTimer, exists := e.toBeDeleted[nodeID]; exists {
oldTimer.Stop()
}
timer := time.NewTimer(expiry)
e.toBeDeleted[nodeID] = timer
// Start a goroutine to handle the timer completion
go func() {
select {
case <-timer.C:
// This is to handle the situation where the GC is shutting down and
// we are trying to schedule a new node for deletion at the same time
// i.e. We don't want to send to deleteCh if the GC is shutting down
// So, we try to send to deleteCh, but also watch for cancelCh
select {
case e.deleteCh <- nodeID:
// Successfully sent to deleteCh
case <-e.cancelCh:
// GC is shutting down, don't send to deleteCh
return
}
case <-e.cancelCh:
// If the GC is closed, exit the goroutine
return
}
}()
}
// Cancel cancels the deletion of a node.
func (e *EphemeralGarbageCollector) Cancel(nodeID types.NodeID) {
e.mu.Lock()
defer e.mu.Unlock()
if timer, ok := e.toBeDeleted[nodeID]; ok {
timer.Stop()
delete(e.toBeDeleted, nodeID)
}
}
// Start starts the garbage collector.
func (e *EphemeralGarbageCollector) Start() {
for {
select {
case <-e.cancelCh:
return
case nodeID := <-e.deleteCh:
e.mu.Lock()
delete(e.toBeDeleted, nodeID)
e.mu.Unlock()
go e.deleteFunc(nodeID)
}
}
}
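// exampleEphemeralGCWiring is an illustrative sketch of how the collector is
// wired up; the 30-minute expiry is an assumption for the example, not a
// value used by this package.
func exampleEphemeralGCWiring(hsdb *HSDatabase, nodeID types.NodeID) {
	gc := NewEphemeralGarbageCollector(func(id types.NodeID) {
		// Deletion callbacks run on their own goroutine (see Start above).
		_ = hsdb.DeleteEphemeralNode(id)
	})
	go gc.Start()
	defer gc.Close()
	gc.Schedule(nodeID, 30*time.Minute) // arm deletion
	gc.Cancel(nodeID)                   // node reconnected; disarm
}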
func (hsdb *HSDatabase) CreateNodeForTest(user *types.User, hostname ...string) *types.Node {
if !testing.Testing() {
panic("CreateNodeForTest can only be called during tests")
}
if user == nil {
panic("CreateNodeForTest requires a valid user")
}
nodeName := defaultTestNodePrefix
if len(hostname) > 0 && hostname[0] != "" {
nodeName = hostname[0]
}
// Create a preauth key for the node
pak, err := hsdb.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
if err != nil {
panic(fmt.Sprintf("failed to create preauth key for test node: %v", err))
}
pakID := pak.ID
nodeKey := key.NewNode()
machineKey := key.NewMachine()
discoKey := key.NewDisco()
node := &types.Node{
MachineKey: machineKey.Public(),
NodeKey: nodeKey.Public(),
DiscoKey: discoKey.Public(),
Hostname: nodeName,
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakID,
}
err = hsdb.DB.Save(node).Error
if err != nil {
panic(fmt.Sprintf("failed to create test node: %v", err))
}
return node
}
func (hsdb *HSDatabase) CreateRegisteredNodeForTest(user *types.User, hostname ...string) *types.Node {
if !testing.Testing() {
panic("CreateRegisteredNodeForTest can only be called during tests")
}
node := hsdb.CreateNodeForTest(user, hostname...)
// Allocate IPs for the test node using the database's IP allocator
// This is a simplified allocation for testing - in production this would use State.ipAlloc
ipv4, ipv6, err := hsdb.allocateTestIPs(node.ID)
if err != nil {
panic(fmt.Sprintf("failed to allocate IPs for test node: %v", err))
}
var registeredNode *types.Node
err = hsdb.DB.Transaction(func(tx *gorm.DB) error {
var err error
registeredNode, err = RegisterNodeForTest(tx, *node, ipv4, ipv6)
return err
})
if err != nil {
panic(fmt.Sprintf("failed to register test node: %v", err))
}
return registeredNode
}
func (hsdb *HSDatabase) CreateNodesForTest(user *types.User, count int, hostnamePrefix ...string) []*types.Node {
if !testing.Testing() {
panic("CreateNodesForTest can only be called during tests")
}
if user == nil {
panic("CreateNodesForTest requires a valid user")
}
prefix := defaultTestNodePrefix
if len(hostnamePrefix) > 0 && hostnamePrefix[0] != "" {
prefix = hostnamePrefix[0]
}
nodes := make([]*types.Node, count)
for i := range count {
hostname := prefix + "-" + strconv.Itoa(i)
nodes[i] = hsdb.CreateNodeForTest(user, hostname)
}
return nodes
}
func (hsdb *HSDatabase) CreateRegisteredNodesForTest(user *types.User, count int, hostnamePrefix ...string) []*types.Node {
if !testing.Testing() {
panic("CreateRegisteredNodesForTest can only be called during tests")
}
if user == nil {
panic("CreateRegisteredNodesForTest requires a valid user")
}
prefix := defaultTestNodePrefix
if len(hostnamePrefix) > 0 && hostnamePrefix[0] != "" {
prefix = hostnamePrefix[0]
}
nodes := make([]*types.Node, count)
for i := range count {
hostname := prefix + "-" + strconv.Itoa(i)
nodes[i] = hsdb.CreateRegisteredNodeForTest(user, hostname)
}
return nodes
}
// allocateTestIPs allocates sequential test IPs for nodes during testing.
func (hsdb *HSDatabase) allocateTestIPs(nodeID types.NodeID) (*netip.Addr, *netip.Addr, error) {
if !testing.Testing() {
panic("allocateTestIPs can only be called during tests")
}
// Use simple sequential allocation for tests
// IPv4: 100.64.x.y (where x = nodeID/256, y = nodeID%256)
// IPv6: fd7a:115c:a1e0::hhll (high and low bytes packed into the final 16-bit group)
// This supports up to 65535 nodes
const (
maxTestNodes = 65535
ipv4ByteDivisor = 256
)
if nodeID > maxTestNodes {
return nil, nil, ErrCouldNotAllocateIP
}
// Split nodeID into high and low bytes for IPv4 (100.64.high.low)
highByte := byte(nodeID / ipv4ByteDivisor)
lowByte := byte(nodeID % ipv4ByteDivisor)
ipv4 := netip.AddrFrom4([4]byte{100, 64, highByte, lowByte})
// For IPv6, pack the high and low bytes into the last 16-bit group of the address (fd7a:115c:a1e0::hhll)
ipv6 := netip.AddrFrom16([16]byte{0xfd, 0x7a, 0x11, 0x5c, 0xa1, 0xe0, 0, 0, 0, 0, 0, 0, 0, 0, highByte, lowByte})
return &ipv4, &ipv6, nil
}
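// A worked example of the mapping above: nodeID 300 splits into high byte
// 300/256 = 1 and low byte 300%256 = 44 (0x2c), giving 100.64.1.44 and
// fd7a:115c:a1e0::12c (the final 16-bit group is 0x012c).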
================================================
FILE: hscontrol/db/node_test.go
================================================
package db
import (
"crypto/rand"
"fmt"
"math/big"
"net/netip"
"regexp"
"runtime"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
func TestGetNode(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user := db.CreateUserForTest("test")
_, err = db.getNode(types.UserID(user.ID), "testnode")
require.Error(t, err)
node := db.CreateNodeForTest(user, "testnode")
_, err = db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
assert.Equal(t, "testnode", node.Hostname)
}
func TestGetNodeByID(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user := db.CreateUserForTest("test")
_, err = db.GetNodeByID(0)
require.Error(t, err)
node := db.CreateNodeForTest(user, "testnode")
retrievedNode, err := db.GetNodeByID(node.ID)
require.NoError(t, err)
assert.Equal(t, "testnode", retrievedNode.Hostname)
}
func TestHardDeleteNode(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user := db.CreateUserForTest("test")
node := db.CreateNodeForTest(user, "testnode3")
err = db.DeleteNode(node)
require.NoError(t, err)
_, err = db.getNode(types.UserID(user.ID), "testnode3")
require.Error(t, err)
}
func TestListPeersManyNodes(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user := db.CreateUserForTest("test")
_, err = db.GetNodeByID(0)
require.Error(t, err)
nodes := db.CreateNodesForTest(user, 11, "testnode")
firstNode := nodes[0]
peersOfFirstNode, err := db.ListPeers(firstNode.ID)
require.NoError(t, err)
assert.Len(t, peersOfFirstNode, 10)
assert.Equal(t, "testnode-1", peersOfFirstNode[0].Hostname)
assert.Equal(t, "testnode-6", peersOfFirstNode[5].Hostname)
assert.Equal(t, "testnode-10", peersOfFirstNode[9].Hostname)
}
func TestExpireNode(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
pakID := pak.ID
_, err = db.getNode(types.UserID(user.ID), "testnode")
require.Error(t, err)
nodeKey := key.NewNode()
machineKey := key.NewMachine()
node := &types.Node{
ID: 0,
MachineKey: machineKey.Public(),
NodeKey: nodeKey.Public(),
Hostname: "testnode",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakID,
Expiry: &time.Time{},
}
db.DB.Save(node)
nodeFromDB, err := db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
require.NotNil(t, nodeFromDB)
assert.False(t, nodeFromDB.IsExpired())
now := time.Now()
err = db.NodeSetExpiry(nodeFromDB.ID, &now)
require.NoError(t, err)
nodeFromDB, err = db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
assert.True(t, nodeFromDB.IsExpired())
}
func TestDisableNodeExpiry(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
pakID := pak.ID
node := &types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakID,
Expiry: &time.Time{},
}
db.DB.Save(node)
// Set an expiry first.
past := time.Now().Add(-time.Hour)
err = db.NodeSetExpiry(node.ID, &past)
require.NoError(t, err)
nodeFromDB, err := db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
assert.True(t, nodeFromDB.IsExpired(), "node should be expired")
// Disable expiry by setting nil.
err = db.NodeSetExpiry(node.ID, nil)
require.NoError(t, err)
nodeFromDB, err = db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
assert.False(t, nodeFromDB.IsExpired(), "node should not be expired after disabling expiry")
assert.Nil(t, nodeFromDB.Expiry, "expiry should be nil after disabling")
}
func TestSetTags(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
pakID := pak.ID
_, err = db.getNode(types.UserID(user.ID), "testnode")
require.Error(t, err)
nodeKey := key.NewNode()
machineKey := key.NewMachine()
node := &types.Node{
ID: 0,
MachineKey: machineKey.Public(),
NodeKey: nodeKey.Public(),
Hostname: "testnode",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakID,
}
trx := db.DB.Save(node)
require.NoError(t, trx.Error)
// assign simple tags
sTags := []string{"tag:test", "tag:foo"}
err = db.SetTags(node.ID, sTags)
require.NoError(t, err)
node, err = db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
assert.Equal(t, sTags, node.Tags)
// assign duplicate tags, expect no errors but no doubles in DB
eTags := []string{"tag:bar", "tag:test", "tag:unknown", "tag:test"}
err = db.SetTags(node.ID, eTags)
require.NoError(t, err)
node, err = db.getNode(types.UserID(user.ID), "testnode")
require.NoError(t, err)
assert.Equal(t, []string{"tag:bar", "tag:test", "tag:unknown"}, node.Tags)
}
func TestHeadscale_generateGivenName(t *testing.T) {
type args struct {
suppliedName string
randomSuffix bool
}
tests := []struct {
name string
args args
want *regexp.Regexp
wantErr bool
}{
{
name: "simple node name generation",
args: args{
suppliedName: "testnode",
randomSuffix: false,
},
want: regexp.MustCompile("^testnode$"),
wantErr: false,
},
{
name: "UPPERCASE node name generation",
args: args{
suppliedName: "TestNode",
randomSuffix: false,
},
want: regexp.MustCompile("^testnode$"),
wantErr: false,
},
{
name: "node name with 53 chars",
args: args{
suppliedName: "testmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaachine",
randomSuffix: false,
},
want: regexp.MustCompile("^testmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaachine$"),
wantErr: false,
},
{
name: "node name with 63 chars",
args: args{
suppliedName: "nodeeeeeee12345678901234567890123456789012345678901234567890123",
randomSuffix: false,
},
want: regexp.MustCompile("^nodeeeeeee12345678901234567890123456789012345678901234567890123$"),
wantErr: false,
},
{
name: "node name with 64 chars",
args: args{
suppliedName: "nodeeeeeee123456789012345678901234567890123456789012345678901234",
randomSuffix: false,
},
want: nil,
wantErr: true,
},
{
name: "node name with 73 chars",
args: args{
suppliedName: "nodeeeeeee123456789012345678901234567890123456789012345678901234567890123",
randomSuffix: false,
},
want: nil,
wantErr: true,
},
{
name: "node name with random suffix",
args: args{
suppliedName: "test",
randomSuffix: true,
},
want: regexp.MustCompile(fmt.Sprintf("^test-[a-z0-9]{%d}$", NodeGivenNameHashLength)),
wantErr: false,
},
{
name: "node name with 63 chars with random suffix",
args: args{
suppliedName: "nodeeee12345678901234567890123456789012345678901234567890123",
randomSuffix: true,
},
want: regexp.MustCompile(fmt.Sprintf("^nodeeee1234567890123456789012345678901234567890123456-[a-z0-9]{%d}$", NodeGivenNameHashLength)),
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := generateGivenName(tt.args.suppliedName, tt.args.randomSuffix)
if (err != nil) != tt.wantErr {
t.Errorf(
"Headscale.GenerateGivenName() error = %v, wantErr %v",
err,
tt.wantErr,
)
return
}
if tt.want != nil && !tt.want.MatchString(got) {
t.Errorf(
"Headscale.GenerateGivenName() = %v, does not match %v",
got,
tt.want,
)
}
if len(got) > util.LabelHostnameLength {
t.Errorf(
"Headscale.GenerateGivenName() = %v is larger than allowed DNS segment %d",
got,
util.LabelHostnameLength,
)
}
})
}
}
func TestAutoApproveRoutes(t *testing.T) {
tests := []struct {
name string
acl string
routes []netip.Prefix
want []netip.Prefix
want2 []netip.Prefix
expectChange bool // whether to expect route changes
}{
{
name: "no-auto-approvers-empty-policy",
acl: `
{
"groups": {
"group:admins": ["test@"]
},
"acls": [
{
"action": "accept",
"src": ["group:admins"],
"dst": ["group:admins:*"]
}
]
}`,
routes: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
want: []netip.Prefix{}, // Should be empty - no auto-approvers
want2: []netip.Prefix{}, // Should be empty - no auto-approvers
expectChange: false, // No changes expected
},
{
name: "no-auto-approvers-explicit-empty",
acl: `
{
"groups": {
"group:admins": ["test@"]
},
"acls": [
{
"action": "accept",
"src": ["group:admins"],
"dst": ["group:admins:*"]
}
],
"autoApprovers": {
"routes": {},
"exitNode": []
}
}`,
routes: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
want: []netip.Prefix{}, // Should be empty - explicitly empty auto-approvers
want2: []netip.Prefix{}, // Should be empty - explicitly empty auto-approvers
expectChange: false, // No changes expected
},
{
name: "2068-approve-issue-sub-kube",
acl: `
{
"groups": {
"group:k8s": ["test@"]
},
// "acls": [
// {"action": "accept", "users": ["*"], "ports": ["*:*"]},
// ],
"autoApprovers": {
"routes": {
"10.42.0.0/16": ["test@"],
}
}
}`,
routes: []netip.Prefix{netip.MustParsePrefix("10.42.7.0/24")},
want: []netip.Prefix{netip.MustParsePrefix("10.42.7.0/24")},
expectChange: true, // Routes should be approved
},
{
name: "2068-approve-issue-sub-exit-tag",
acl: `
{
"tagOwners": {
"tag:exit": ["test@"],
},
"groups": {
"group:test": ["test@"]
},
// "acls": [
// {"action": "accept", "users": ["*"], "ports": ["*:*"]},
// ],
"autoApprovers": {
"exitNode": ["tag:exit"],
"routes": {
"10.10.0.0/16": ["group:test"],
"10.11.0.0/16": ["test@"],
"8.11.0.0/24": ["test2@"], // No nodes
}
}
}`,
routes: []netip.Prefix{
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
netip.MustParsePrefix("10.10.0.0/16"),
netip.MustParsePrefix("10.11.0.0/24"),
// Not approved
netip.MustParsePrefix("8.11.0.0/24"),
},
want: []netip.Prefix{
netip.MustParsePrefix("10.10.0.0/16"),
netip.MustParsePrefix("10.11.0.0/24"),
},
want2: []netip.Prefix{
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
},
expectChange: true, // Routes should be approved
},
}
for _, tt := range tests {
pmfs := policy.PolicyManagerFuncsForTest([]byte(tt.acl))
for i, pmf := range pmfs {
t.Run(fmt.Sprintf("%s-policy-index%d", tt.name, i), func(t *testing.T) {
adb, err := newSQLiteTestDB()
require.NoError(t, err)
user, err := adb.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
_, err = adb.CreateUser(types.User{Name: "test2"})
require.NoError(t, err)
taggedUser, err := adb.CreateUser(types.User{Name: "tagged"})
require.NoError(t, err)
node := types.Node{
ID: 1,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tt.routes,
},
IPv4: new(netip.MustParseAddr("100.64.0.1")),
}
err = adb.DB.Save(&node).Error
require.NoError(t, err)
nodeTagged := types.Node{
ID: 2,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "taggednode",
UserID: &taggedUser.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tt.routes,
},
Tags: []string{"tag:exit"},
IPv4: new(netip.MustParseAddr("100.64.0.2")),
}
err = adb.DB.Save(&nodeTagged).Error
require.NoError(t, err)
users, err := adb.ListUsers()
require.NoError(t, err)
nodes, err := adb.ListNodes()
require.NoError(t, err)
pm, err := pmf(users, nodes.ViewSlice())
require.NoError(t, err)
require.NotNil(t, pm)
newRoutes1, changed1 := policy.ApproveRoutesWithPolicy(pm, node.View(), node.ApprovedRoutes, tt.routes)
assert.Equal(t, tt.expectChange, changed1)
if changed1 {
err = SetApprovedRoutes(adb.DB, node.ID, newRoutes1)
require.NoError(t, err)
}
newRoutes2, changed2 := policy.ApproveRoutesWithPolicy(pm, nodeTagged.View(), nodeTagged.ApprovedRoutes, tt.routes)
if changed2 {
err = SetApprovedRoutes(adb.DB, nodeTagged.ID, newRoutes2)
require.NoError(t, err)
}
node1ByID, err := adb.GetNodeByID(1)
require.NoError(t, err)
// For empty auto-approvers tests, handle nil vs empty slice comparison
expectedRoutes1 := tt.want
if len(expectedRoutes1) == 0 {
expectedRoutes1 = nil
}
if diff := cmp.Diff(expectedRoutes1, node1ByID.AllApprovedRoutes(), util.Comparers...); diff != "" {
t.Errorf("unexpected enabled routes (-want +got):\n%s", diff)
}
node2ByID, err := adb.GetNodeByID(2)
require.NoError(t, err)
expectedRoutes2 := tt.want2
if len(expectedRoutes2) == 0 {
expectedRoutes2 = nil
}
if diff := cmp.Diff(expectedRoutes2, node2ByID.AllApprovedRoutes(), util.Comparers...); diff != "" {
t.Errorf("unexpected enabled routes (-want +got):\n%s", diff)
}
})
}
}
}
func TestEphemeralGarbageCollectorOrder(t *testing.T) {
want := []types.NodeID{1, 3}
got := []types.NodeID{}
var mu sync.Mutex
deletionCount := make(chan struct{}, 10)
e := NewEphemeralGarbageCollector(func(ni types.NodeID) {
mu.Lock()
defer mu.Unlock()
got = append(got, ni)
deletionCount <- struct{}{}
})
go e.Start()
// Use shorter timeouts for faster tests
go e.Schedule(1, 50*time.Millisecond)
go e.Schedule(2, 100*time.Millisecond)
go e.Schedule(3, 150*time.Millisecond)
go e.Schedule(4, 200*time.Millisecond)
// Wait for first deletion (node 1 at 50ms)
select {
case <-deletionCount:
case <-time.After(time.Second):
t.Fatal("timeout waiting for first deletion")
}
// Cancel nodes 2 and 4
go e.Cancel(2)
go e.Cancel(4)
// Wait for node 3 to be deleted (at 150ms)
select {
case <-deletionCount:
case <-time.After(time.Second):
t.Fatal("timeout waiting for second deletion")
}
// Give a bit more time for any unexpected deletions
select {
case <-deletionCount:
// Unexpected - more deletions than expected
case <-time.After(300 * time.Millisecond):
// Expected - no more deletions
}
e.Close()
mu.Lock()
defer mu.Unlock()
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("wrong nodes deleted, unexpected result (-want +got):\n%s", diff)
}
}
func TestEphemeralGarbageCollectorLoads(t *testing.T) {
var (
got []types.NodeID
mu sync.Mutex
)
want := 1000
var deletedCount int64
e := NewEphemeralGarbageCollector(func(ni types.NodeID) {
mu.Lock()
defer mu.Unlock()
// Yield to other goroutines to introduce variability
runtime.Gosched()
got = append(got, ni)
atomic.AddInt64(&deletedCount, 1)
})
go e.Start()
// Use shorter expiry for faster tests
for i := range want {
go e.Schedule(types.NodeID(i), 100*time.Millisecond) //nolint:gosec // test code, no overflow risk
}
// Wait for all deletions to complete
assert.EventuallyWithT(t, func(c *assert.CollectT) {
count := atomic.LoadInt64(&deletedCount)
assert.Equal(c, int64(want), count, "all nodes should be deleted")
}, 10*time.Second, 50*time.Millisecond, "waiting for all deletions")
e.Close()
mu.Lock()
defer mu.Unlock()
if len(got) != want {
t.Errorf("expected %d, got %d", want, len(got))
}
}
//nolint:unused
func generateRandomNumber(t *testing.T, maxVal int64) int64 {
t.Helper()
maxB := big.NewInt(maxVal)
n, err := rand.Int(rand.Reader, maxB)
if err != nil {
t.Fatalf("getting random number: %s", err)
}
return n.Int64() + 1
}
func TestListEphemeralNodes(t *testing.T) {
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
}
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
pakEph, err := db.CreatePreAuthKey(user.TypedID(), false, true, nil, nil)
require.NoError(t, err)
pakID := pak.ID
pakEphID := pakEph.ID
node := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakID,
}
nodeEph := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "ephemeral",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakEphID,
}
err = db.DB.Save(&node).Error
require.NoError(t, err)
err = db.DB.Save(&nodeEph).Error
require.NoError(t, err)
nodes, err := db.ListNodes()
require.NoError(t, err)
ephemeralNodes, err := db.ListEphemeralNodes()
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Len(t, ephemeralNodes, 1)
assert.Equal(t, nodeEph.ID, ephemeralNodes[0].ID)
assert.Equal(t, nodeEph.AuthKeyID, ephemeralNodes[0].AuthKeyID)
assert.Equal(t, nodeEph.UserID, ephemeralNodes[0].UserID)
assert.Equal(t, nodeEph.Hostname, ephemeralNodes[0].Hostname)
}
func TestNodeNaming(t *testing.T) {
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
}
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
user2, err := db.CreateUser(types.User{Name: "user2"})
require.NoError(t, err)
node := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
node2 := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test",
UserID: &user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
// Using non-ASCII characters in the hostname can
// break your network, so they should be replaced when registering
// a node.
// https://github.com/juanfont/headscale/issues/2343
nodeInvalidHostname := types.Node{
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "我的电脑", //nolint:gosmopolitan // intentional i18n test data
UserID: &user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
}
nodeShortHostname := types.Node{
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "a",
UserID: &user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
}
err = db.DB.Save(&node).Error
require.NoError(t, err)
err = db.DB.Save(&node2).Error
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, node2, nil, nil)
if err != nil {
return err
}
_, _ = RegisterNodeForTest(tx, nodeInvalidHostname, nap("100.64.0.66"), nil)
_, err = RegisterNodeForTest(tx, nodeShortHostname, nap("100.64.0.67"), nil)
return err
})
require.NoError(t, err)
nodes, err := db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 4)
t.Logf("node1 %s %s", nodes[0].Hostname, nodes[0].GivenName)
t.Logf("node2 %s %s", nodes[1].Hostname, nodes[1].GivenName)
t.Logf("node3 %s %s", nodes[2].Hostname, nodes[2].GivenName)
t.Logf("node4 %s %s", nodes[3].Hostname, nodes[3].GivenName)
assert.Equal(t, nodes[0].Hostname, nodes[0].GivenName)
assert.NotEqual(t, nodes[1].Hostname, nodes[1].GivenName)
assert.Equal(t, nodes[0].Hostname, nodes[1].Hostname)
assert.NotEqual(t, nodes[0].Hostname, nodes[1].GivenName)
assert.Contains(t, nodes[1].GivenName, nodes[0].Hostname)
assert.Equal(t, nodes[0].GivenName, nodes[1].Hostname)
assert.Len(t, nodes[0].Hostname, 4)
assert.Len(t, nodes[1].Hostname, 4)
assert.Len(t, nodes[0].GivenName, 4)
assert.Len(t, nodes[1].GivenName, 13)
assert.Contains(t, nodes[2].Hostname, "invalid-") // invalid chars
assert.Contains(t, nodes[2].GivenName, "invalid-")
assert.Contains(t, nodes[3].Hostname, "invalid-") // too short
assert.Contains(t, nodes[3].GivenName, "invalid-")
// Nodes can be renamed to a unique name
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "newname")
})
require.NoError(t, err)
nodes, err = db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 4)
assert.Equal(t, "test", nodes[0].Hostname)
assert.Equal(t, "newname", nodes[0].GivenName)
// Nodes can reuse name that is no longer used
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[1].ID, "test")
})
require.NoError(t, err)
nodes, err = db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 4)
assert.Equal(t, "test", nodes[0].Hostname)
assert.Equal(t, "newname", nodes[0].GivenName)
assert.Equal(t, "test", nodes[1].GivenName)
// Nodes cannot be renamed to used names
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "test")
})
require.ErrorContains(t, err, "name is not unique")
// Rename invalid chars
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[2].ID, "我的电脑") //nolint:gosmopolitan // intentional i18n test data
})
require.ErrorContains(t, err, "invalid characters")
// Rename too short
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[3].ID, "a")
})
require.ErrorContains(t, err, "at least 2 characters")
// Rename with emoji
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "hostname-with-💩")
})
require.ErrorContains(t, err, "invalid characters")
// Rename with only emoji
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "🚀")
})
assert.ErrorContains(t, err, "invalid characters")
}
func TestRenameNodeComprehensive(t *testing.T) {
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
}
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
node := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
err = db.DB.Save(&node).Error
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node, nil, nil)
return err
})
require.NoError(t, err)
nodes, err := db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 1)
tests := []struct {
name string
newName string
wantErr string
}{
{
name: "uppercase_rejected",
newName: "User2-Host",
wantErr: "must be lowercase",
},
{
name: "underscore_rejected",
newName: "test_node",
wantErr: "invalid characters",
},
{
name: "at_sign_uppercase_rejected",
newName: "Test@Host",
wantErr: "must be lowercase",
},
{
name: "at_sign_rejected",
newName: "test@host",
wantErr: "invalid characters",
},
{
name: "chinese_chars_with_dash_rejected",
newName: "server-北京-01", //nolint:gosmopolitan // intentional i18n test data
wantErr: "invalid characters",
},
{
name: "chinese_only_rejected",
newName: "我的电脑", //nolint:gosmopolitan // intentional i18n test data
wantErr: "invalid characters",
},
{
name: "emoji_with_text_rejected",
newName: "laptop-🚀",
wantErr: "invalid characters",
},
{
name: "mixed_chinese_emoji_rejected",
newName: "测试💻机器", //nolint:gosmopolitan // intentional i18n test data
wantErr: "invalid characters",
},
{
name: "only_emojis_rejected",
newName: "🎉🎊",
wantErr: "invalid characters",
},
{
name: "only_at_signs_rejected",
newName: "@@@",
wantErr: "invalid characters",
},
{
name: "starts_with_dash_rejected",
newName: "-test",
wantErr: "cannot start or end with a hyphen",
},
{
name: "ends_with_dash_rejected",
newName: "test-",
wantErr: "cannot start or end with a hyphen",
},
{
name: "too_long_hostname_rejected",
newName: "this-is-a-very-long-hostname-that-exceeds-sixty-three-characters-limit",
wantErr: "must not exceed 63 characters",
},
{
name: "too_short_hostname_rejected",
newName: "a",
wantErr: "at least 2 characters",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, tt.newName)
})
assert.ErrorContains(t, err, tt.wantErr)
})
}
}
func TestListPeers(t *testing.T) {
// Setup test database
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
}
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
user2, err := db.CreateUser(types.User{Name: "user2"})
require.NoError(t, err)
node1 := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test1",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
node2 := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test2",
UserID: &user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
err = db.DB.Save(&node1).Error
require.NoError(t, err)
err = db.DB.Save(&node2).Error
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node1, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, node2, nil, nil)
return err
})
require.NoError(t, err)
nodes, err := db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 2)
// No parameter means no filter, should return all peers
nodes, err = db.ListPeers(1)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, "test2", nodes[0].Hostname)
// Empty node list should return all peers
nodes, err = db.ListPeers(1, types.NodeIDs{}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, "test2", nodes[0].Hostname)
// No match in IDs should return empty list and no error
nodes, err = db.ListPeers(1, types.NodeIDs{3, 4, 5}...)
require.NoError(t, err)
assert.Empty(t, nodes)
// Partial match in IDs
nodes, err = db.ListPeers(1, types.NodeIDs{2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, "test2", nodes[0].Hostname)
// Several matched IDs, but node ID is still filtered out
nodes, err = db.ListPeers(1, types.NodeIDs{1, 2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, "test2", nodes[0].Hostname)
}
func TestListNodes(t *testing.T) {
// Setup test database
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
}
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
user2, err := db.CreateUser(types.User{Name: "user2"})
require.NoError(t, err)
node1 := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test1",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
node2 := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test2",
UserID: &user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
err = db.DB.Save(&node1).Error
require.NoError(t, err)
err = db.DB.Save(&node2).Error
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node1, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, node2, nil, nil)
return err
})
require.NoError(t, err)
// Verify both nodes were registered
nodes, err := db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 2)
// No parameter means no filter, should return all nodes
nodes, err = db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, "test1", nodes[0].Hostname)
assert.Equal(t, "test2", nodes[1].Hostname)
// Empty node list should return all nodes
nodes, err = db.ListNodes(types.NodeIDs{}...)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, "test1", nodes[0].Hostname)
assert.Equal(t, "test2", nodes[1].Hostname)
// No match in IDs should return empty list and no error
nodes, err = db.ListNodes(types.NodeIDs{3, 4, 5}...)
require.NoError(t, err)
assert.Empty(t, nodes)
// Partial match in IDs
nodes, err = db.ListNodes(types.NodeIDs{2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, "test2", nodes[0].Hostname)
// Several matched IDs
nodes, err = db.ListNodes(types.NodeIDs{1, 2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, "test1", nodes[0].Hostname)
assert.Equal(t, "test2", nodes[1].Hostname)
}
================================================
FILE: hscontrol/db/policy.go
================================================
package db
import (
"errors"
"os"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
// SetPolicy sets the policy in the database.
func (hsdb *HSDatabase) SetPolicy(policy string) (*types.Policy, error) {
// Create a new policy.
p := types.Policy{
Data: policy,
}
err := hsdb.DB.Clauses(clause.Returning{}).Create(&p).Error
if err != nil {
return nil, err
}
return &p, nil
}
// GetPolicy returns the latest policy in the database.
func (hsdb *HSDatabase) GetPolicy() (*types.Policy, error) {
return GetPolicy(hsdb.DB)
}
// GetPolicy returns the latest policy from the database.
// This standalone function can be used in contexts where HSDatabase is not available,
// such as during migrations.
func GetPolicy(tx *gorm.DB) (*types.Policy, error) {
var p types.Policy
// Query:
// SELECT * FROM policies ORDER BY id DESC LIMIT 1;
err := tx.
Order("id DESC").
Limit(1).
First(&p).Error
if err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, types.ErrPolicyNotFound
}
return nil, err
}
return &p, nil
}
// PolicyBytes loads policy configuration from file or database based on the configured mode.
// Returns nil if no policy is configured, which is valid.
// This standalone function can be used in contexts where HSDatabase is not available,
// such as during migrations.
func PolicyBytes(tx *gorm.DB, cfg *types.Config) ([]byte, error) {
switch cfg.Policy.Mode {
case types.PolicyModeFile:
path := cfg.Policy.Path
// It is fine to start headscale without a policy file.
if len(path) == 0 {
return nil, nil
}
absPath := util.AbsolutePathFromConfigPath(path)
return os.ReadFile(absPath)
case types.PolicyModeDB:
p, err := GetPolicy(tx)
if err != nil {
if errors.Is(err, types.ErrPolicyNotFound) {
return nil, nil
}
return nil, err
}
if p.Data == "" {
return nil, nil
}
return []byte(p.Data), nil
}
return nil, nil
}
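// Illustrative usage (a sketch, not part of the original file; hsdb and cfg
// are assumed to be an open *HSDatabase and the loaded *types.Config). Note
// that (nil, nil) is the valid "no policy configured" case, not an error:
//
//	data, err := PolicyBytes(hsdb.DB, cfg)
//	if err != nil {
//		return err
//	}
//	if data == nil {
//		// No policy configured; continue without one.
//	}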
================================================
FILE: hscontrol/db/preauth_keys.go
================================================
package db
import (
"errors"
"fmt"
"slices"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"golang.org/x/crypto/bcrypt"
"gorm.io/gorm"
"tailscale.com/util/set"
)
var (
ErrPreAuthKeyNotFound = errors.New("auth-key not found")
ErrPreAuthKeyExpired = errors.New("auth-key expired")
ErrSingleUseAuthKeyHasBeenUsed = errors.New("auth-key has already been used")
ErrUserMismatch = errors.New("user mismatch")
ErrPreAuthKeyACLTagInvalid = errors.New("auth-key tag is invalid")
)
func (hsdb *HSDatabase) CreatePreAuthKey(
uid *types.UserID,
reusable bool,
ephemeral bool,
expiration *time.Time,
aclTags []string,
) (*types.PreAuthKeyNew, error) {
return Write(hsdb.DB, func(tx *gorm.DB) (*types.PreAuthKeyNew, error) {
return CreatePreAuthKey(tx, uid, reusable, ephemeral, expiration, aclTags)
})
}
const (
authKeyPrefix = "hskey-auth-"
authKeyPrefixLength = 12
authKeyLength = 64
)
// CreatePreAuthKey creates a new PreAuthKey in a user, and returns it.
// The uid parameter can be nil for system-created tagged keys.
// For tagged keys, uid tracks "created by" (who created the key).
// For user-owned keys, uid tracks the node owner.
func CreatePreAuthKey(
tx *gorm.DB,
uid *types.UserID,
reusable bool,
ephemeral bool,
expiration *time.Time,
aclTags []string,
) (*types.PreAuthKeyNew, error) {
// Validate: must be tagged OR user-owned, not neither
if uid == nil && len(aclTags) == 0 {
return nil, ErrPreAuthKeyNotTaggedOrOwned
}
var (
user *types.User
userID *uint
)
if uid != nil {
var err error
user, err = GetUserByID(tx, *uid)
if err != nil {
return nil, err
}
userID = &user.ID
}
// Remove duplicates and sort for consistency
aclTags = set.SetOf(aclTags).Slice()
slices.Sort(aclTags)
// TODO(kradalby): factor out and create a reusable tag validation,
// check if there is one in Tailscale's lib.
for _, tag := range aclTags {
if !strings.HasPrefix(tag, "tag:") {
return nil, fmt.Errorf(
"%w: '%s' did not begin with 'tag:'",
ErrPreAuthKeyACLTagInvalid,
tag,
)
}
}
now := time.Now().UTC()
prefix, err := util.GenerateRandomStringURLSafe(authKeyPrefixLength)
if err != nil {
return nil, err
}
// Validate generated prefix (should always be valid, but be defensive)
if len(prefix) != authKeyPrefixLength {
return nil, fmt.Errorf("%w: generated prefix has invalid length: expected %d, got %d", ErrPreAuthKeyFailedToParse, authKeyPrefixLength, len(prefix))
}
if !isValidBase64URLSafe(prefix) {
return nil, fmt.Errorf("%w: generated prefix contains invalid characters", ErrPreAuthKeyFailedToParse)
}
toBeHashed, err := util.GenerateRandomStringURLSafe(authKeyLength)
if err != nil {
return nil, err
}
// Validate generated hash (should always be valid, but be defensive)
if len(toBeHashed) != authKeyLength {
return nil, fmt.Errorf("%w: generated hash has invalid length: expected %d, got %d", ErrPreAuthKeyFailedToParse, authKeyLength, len(toBeHashed))
}
if !isValidBase64URLSafe(toBeHashed) {
return nil, fmt.Errorf("%w: generated hash contains invalid characters", ErrPreAuthKeyFailedToParse)
}
keyStr := authKeyPrefix + prefix + "-" + toBeHashed
hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost)
if err != nil {
return nil, err
}
key := types.PreAuthKey{
UserID: userID, // nil for system-created keys, or "created by" for tagged keys
User: user, // nil for system-created keys
Reusable: reusable,
Ephemeral: ephemeral,
CreatedAt: &now,
Expiration: expiration,
Tags: aclTags, // empty for user-owned keys
Prefix: prefix, // Store prefix
Hash: hash, // Store hash
}
if err := tx.Save(&key).Error; err != nil { //nolint:noinlineerr
return nil, fmt.Errorf("creating key in database: %w", err)
}
return &types.PreAuthKeyNew{
ID: key.ID,
Key: keyStr,
Reusable: key.Reusable,
Ephemeral: key.Ephemeral,
Tags: key.Tags,
Expiration: key.Expiration,
CreatedAt: key.CreatedAt,
User: key.User,
}, nil
}
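// Illustrative usage (a sketch, not part of the original file; tx, uid and
// the tag value are hypothetical). A key must be user-owned, tagged, or both:
//
//	// User-owned key: uid set, no tags.
//	pak, err := CreatePreAuthKey(tx, &uid, true, false, nil, nil)
//
//	// Tagged key: tags set; uid records "created by" and may be nil for
//	// system-created keys.
//	pak, err = CreatePreAuthKey(tx, nil, false, false, nil, []string{"tag:server"})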
func (hsdb *HSDatabase) ListPreAuthKeys() ([]types.PreAuthKey, error) {
return Read(hsdb.DB, ListPreAuthKeys)
}
// ListPreAuthKeys returns all PreAuthKeys in the database.
func ListPreAuthKeys(tx *gorm.DB) ([]types.PreAuthKey, error) {
var keys []types.PreAuthKey
err := tx.Preload("User").Find(&keys).Error
if err != nil {
return nil, err
}
return keys, nil
}
var (
ErrPreAuthKeyFailedToParse = errors.New("failed to parse auth-key")
ErrPreAuthKeyNotTaggedOrOwned = errors.New("auth-key must be either tagged or owned by user")
)
func findAuthKey(tx *gorm.DB, keyStr string) (*types.PreAuthKey, error) {
var pak types.PreAuthKey
// Validate input is not empty
if keyStr == "" {
return nil, ErrPreAuthKeyFailedToParse
}
_, prefixAndHash, found := strings.Cut(keyStr, authKeyPrefix)
if !found {
// Legacy format (plaintext) - backwards compatibility
err := tx.Preload("User").First(&pak, "key = ?", keyStr).Error
if err != nil {
return nil, ErrPreAuthKeyNotFound
}
return &pak, nil
}
// New format: hskey-auth-{12-char-prefix}-{64-char-hash}
// Expected minimum length: 12 (prefix) + 1 (separator) + 64 (hash) = 77
const expectedMinLength = authKeyPrefixLength + 1 + authKeyLength
if len(prefixAndHash) < expectedMinLength {
return nil, fmt.Errorf(
"%w: key too short, expected at least %d chars after prefix, got %d",
ErrPreAuthKeyFailedToParse,
expectedMinLength,
len(prefixAndHash),
)
}
// Use fixed-length parsing instead of separator-based to handle dashes in base64 URL-safe
prefix := prefixAndHash[:authKeyPrefixLength]
// Validate separator at expected position
if prefixAndHash[authKeyPrefixLength] != '-' {
return nil, fmt.Errorf(
"%w: expected separator '-' at position %d, got '%c'",
ErrPreAuthKeyFailedToParse,
authKeyPrefixLength,
prefixAndHash[authKeyPrefixLength],
)
}
hash := prefixAndHash[authKeyPrefixLength+1:]
// Validate hash length
if len(hash) != authKeyLength {
return nil, fmt.Errorf(
"%w: hash length mismatch, expected %d chars, got %d",
ErrPreAuthKeyFailedToParse,
authKeyLength,
len(hash),
)
}
// Validate prefix contains only base64 URL-safe characters
if !isValidBase64URLSafe(prefix) {
return nil, fmt.Errorf(
"%w: prefix contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)",
ErrPreAuthKeyFailedToParse,
)
}
// Validate hash contains only base64 URL-safe characters
if !isValidBase64URLSafe(hash) {
return nil, fmt.Errorf(
"%w: hash contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)",
ErrPreAuthKeyFailedToParse,
)
}
// Look up key by prefix
err := tx.Preload("User").First(&pak, "prefix = ?", prefix).Error
if err != nil {
return nil, ErrPreAuthKeyNotFound
}
// Verify hash matches
err = bcrypt.CompareHashAndPassword(pak.Hash, []byte(hash))
if err != nil {
return nil, fmt.Errorf("invalid auth key: %w", err)
}
return &pak, nil
}
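// Anatomy of the new key format handled above (illustrative; the prefix shown
// is made up):
//
//	hskey-auth-AbCd12EfGh34-<64 base64 URL-safe characters>
//	           ^^^^^^^^^^^^  ^
//	           12-char       secret material, verified against
//	           lookup key    the stored bcrypt hash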
// isValidBase64URLSafe checks if a string contains only base64 URL-safe characters.
func isValidBase64URLSafe(s string) bool {
for _, c := range s {
if (c < 'A' || c > 'Z') && (c < 'a' || c > 'z') && (c < '0' || c > '9') && c != '-' && c != '_' {
return false
}
}
return true
}
func (hsdb *HSDatabase) GetPreAuthKey(key string) (*types.PreAuthKey, error) {
return GetPreAuthKey(hsdb.DB, key)
}
// GetPreAuthKey returns a PreAuthKey for a given key. The caller is responsible
// for checking if the key is usable (expired or used).
func GetPreAuthKey(tx *gorm.DB, key string) (*types.PreAuthKey, error) {
return findAuthKey(tx, key)
}
// DestroyPreAuthKey destroys a preauthkey. Returns error if the PreAuthKey
// does not exist. This also clears the auth_key_id on any nodes that reference
// this key.
func DestroyPreAuthKey(tx *gorm.DB, id uint64) error {
return tx.Transaction(func(db *gorm.DB) error {
// First, clear the foreign key reference on any nodes using this key
err := db.Model(&types.Node{}).
Where("auth_key_id = ?", id).
Update("auth_key_id", nil).Error
if err != nil {
return fmt.Errorf("clearing auth_key_id on nodes: %w", err)
}
// Then delete the pre-auth key, using the transaction handle (db, not the
// outer tx) so the delete takes part in the same transaction as the update
// above. Report an error if no key with this ID existed, as documented.
result := db.Unscoped().Delete(&types.PreAuthKey{}, id)
if result.Error != nil {
return result.Error
}
if result.RowsAffected == 0 {
return ErrPreAuthKeyNotFound
}
return nil
})
}
func (hsdb *HSDatabase) ExpirePreAuthKey(id uint64) error {
return hsdb.Write(func(tx *gorm.DB) error {
return ExpirePreAuthKey(tx, id)
})
}
func (hsdb *HSDatabase) DeletePreAuthKey(id uint64) error {
return hsdb.Write(func(tx *gorm.DB) error {
return DestroyPreAuthKey(tx, id)
})
}
// UsePreAuthKey marks a PreAuthKey as used.
func UsePreAuthKey(tx *gorm.DB, k *types.PreAuthKey) error {
err := tx.Model(k).Update("used", true).Error
if err != nil {
return fmt.Errorf("updating key used status in database: %w", err)
}
k.Used = true
return nil
}
// ExpirePreAuthKey marks a PreAuthKey as expired.
func ExpirePreAuthKey(tx *gorm.DB, id uint64) error {
now := time.Now()
return tx.Model(&types.PreAuthKey{}).Where("id = ?", id).Update("expiration", now).Error
}
================================================
FILE: hscontrol/db/preauth_keys_test.go
================================================
package db
import (
"fmt"
"slices"
"strings"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCreatePreAuthKey(t *testing.T) {
tests := []struct {
name string
test func(*testing.T, *HSDatabase)
}{
{
name: "error_invalid_user_id",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
uid := types.UserID(12345)
_, err := db.CreatePreAuthKey(&uid, true, false, nil, nil)
assert.Error(t, err)
},
},
{
name: "success_create_and_list",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
key, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
assert.NotEmpty(t, key.Key)
// List keys for the user
keys, err := db.ListPreAuthKeys()
require.NoError(t, err)
assert.Len(t, keys, 1)
// Verify User association is populated
assert.Equal(t, user.ID, keys[0].User.ID)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
tt.test(t, db)
})
}
}
func TestPreAuthKeyACLTags(t *testing.T) {
tests := []struct {
name string
test func(*testing.T, *HSDatabase)
}{
{
name: "reject_malformed_tags",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user, err := db.CreateUser(types.User{Name: "test-tags-1"})
require.NoError(t, err)
_, err = db.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"badtag"})
assert.Error(t, err)
},
},
{
name: "deduplicate_and_sort_tags",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user, err := db.CreateUser(types.User{Name: "test-tags-2"})
require.NoError(t, err)
expectedTags := []string{"tag:test1", "tag:test2"}
tagsWithDuplicate := []string{"tag:test1", "tag:test2", "tag:test2"}
_, err = db.CreatePreAuthKey(user.TypedID(), false, false, nil, tagsWithDuplicate)
require.NoError(t, err)
listedPaks, err := db.ListPreAuthKeys()
require.NoError(t, err)
require.Len(t, listedPaks, 1)
gotTags := listedPaks[0].Proto().GetAclTags()
slices.Sort(gotTags)
assert.Equal(t, expectedTags, gotTags)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
tt.test(t, db)
})
}
}
func TestCannotDeleteAssignedPreAuthKey(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user, err := db.CreateUser(types.User{Name: "test8"})
require.NoError(t, err)
key, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:good"})
require.NoError(t, err)
node := types.Node{
ID: 0,
Hostname: "testest",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &key.ID,
}
err = db.DB.Save(&node).Error
require.NoError(t, err)
err = db.DB.Delete(&types.PreAuthKey{ID: key.ID}).Error
require.ErrorContains(t, err, "constraint failed: FOREIGN KEY constraint failed")
}
func TestPreAuthKeyAuthentication(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user := db.CreateUserForTest("test-user")
tests := []struct {
name string
setupKey func() string // Returns key string to test
wantFindErr bool // Error when finding the key
wantValidateErr bool // Error when validating the key
validateResult func(*testing.T, *types.PreAuthKey)
}{
{
name: "legacy_key_plaintext",
setupKey: func() string {
// Insert a legacy key directly with raw SQL (simulating an existing
// production key). Raw SQL bypasses GORM's handling and leaves the prefix
// column unset, which is how legacy keys exist in production databases.
legacyKey := "abc123def456ghi789jkl012mno345pqr678stu901vwx234yz"
now := time.Now()
// Use raw SQL to insert with empty prefix to avoid UNIQUE constraint
err := db.DB.Exec(`
INSERT INTO pre_auth_keys (key, user_id, reusable, ephemeral, used, created_at)
VALUES (?, ?, ?, ?, ?, ?)
`, legacyKey, user.ID, true, false, false, now).Error
require.NoError(t, err)
return legacyKey
},
wantFindErr: false,
wantValidateErr: false,
validateResult: func(t *testing.T, pak *types.PreAuthKey) {
t.Helper()
assert.Equal(t, user.ID, *pak.UserID)
assert.NotEmpty(t, pak.Key) // Legacy keys have Key populated
assert.Empty(t, pak.Prefix) // Legacy keys have empty Prefix
assert.Nil(t, pak.Hash) // Legacy keys have nil Hash
},
},
{
name: "new_key_bcrypt",
setupKey: func() string {
// Create new key via API
keyStr, err := db.CreatePreAuthKey(
user.TypedID(),
true, false, nil, []string{"tag:test"},
)
require.NoError(t, err)
return keyStr.Key
},
wantFindErr: false,
wantValidateErr: false,
validateResult: func(t *testing.T, pak *types.PreAuthKey) {
t.Helper()
assert.Equal(t, user.ID, *pak.UserID)
assert.Empty(t, pak.Key) // New keys have empty Key
assert.NotEmpty(t, pak.Prefix) // New keys have Prefix
assert.NotNil(t, pak.Hash) // New keys have Hash
assert.Len(t, pak.Prefix, 12) // Prefix is 12 chars
},
},
{
name: "new_key_format_validation",
setupKey: func() string {
keyStr, err := db.CreatePreAuthKey(
user.TypedID(),
true, false, nil, nil,
)
require.NoError(t, err)
// Verify format: hskey-auth-{12-char-prefix}-{64-char-hash}
// Use fixed-length parsing since prefix/hash can contain dashes (base64 URL-safe)
assert.True(t, strings.HasPrefix(keyStr.Key, "hskey-auth-"))
// Extract prefix and hash using fixed-length parsing like the real code does
_, prefixAndHash, found := strings.Cut(keyStr.Key, "hskey-auth-")
assert.True(t, found)
assert.GreaterOrEqual(t, len(prefixAndHash), 12+1+64) // prefix + '-' + hash minimum
prefix := prefixAndHash[:12]
assert.Len(t, prefix, 12) // Prefix is 12 chars
assert.Equal(t, byte('-'), prefixAndHash[12]) // Separator
hash := prefixAndHash[13:]
assert.Len(t, hash, 64) // Hash is 64 chars
return keyStr.Key
},
wantFindErr: false,
wantValidateErr: false,
},
{
name: "invalid_bcrypt_hash",
setupKey: func() string {
// Create valid key
key, err := db.CreatePreAuthKey(
user.TypedID(),
true, false, nil, nil,
)
require.NoError(t, err)
keyStr := key.Key
// Return key with tampered hash using fixed-length parsing
_, prefixAndHash, _ := strings.Cut(keyStr, "hskey-auth-")
prefix := prefixAndHash[:12]
return "hskey-auth-" + prefix + "-" + "wrong_hash_here_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "empty_key",
setupKey: func() string {
return ""
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "key_too_short",
setupKey: func() string {
return "hskey-auth-short"
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "missing_separator",
setupKey: func() string {
return "hskey-auth-ABCDEFGHIJKLabcdefghijklmnopqrstuvwxyz1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ"
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "hash_too_short",
setupKey: func() string {
return "hskey-auth-ABCDEFGHIJKL-short"
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "prefix_with_invalid_chars",
setupKey: func() string {
return "hskey-auth-ABC$EF@HIJKL-" + strings.Repeat("a", 64)
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "hash_with_invalid_chars",
setupKey: func() string {
return "hskey-auth-ABCDEFGHIJKL-" + "invalid$chars" + strings.Repeat("a", 54)
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "prefix_not_found_in_db",
setupKey: func() string {
// Create a validly formatted key but with a prefix that doesn't exist
return "hskey-auth-NotInDB12345-" + strings.Repeat("a", 64)
},
wantFindErr: true,
wantValidateErr: false,
},
{
name: "expired_legacy_key",
setupKey: func() string {
legacyKey := "expired_legacy_key_123456789012345678901234"
now := time.Now()
expiration := time.Now().Add(-1 * time.Hour) // Expired 1 hour ago
// Use raw SQL to avoid UNIQUE constraint on empty prefix
err := db.DB.Exec(`
INSERT INTO pre_auth_keys (key, user_id, reusable, ephemeral, used, created_at, expiration)
VALUES (?, ?, ?, ?, ?, ?, ?)
`, legacyKey, user.ID, true, false, false, now, expiration).Error
require.NoError(t, err)
return legacyKey
},
wantFindErr: false,
wantValidateErr: true,
},
{
name: "used_single_use_legacy_key",
setupKey: func() string {
legacyKey := "used_legacy_key_123456789012345678901234567"
now := time.Now()
// Use raw SQL to avoid UNIQUE constraint on empty prefix
err := db.DB.Exec(`
INSERT INTO pre_auth_keys (key, user_id, reusable, ephemeral, used, created_at)
VALUES (?, ?, ?, ?, ?, ?)
`, legacyKey, user.ID, false, false, true, now).Error
require.NoError(t, err)
return legacyKey
},
wantFindErr: false,
wantValidateErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
keyStr := tt.setupKey()
pak, err := db.GetPreAuthKey(keyStr)
if tt.wantFindErr {
assert.Error(t, err)
return
}
require.NoError(t, err)
require.NotNil(t, pak)
// Check validation if needed
if tt.wantValidateErr {
err := pak.Validate()
assert.Error(t, err)
return
}
if tt.validateResult != nil {
tt.validateResult(t, pak)
}
})
}
}
func TestMultipleLegacyKeysAllowed(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user, err := db.CreateUser(types.User{Name: "test-legacy"})
require.NoError(t, err)
// Create multiple legacy keys by directly inserting with empty prefix
// This simulates the migration scenario where existing databases have multiple
// plaintext keys without prefix/hash fields
now := time.Now()
for i := range 5 {
legacyKey := fmt.Sprintf("legacy_key_%d_%s", i, strings.Repeat("x", 40))
err := db.DB.Exec(`
INSERT INTO pre_auth_keys (key, prefix, hash, user_id, reusable, ephemeral, used, created_at)
VALUES (?, '', NULL, ?, ?, ?, ?, ?)
`, legacyKey, user.ID, true, false, false, now).Error
require.NoError(t, err, "should allow multiple legacy keys with empty prefix")
}
// Verify all legacy keys can be retrieved
var legacyKeys []types.PreAuthKey
err = db.DB.Where("prefix = '' OR prefix IS NULL").Find(&legacyKeys).Error
require.NoError(t, err)
assert.Len(t, legacyKeys, 5, "should have created 5 legacy keys")
// Now create new bcrypt-based keys - these should have unique prefixes
key1, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
assert.NotEmpty(t, key1.Key)
key2, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
assert.NotEmpty(t, key2.Key)
// Verify the new keys have different prefixes
pak1, err := db.GetPreAuthKey(key1.Key)
require.NoError(t, err)
assert.NotEmpty(t, pak1.Prefix)
pak2, err := db.GetPreAuthKey(key2.Key)
require.NoError(t, err)
assert.NotEmpty(t, pak2.Prefix)
assert.NotEqual(t, pak1.Prefix, pak2.Prefix, "new keys should have unique prefixes")
// Verify we cannot manually insert duplicate non-empty prefixes
duplicatePrefix := "test_prefix1"
hash1 := []byte("hash1")
hash2 := []byte("hash2")
// First insert should succeed
err = db.DB.Exec(`
INSERT INTO pre_auth_keys (key, prefix, hash, user_id, reusable, ephemeral, used, created_at)
VALUES ('', ?, ?, ?, ?, ?, ?, ?)
`, duplicatePrefix, hash1, user.ID, true, false, false, now).Error
require.NoError(t, err, "first key with prefix should succeed")
// Second insert with same prefix should fail
err = db.DB.Exec(`
INSERT INTO pre_auth_keys (key, prefix, hash, user_id, reusable, ephemeral, used, created_at)
VALUES ('', ?, ?, ?, ?, ?, ?, ?)
`, duplicatePrefix, hash2, user.ID, true, false, false, now).Error
require.Error(t, err, "duplicate non-empty prefix should be rejected")
assert.Contains(t, err.Error(), "UNIQUE constraint failed", "should fail with UNIQUE constraint error")
}
================================================
FILE: hscontrol/db/schema.sql
================================================
-- This file is the representation of the SQLite schema of Headscale.
-- It is the "source of truth" and is used to validate any migrations
-- that are run against the database to ensure it ends in the expected state.
CREATE TABLE migrations(id text,PRIMARY KEY(id));
CREATE TABLE users(
id integer PRIMARY KEY AUTOINCREMENT,
name text,
display_name text,
email text,
provider_identifier text,
provider text,
profile_pic_url text,
created_at datetime,
updated_at datetime,
deleted_at datetime
);
CREATE INDEX idx_users_deleted_at ON users(deleted_at);
-- The following three UNIQUE indexes work together to enforce the user identity model:
--
-- 1. Users can be either local (provider_identifier is NULL) or from external providers (provider_identifier set)
-- 2. Each external provider identifier must be unique across the system
-- 3. Local usernames must be unique among local users
-- 4. The same username can exist across different providers with different identifiers
--
-- Examples:
-- - Can create local user "alice" (provider_identifier=NULL)
-- - Can create external user "alice" with GitHub (name="alice", provider_identifier="alice_github")
-- - Can create external user "alice" with Google (name="alice", provider_identifier="alice_google")
-- - Cannot create another local user "alice" (blocked by idx_name_no_provider_identifier)
-- - Cannot create another user with provider_identifier="alice_github" (blocked by idx_provider_identifier)
-- - Cannot create user "bob" with provider_identifier="alice_github" (blocked by idx_name_provider_identifier)
CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL;
CREATE TABLE pre_auth_keys(
id integer PRIMARY KEY AUTOINCREMENT,
key text,
prefix text,
hash blob,
user_id integer,
reusable numeric,
ephemeral numeric DEFAULT false,
used numeric DEFAULT false,
tags text,
expiration datetime,
created_at datetime,
CONSTRAINT fk_pre_auth_keys_user FOREIGN KEY(user_id) REFERENCES users(id) ON DELETE SET NULL
);
CREATE UNIQUE INDEX idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != '';
CREATE TABLE api_keys(
id integer PRIMARY KEY AUTOINCREMENT,
prefix text,
hash blob,
expiration datetime,
last_seen datetime,
created_at datetime
);
CREATE UNIQUE INDEX idx_api_keys_prefix ON api_keys(prefix);
CREATE TABLE nodes(
id integer PRIMARY KEY AUTOINCREMENT,
machine_key text,
node_key text,
disco_key text,
endpoints text,
host_info text,
ipv4 text,
ipv6 text,
hostname text,
given_name varchar(63),
-- user_id is NULL for tagged nodes (owned by tags, not a user).
-- Only set for user-owned nodes (no tags).
user_id integer,
register_method text,
tags text,
auth_key_id integer,
last_seen datetime,
expiry datetime,
approved_routes text,
created_at datetime,
updated_at datetime,
deleted_at datetime,
CONSTRAINT fk_nodes_user FOREIGN KEY(user_id) REFERENCES users(id) ON DELETE CASCADE,
CONSTRAINT fk_nodes_auth_key FOREIGN KEY(auth_key_id) REFERENCES pre_auth_keys(id)
);
CREATE TABLE policies(
id integer PRIMARY KEY AUTOINCREMENT,
data text,
created_at datetime,
updated_at datetime,
deleted_at datetime
);
CREATE INDEX idx_policies_deleted_at ON policies(deleted_at);
CREATE TABLE database_versions(
id integer PRIMARY KEY,
version text NOT NULL,
updated_at datetime
);
================================================
FILE: hscontrol/db/sqliteconfig/config.go
================================================
// Package sqliteconfig provides type-safe configuration for SQLite databases
// with proper enum validation and URL generation for modernc.org/sqlite driver.
package sqliteconfig
import (
"errors"
"fmt"
"strings"
)
// Errors returned by config validation.
var (
ErrPathEmpty = errors.New("path cannot be empty")
ErrBusyTimeoutNegative = errors.New("busy_timeout must be >= 0")
ErrInvalidJournalMode = errors.New("invalid journal_mode")
ErrInvalidAutoVacuum = errors.New("invalid auto_vacuum")
ErrWALAutocheckpoint = errors.New("wal_autocheckpoint must be >= -1")
ErrInvalidSynchronous = errors.New("invalid synchronous")
ErrInvalidTxLock = errors.New("invalid txlock")
)
const (
// DefaultBusyTimeout is the default busy timeout in milliseconds.
DefaultBusyTimeout = 10000
)
// JournalMode represents SQLite journal_mode pragma values.
// Journal modes control how SQLite handles write transactions and crash recovery.
//
// Performance vs Durability Tradeoffs:
//
// WAL (Write-Ahead Logging) - Recommended for production:
// - Best performance for concurrent reads/writes
// - Readers don't block writers, writers don't block readers
// - Excellent crash recovery with minimal data loss risk
// - Uses additional .wal and .shm files
// - Default choice for Headscale production deployments
//
// DELETE - Traditional rollback journal:
// - Good performance for single-threaded access
// - Readers block writers and vice versa
// - Reliable crash recovery but with exclusive locking
// - Creates temporary journal files during transactions
// - Suitable for low-concurrency scenarios
//
// TRUNCATE - Similar to DELETE but faster cleanup:
// - Slightly better performance than DELETE
// - Same concurrency limitations as DELETE
// - Faster transaction commit by truncating instead of deleting journal
//
// PERSIST - Journal file remains between transactions:
// - Avoids file creation/deletion overhead
// - Same concurrency limitations as DELETE
// - Good for frequent small transactions
//
// MEMORY - Journal kept in memory:
// - Fastest performance but NO crash recovery
// - Data loss risk on power failure or crash
// - Only suitable for temporary or non-critical data
//
// OFF - No journaling:
// - Maximum performance but NO transaction safety
// - High risk of database corruption on crash
// - Should only be used for read-only or disposable databases
type JournalMode string
const (
// JournalModeWAL enables Write-Ahead Logging (RECOMMENDED for production).
// Best concurrent performance + crash recovery. Uses additional .wal/.shm files.
JournalModeWAL JournalMode = "WAL"
// JournalModeDelete uses traditional rollback journaling.
// Good single-threaded performance, readers block writers. Creates temp journal files.
JournalModeDelete JournalMode = "DELETE"
// JournalModeTruncate is like DELETE but with faster cleanup.
// Slightly better performance than DELETE, same safety with exclusive locking.
JournalModeTruncate JournalMode = "TRUNCATE"
// JournalModePersist keeps journal file between transactions.
// Good for frequent transactions, avoids file creation/deletion overhead.
JournalModePersist JournalMode = "PERSIST"
// JournalModeMemory keeps journal in memory (DANGEROUS).
// Fastest performance but NO crash recovery - data loss on power failure.
JournalModeMemory JournalMode = "MEMORY"
// JournalModeOff disables journaling entirely (EXTREMELY DANGEROUS).
// Maximum performance but high corruption risk. Only for disposable databases.
JournalModeOff JournalMode = "OFF"
)
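// Illustrative (a sketch; the path is hypothetical): overriding the journal
// mode for a low-concurrency, single-writer deployment while keeping the
// other production defaults from Default().
//
//	cfg := Default("/var/lib/headscale/db.sqlite")
//	cfg.JournalMode = JournalModeDelete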
// IsValid returns true if the JournalMode is valid.
func (j JournalMode) IsValid() bool {
switch j {
case JournalModeWAL, JournalModeDelete, JournalModeTruncate,
JournalModePersist, JournalModeMemory, JournalModeOff:
return true
default:
return false
}
}
// String returns the string representation.
func (j JournalMode) String() string {
return string(j)
}
// AutoVacuum represents SQLite auto_vacuum pragma values.
// Auto-vacuum controls how SQLite reclaims space from deleted data.
//
// Performance vs Storage Tradeoffs:
//
// INCREMENTAL - Recommended for production:
// - Reclaims space gradually during normal operations
// - Minimal performance impact on writes
// - Database size shrinks automatically over time
// - Can manually trigger with PRAGMA incremental_vacuum
// - Good balance of space efficiency and performance
//
// FULL - Automatic space reclamation:
// - Immediately reclaims space on every DELETE/DROP
// - Higher write overhead due to page reorganization
// - Keeps database file size minimal
// - Can cause significant slowdowns on large deletions
// - Best for applications with frequent deletes and limited storage
//
// NONE - No automatic space reclamation:
// - Fastest write performance (no vacuum overhead)
// - Database file only grows, never shrinks
// - Deleted space is reused but file size remains large
// - Requires manual VACUUM to reclaim space
// - Best for write-heavy workloads where storage isn't constrained
type AutoVacuum string
const (
// AutoVacuumNone disables automatic space reclamation.
// Fastest writes, file only grows. Requires manual VACUUM to reclaim space.
AutoVacuumNone AutoVacuum = "NONE"
// AutoVacuumFull immediately reclaims space on every DELETE/DROP.
// Minimal file size but slower writes. Can impact performance on large deletions.
AutoVacuumFull AutoVacuum = "FULL"
// AutoVacuumIncremental reclaims space gradually (RECOMMENDED for production).
// Good balance: minimal write impact, automatic space management over time.
AutoVacuumIncremental AutoVacuum = "INCREMENTAL"
)
// IsValid returns true if the AutoVacuum is valid.
func (a AutoVacuum) IsValid() bool {
switch a {
case AutoVacuumNone, AutoVacuumFull, AutoVacuumIncremental:
return true
default:
return false
}
}
// String returns the string representation.
func (a AutoVacuum) String() string {
return string(a)
}
// Synchronous represents SQLite synchronous pragma values.
// Synchronous mode controls how aggressively SQLite flushes data to disk.
//
// Performance vs Durability Tradeoffs:
//
// NORMAL - Recommended for production:
// - Good balance of performance and safety
// - Syncs at critical moments (transaction commits in WAL mode)
// - Very low risk of corruption, minimal performance impact
// - Safe with WAL mode even with power loss
// - Default choice for most production applications
//
// FULL - Maximum durability:
// - Syncs to disk after every write operation
// - Highest data safety, virtually no corruption risk
// - Significant performance penalty (up to 50% slower)
// - Recommended for critical data where corruption is unacceptable
//
// EXTRA - Paranoid mode:
// - Even more aggressive syncing than FULL
// - Maximum possible data safety
// - Severe performance impact
// - Only for extremely critical scenarios
//
// OFF - Maximum performance, minimum safety:
// - No syncing, relies on OS to flush data
// - Fastest possible performance
// - High risk of corruption on power failure or crash
// - Only suitable for non-critical or easily recreatable data
type Synchronous string
const (
// SynchronousOff disables syncing (DANGEROUS).
// Fastest performance but high corruption risk on power failure. Avoid in production.
SynchronousOff Synchronous = "OFF"
// SynchronousNormal provides balanced performance and safety (RECOMMENDED).
// Good performance with low corruption risk. Safe with WAL mode on power loss.
SynchronousNormal Synchronous = "NORMAL"
// SynchronousFull provides maximum durability with performance cost.
// Syncs after every write. Up to 50% slower but virtually no corruption risk.
SynchronousFull Synchronous = "FULL"
// SynchronousExtra provides paranoid-level data safety (EXTREME).
// Maximum safety with severe performance impact. Rarely needed in practice.
SynchronousExtra Synchronous = "EXTRA"
)
// IsValid returns true if the Synchronous is valid.
func (s Synchronous) IsValid() bool {
switch s {
case SynchronousOff, SynchronousNormal, SynchronousFull, SynchronousExtra:
return true
default:
return false
}
}
// String returns the string representation.
func (s Synchronous) String() string {
return string(s)
}
// TxLock represents SQLite transaction lock mode.
// Transaction lock mode determines when write locks are acquired during transactions.
//
// Lock Acquisition Behavior:
//
// DEFERRED - SQLite default, acquire lock lazily:
// - Transaction starts without any lock
// - First read acquires SHARED lock
// - First write attempts to upgrade to RESERVED lock
// - If another transaction holds RESERVED: SQLITE_BUSY (potential deadlock)
// - Can cause deadlocks when multiple connections attempt concurrent writes
//
// IMMEDIATE - Recommended for write-heavy workloads:
// - Transaction immediately acquires RESERVED lock at BEGIN
// - If lock unavailable, waits up to busy_timeout before failing
// - Other writers queue orderly instead of deadlocking
// - Prevents the upgrade-lock deadlock scenario
// - Slight overhead for read-only transactions that don't need locks
//
// EXCLUSIVE - Maximum isolation:
// - Transaction immediately acquires EXCLUSIVE lock at BEGIN
// - No other connections can read or write
// - Highest isolation but lowest concurrency
// - Rarely needed in practice
type TxLock string
const (
// TxLockDeferred acquires locks lazily (SQLite default).
// Risk of SQLITE_BUSY deadlocks with concurrent writers. Use for read-heavy workloads.
TxLockDeferred TxLock = "deferred"
// TxLockImmediate acquires write lock immediately (RECOMMENDED for production).
// Prevents deadlocks by acquiring RESERVED lock at transaction start.
// Writers queue orderly, respecting busy_timeout.
TxLockImmediate TxLock = "immediate"
// TxLockExclusive acquires exclusive lock immediately.
// Maximum isolation, no concurrent reads or writes. Rarely needed.
TxLockExclusive TxLock = "exclusive"
)
// IsValid returns true if the TxLock is valid.
func (t TxLock) IsValid() bool {
switch t {
case TxLockDeferred, TxLockImmediate, TxLockExclusive, "":
return true
default:
return false
}
}
// String returns the string representation.
func (t TxLock) String() string {
return string(t)
}
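// Illustrative effect (a sketch based on Config.ToURL below; the path is
// hypothetical): a non-empty TxLock is emitted as a _txlock connection
// parameter, not as a _pragma:
//
//	file:/var/lib/headscale/db.sqlite?_txlock=immediate&_pragma=busy_timeout=10000&...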
// Config holds SQLite database configuration with type-safe enums.
// This configuration balances performance, durability, and operational requirements
// for Headscale's SQLite database usage patterns.
type Config struct {
Path string // file path or ":memory:"
BusyTimeout int // milliseconds (0 = default/disabled)
JournalMode JournalMode // journal mode (affects concurrency and crash recovery)
AutoVacuum AutoVacuum // auto vacuum mode (affects storage efficiency)
WALAutocheckpoint int // pages (-1 = default/not set, 0 = disabled, >0 = enabled)
Synchronous Synchronous // synchronous mode (affects durability vs performance)
ForeignKeys bool // enable foreign key constraints (data integrity)
TxLock TxLock // transaction lock mode (affects write concurrency)
}
// Default returns the production configuration optimized for Headscale's usage patterns.
// This configuration prioritizes:
// - Concurrent access (WAL mode for multiple readers/writers)
// - Data durability with good performance (NORMAL synchronous)
// - Automatic space management (INCREMENTAL auto-vacuum)
// - Data integrity (foreign key constraints enabled)
// - Safe concurrent writes (IMMEDIATE transaction lock)
// - Reasonable timeout for busy database scenarios (10s)
func Default(path string) *Config {
return &Config{
Path: path,
BusyTimeout: DefaultBusyTimeout,
JournalMode: JournalModeWAL,
AutoVacuum: AutoVacuumIncremental,
WALAutocheckpoint: 1000,
Synchronous: SynchronousNormal,
ForeignKeys: true,
TxLock: TxLockImmediate,
}
}
// Memory returns a configuration for in-memory databases.
func Memory() *Config {
return &Config{
Path: ":memory:",
WALAutocheckpoint: -1, // not set, use driver default
ForeignKeys: true,
}
}
// Validate checks if all configuration values are valid.
func (c *Config) Validate() error {
if c.Path == "" {
return ErrPathEmpty
}
if c.BusyTimeout < 0 {
return fmt.Errorf("%w, got %d", ErrBusyTimeoutNegative, c.BusyTimeout)
}
if c.JournalMode != "" && !c.JournalMode.IsValid() {
return fmt.Errorf("%w: %s", ErrInvalidJournalMode, c.JournalMode)
}
if c.AutoVacuum != "" && !c.AutoVacuum.IsValid() {
return fmt.Errorf("%w: %s", ErrInvalidAutoVacuum, c.AutoVacuum)
}
if c.WALAutocheckpoint < -1 {
return fmt.Errorf("%w, got %d", ErrWALAutocheckpoint, c.WALAutocheckpoint)
}
if c.Synchronous != "" && !c.Synchronous.IsValid() {
return fmt.Errorf("%w: %s", ErrInvalidSynchronous, c.Synchronous)
}
if c.TxLock != "" && !c.TxLock.IsValid() {
return fmt.Errorf("%w: %s", ErrInvalidTxLock, c.TxLock)
}
return nil
}
// ToURL builds a properly encoded SQLite connection string using _pragma parameters
// compatible with modernc.org/sqlite driver.
func (c *Config) ToURL() (string, error) {
err := c.Validate()
if err != nil {
return "", fmt.Errorf("invalid config: %w", err)
}
var pragmas []string
// Add pragma parameters only if they're set (non-zero/non-empty)
if c.BusyTimeout > 0 {
pragmas = append(pragmas, fmt.Sprintf("busy_timeout=%d", c.BusyTimeout))
}
if c.JournalMode != "" {
pragmas = append(pragmas, fmt.Sprintf("journal_mode=%s", c.JournalMode))
}
if c.AutoVacuum != "" {
pragmas = append(pragmas, fmt.Sprintf("auto_vacuum=%s", c.AutoVacuum))
}
if c.WALAutocheckpoint >= 0 {
pragmas = append(pragmas, fmt.Sprintf("wal_autocheckpoint=%d", c.WALAutocheckpoint))
}
if c.Synchronous != "" {
pragmas = append(pragmas, fmt.Sprintf("synchronous=%s", c.Synchronous))
}
if c.ForeignKeys {
pragmas = append(pragmas, "foreign_keys=ON")
}
// Handle different database types
var baseURL string
if c.Path == ":memory:" {
baseURL = ":memory:"
} else {
baseURL = "file:" + c.Path
}
// Build query parameters
queryParts := make([]string, 0, 1+len(pragmas))
// Add _txlock first (it's a connection parameter, not a pragma)
if c.TxLock != "" {
queryParts = append(queryParts, "_txlock="+string(c.TxLock))
}
// Add pragma parameters
for _, pragma := range pragmas {
queryParts = append(queryParts, "_pragma="+pragma)
}
if len(queryParts) > 0 {
baseURL += "?" + strings.Join(queryParts, "&")
}
return baseURL, nil
}
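// Illustrative usage (a sketch, not part of the original file; the path is
// hypothetical): build the DSN and open it with the modernc.org/sqlite
// driver, which registers itself under the name "sqlite".
//
//	cfg := Default("/var/lib/headscale/db.sqlite")
//	url, err := cfg.ToURL()
//	if err != nil {
//		return err
//	}
//	db, err := sql.Open("sqlite", url)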
================================================
FILE: hscontrol/db/sqliteconfig/config_test.go
================================================
package sqliteconfig
import (
"testing"
)
func TestJournalMode(t *testing.T) {
tests := []struct {
mode JournalMode
valid bool
}{
{JournalModeWAL, true},
{JournalModeDelete, true},
{JournalModeTruncate, true},
{JournalModePersist, true},
{JournalModeMemory, true},
{JournalModeOff, true},
{JournalMode("INVALID"), false},
{JournalMode(""), false},
}
for _, tt := range tests {
t.Run(string(tt.mode), func(t *testing.T) {
if got := tt.mode.IsValid(); got != tt.valid {
t.Errorf("JournalMode(%q).IsValid() = %v, want %v", tt.mode, got, tt.valid)
}
})
}
}
func TestAutoVacuum(t *testing.T) {
tests := []struct {
mode AutoVacuum
valid bool
}{
{AutoVacuumNone, true},
{AutoVacuumFull, true},
{AutoVacuumIncremental, true},
{AutoVacuum("INVALID"), false},
{AutoVacuum(""), false},
}
for _, tt := range tests {
t.Run(string(tt.mode), func(t *testing.T) {
if got := tt.mode.IsValid(); got != tt.valid {
t.Errorf("AutoVacuum(%q).IsValid() = %v, want %v", tt.mode, got, tt.valid)
}
})
}
}
func TestSynchronous(t *testing.T) {
tests := []struct {
mode Synchronous
valid bool
}{
{SynchronousOff, true},
{SynchronousNormal, true},
{SynchronousFull, true},
{SynchronousExtra, true},
{Synchronous("INVALID"), false},
{Synchronous(""), false},
}
for _, tt := range tests {
t.Run(string(tt.mode), func(t *testing.T) {
if got := tt.mode.IsValid(); got != tt.valid {
t.Errorf("Synchronous(%q).IsValid() = %v, want %v", tt.mode, got, tt.valid)
}
})
}
}
func TestTxLock(t *testing.T) {
tests := []struct {
mode TxLock
valid bool
}{
{TxLockDeferred, true},
{TxLockImmediate, true},
{TxLockExclusive, true},
{TxLock(""), true}, // empty is valid (uses driver default)
{TxLock("IMMEDIATE"), false}, // uppercase is invalid
{TxLock("INVALID"), false},
}
for _, tt := range tests {
name := string(tt.mode)
if name == "" {
name = "empty"
}
t.Run(name, func(t *testing.T) {
if got := tt.mode.IsValid(); got != tt.valid {
t.Errorf("TxLock(%q).IsValid() = %v, want %v", tt.mode, got, tt.valid)
}
})
}
}
func TestTxLockString(t *testing.T) {
tests := []struct {
mode TxLock
want string
}{
{TxLockDeferred, "deferred"},
{TxLockImmediate, "immediate"},
{TxLockExclusive, "exclusive"},
}
for _, tt := range tests {
t.Run(tt.want, func(t *testing.T) {
if got := tt.mode.String(); got != tt.want {
t.Errorf("TxLock.String() = %q, want %q", got, tt.want)
}
})
}
}
func TestConfigValidate(t *testing.T) {
tests := []struct {
name string
config *Config
wantErr bool
}{
{
name: "valid default config",
config: Default("/path/to/db.sqlite"),
},
{
name: "empty path",
config: &Config{
Path: "",
},
wantErr: true,
},
{
name: "negative busy timeout",
config: &Config{
Path: "/path/to/db.sqlite",
BusyTimeout: -1,
},
wantErr: true,
},
{
name: "invalid journal mode",
config: &Config{
Path: "/path/to/db.sqlite",
JournalMode: JournalMode("INVALID"),
},
wantErr: true,
},
{
name: "invalid txlock",
config: &Config{
Path: "/path/to/db.sqlite",
TxLock: TxLock("INVALID"),
},
wantErr: true,
},
{
name: "valid txlock immediate",
config: &Config{
Path: "/path/to/db.sqlite",
TxLock: TxLockImmediate,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.config.Validate()
if (err != nil) != tt.wantErr {
t.Errorf("Config.Validate() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func TestConfigToURL(t *testing.T) {
tests := []struct {
name string
config *Config
want string
}{
{
name: "default config includes txlock immediate",
config: Default("/path/to/db.sqlite"),
want: "file:/path/to/db.sqlite?_txlock=immediate&_pragma=busy_timeout=10000&_pragma=journal_mode=WAL&_pragma=auto_vacuum=INCREMENTAL&_pragma=wal_autocheckpoint=1000&_pragma=synchronous=NORMAL&_pragma=foreign_keys=ON",
},
{
name: "memory config",
config: Memory(),
want: ":memory:?_pragma=foreign_keys=ON",
},
{
name: "minimal config",
config: &Config{
Path: "/simple/db.sqlite",
WALAutocheckpoint: -1, // not set
},
want: "file:/simple/db.sqlite",
},
{
name: "custom config",
config: &Config{
Path: "/custom/db.sqlite",
BusyTimeout: 5000,
JournalMode: JournalModeDelete,
WALAutocheckpoint: -1, // not set
Synchronous: SynchronousFull,
ForeignKeys: true,
},
want: "file:/custom/db.sqlite?_pragma=busy_timeout=5000&_pragma=journal_mode=DELETE&_pragma=synchronous=FULL&_pragma=foreign_keys=ON",
},
{
name: "memory with custom timeout",
config: &Config{
Path: ":memory:",
BusyTimeout: 2000,
WALAutocheckpoint: -1, // not set
ForeignKeys: true,
},
want: ":memory:?_pragma=busy_timeout=2000&_pragma=foreign_keys=ON",
},
{
name: "wal autocheckpoint zero",
config: &Config{
Path: "/test.db",
WALAutocheckpoint: 0,
},
want: "file:/test.db?_pragma=wal_autocheckpoint=0",
},
{
name: "all options",
config: &Config{
Path: "/full.db",
BusyTimeout: 15000,
JournalMode: JournalModeWAL,
AutoVacuum: AutoVacuumFull,
WALAutocheckpoint: 1000,
Synchronous: SynchronousExtra,
ForeignKeys: true,
},
want: "file:/full.db?_pragma=busy_timeout=15000&_pragma=journal_mode=WAL&_pragma=auto_vacuum=FULL&_pragma=wal_autocheckpoint=1000&_pragma=synchronous=EXTRA&_pragma=foreign_keys=ON",
},
{
name: "with txlock immediate",
config: &Config{
Path: "/test.db",
BusyTimeout: 5000,
TxLock: TxLockImmediate,
WALAutocheckpoint: -1,
ForeignKeys: true,
},
want: "file:/test.db?_txlock=immediate&_pragma=busy_timeout=5000&_pragma=foreign_keys=ON",
},
{
name: "with txlock deferred",
config: &Config{
Path: "/test.db",
TxLock: TxLockDeferred,
WALAutocheckpoint: -1,
ForeignKeys: true,
},
want: "file:/test.db?_txlock=deferred&_pragma=foreign_keys=ON",
},
{
name: "with txlock exclusive",
config: &Config{
Path: "/test.db",
TxLock: TxLockExclusive,
WALAutocheckpoint: -1,
},
want: "file:/test.db?_txlock=exclusive",
},
{
name: "empty txlock omitted from URL",
config: &Config{
Path: "/test.db",
TxLock: "",
BusyTimeout: 1000,
WALAutocheckpoint: -1,
ForeignKeys: true,
},
want: "file:/test.db?_pragma=busy_timeout=1000&_pragma=foreign_keys=ON",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := tt.config.ToURL()
if err != nil {
t.Errorf("Config.ToURL() error = %v", err)
return
}
if got != tt.want {
t.Errorf("Config.ToURL() = %q, want %q", got, tt.want)
}
})
}
}
func TestConfigToURLInvalid(t *testing.T) {
config := &Config{
Path: "",
BusyTimeout: -1,
}
_, err := config.ToURL()
if err == nil {
t.Error("Config.ToURL() with invalid config should return error")
}
}
func TestDefaultConfigHasTxLockImmediate(t *testing.T) {
config := Default("/test.db")
if config.TxLock != TxLockImmediate {
t.Errorf("Default().TxLock = %q, want %q", config.TxLock, TxLockImmediate)
}
}
================================================
FILE: hscontrol/db/sqliteconfig/integration_test.go
================================================
package sqliteconfig
import (
"context"
"database/sql"
"path/filepath"
"strings"
"testing"
_ "modernc.org/sqlite"
)
const memoryDBPath = ":memory:"
// TestSQLiteDriverPragmaIntegration verifies that the modernc.org/sqlite driver
// correctly applies all pragma settings from URL parameters, ensuring they work
// the same as the old SQL PRAGMA statements approach.
func TestSQLiteDriverPragmaIntegration(t *testing.T) {
tests := []struct {
name string
config *Config
expected map[string]any
}{
{
name: "default configuration",
config: Default("/tmp/test.db"),
expected: map[string]any{
"busy_timeout": 10000,
"journal_mode": "wal",
"auto_vacuum": 2, // INCREMENTAL = 2
"wal_autocheckpoint": 1000,
"synchronous": 1, // NORMAL = 1
"foreign_keys": 1, // ON = 1
},
},
{
name: "memory database with foreign keys",
config: Memory(),
expected: map[string]any{
"foreign_keys": 1, // ON = 1
},
},
{
name: "custom configuration",
config: &Config{
Path: "/tmp/custom.db",
BusyTimeout: 5000,
JournalMode: JournalModeDelete,
AutoVacuum: AutoVacuumFull,
WALAutocheckpoint: 1000,
Synchronous: SynchronousFull,
ForeignKeys: true,
},
expected: map[string]any{
"busy_timeout": 5000,
"journal_mode": "delete",
"auto_vacuum": 1, // FULL = 1
"wal_autocheckpoint": 1000,
"synchronous": 2, // FULL = 2
"foreign_keys": 1, // ON = 1
},
},
{
name: "foreign keys disabled",
config: &Config{
Path: "/tmp/no_fk.db",
ForeignKeys: false,
},
expected: map[string]any{
// foreign_keys should not be set (defaults to 0/OFF)
"foreign_keys": 0,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create temporary database file if not memory
if tt.config.Path != memoryDBPath {
tempDir := t.TempDir()
dbPath := filepath.Join(tempDir, "test.db")
// Update config with actual temp path
configCopy := *tt.config
configCopy.Path = dbPath
tt.config = &configCopy
}
// Generate URL and open database
url, err := tt.config.ToURL()
if err != nil {
t.Fatalf("Failed to generate URL: %v", err)
}
t.Logf("Opening database with URL: %s", url)
db, err := sql.Open("sqlite", url)
if err != nil {
t.Fatalf("Failed to open database: %v", err)
}
defer db.Close()
// Test connection
ctx := context.Background()
err = db.PingContext(ctx)
if err != nil {
t.Fatalf("Failed to ping database: %v", err)
}
// Verify each expected pragma setting
for pragma, expectedValue := range tt.expected {
t.Run("pragma_"+pragma, func(t *testing.T) {
var actualValue any
query := "PRAGMA " + pragma
err := db.QueryRowContext(ctx, query).Scan(&actualValue)
if err != nil {
t.Fatalf("Failed to query %s: %v", query, err)
}
t.Logf("%s: expected=%v, actual=%v", pragma, expectedValue, actualValue)
// Handle type conversion for comparison
switch expected := expectedValue.(type) {
case int:
if actual, ok := actualValue.(int64); ok {
if int64(expected) != actual {
t.Errorf("%s: expected %d, got %d", pragma, expected, actual)
}
} else {
t.Errorf("%s: expected int %d, got %T %v", pragma, expected, actualValue, actualValue)
}
case string:
if actual, ok := actualValue.(string); ok {
if expected != actual {
t.Errorf("%s: expected %q, got %q", pragma, expected, actual)
}
} else {
t.Errorf("%s: expected string %q, got %T %v", pragma, expected, actualValue, actualValue)
}
default:
t.Errorf("Unsupported expected type for %s: %T", pragma, expectedValue)
}
})
}
})
}
}
// TestForeignKeyConstraintEnforcement verifies that foreign key constraints
// are actually enforced when enabled via URL parameters.
func TestForeignKeyConstraintEnforcement(t *testing.T) {
tempDir := t.TempDir()
dbPath := filepath.Join(tempDir, "fk_test.db")
config := Default(dbPath)
url, err := config.ToURL()
if err != nil {
t.Fatalf("Failed to generate URL: %v", err)
}
db, err := sql.Open("sqlite", url)
if err != nil {
t.Fatalf("Failed to open database: %v", err)
}
defer db.Close()
ctx := context.Background()
// Create test tables with foreign key relationship
schema := `
CREATE TABLE parent (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE child (
id INTEGER PRIMARY KEY,
parent_id INTEGER NOT NULL,
name TEXT NOT NULL,
FOREIGN KEY (parent_id) REFERENCES parent(id)
);
`
_, err = db.ExecContext(ctx, schema)
if err != nil {
t.Fatalf("Failed to create schema: %v", err)
}
// Insert parent record
_, err = db.ExecContext(ctx, "INSERT INTO parent (id, name) VALUES (1, 'Parent 1')")
if err != nil {
t.Fatalf("Failed to insert parent: %v", err)
}
// Test 1: Valid foreign key should work
_, err = db.ExecContext(ctx, "INSERT INTO child (id, parent_id, name) VALUES (1, 1, 'Child 1')")
if err != nil {
t.Fatalf("Valid foreign key insert failed: %v", err)
}
// Test 2: Invalid foreign key should fail
_, err = db.ExecContext(ctx, "INSERT INTO child (id, parent_id, name) VALUES (2, 999, 'Child 2')")
if err == nil {
t.Error("Expected foreign key constraint violation, but insert succeeded")
} else if !strings.Contains(err.Error(), "FOREIGN KEY constraint failed") {
t.Errorf("Expected foreign key constraint error, got: %v", err)
} else {
t.Logf("✓ Foreign key constraint correctly enforced: %v", err)
}
// Test 3: Deleting referenced parent should fail
_, err = db.ExecContext(ctx, "DELETE FROM parent WHERE id = 1")
if err == nil {
t.Error("Expected foreign key constraint violation when deleting referenced parent")
} else if !strings.Contains(err.Error(), "FOREIGN KEY constraint failed") {
t.Errorf("Expected foreign key constraint error on delete, got: %v", err)
} else {
t.Logf("✓ Foreign key constraint correctly prevented parent deletion: %v", err)
}
}
// TestJournalModeValidation verifies that the journal_mode setting is applied correctly.
func TestJournalModeValidation(t *testing.T) {
modes := []struct {
mode JournalMode
expected string
}{
{JournalModeWAL, "wal"},
{JournalModeDelete, "delete"},
{JournalModeTruncate, "truncate"},
{JournalModeMemory, "memory"},
}
for _, tt := range modes {
t.Run(string(tt.mode), func(t *testing.T) {
tempDir := t.TempDir()
dbPath := filepath.Join(tempDir, "journal_test.db")
config := &Config{
Path: dbPath,
JournalMode: tt.mode,
ForeignKeys: true,
}
url, err := config.ToURL()
if err != nil {
t.Fatalf("Failed to generate URL: %v", err)
}
db, err := sql.Open("sqlite", url)
if err != nil {
t.Fatalf("Failed to open database: %v", err)
}
defer db.Close()
var actualMode string
err = db.QueryRowContext(context.Background(), "PRAGMA journal_mode").Scan(&actualMode)
if err != nil {
t.Fatalf("Failed to query journal_mode: %v", err)
}
if actualMode != tt.expected {
t.Errorf("journal_mode: expected %q, got %q", tt.expected, actualMode)
} else {
t.Logf("✓ journal_mode correctly set to: %s", actualMode)
}
})
}
}
================================================
FILE: hscontrol/db/suite_test.go
================================================
package db
import (
"log"
"net/url"
"os"
"strconv"
"strings"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
"zombiezen.com/go/postgrestest"
)
func newSQLiteTestDB() (*HSDatabase, error) {
tmpDir, err := os.MkdirTemp("", "headscale-db-test-*")
if err != nil {
return nil, err
}
log.Printf("database path: %s", tmpDir+"/headscale_test.db")
zerolog.SetGlobalLevel(zerolog.Disabled)
db, err := NewHeadscaleDatabase(
&types.Config{
Database: types.DatabaseConfig{
Type: types.DatabaseSqlite,
Sqlite: types.SqliteConfig{
Path: tmpDir + "/headscale_test.db",
},
},
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
},
emptyCache(),
)
if err != nil {
return nil, err
}
return db, nil
}
func newPostgresTestDB(t *testing.T) *HSDatabase {
t.Helper()
return newHeadscaleDBFromPostgresURL(t, newPostgresDBForTest(t))
}
func newPostgresDBForTest(t *testing.T) *url.URL {
t.Helper()
ctx := t.Context()
srv, err := postgrestest.Start(ctx)
if err != nil {
t.Skipf("start postgres: %s", err)
}
t.Cleanup(srv.Cleanup)
u, err := srv.CreateDatabase(ctx)
if err != nil {
t.Fatal(err)
}
t.Logf("created local postgres: %s", u)
pu, err := url.Parse(u)
if err != nil {
t.Fatal(err)
}
return pu
}
func newHeadscaleDBFromPostgresURL(t *testing.T, pu *url.URL) *HSDatabase {
t.Helper()
pass, _ := pu.User.Password()
port, _ := strconv.Atoi(pu.Port())
db, err := NewHeadscaleDatabase(
&types.Config{
Database: types.DatabaseConfig{
Type: types.DatabasePostgres,
Postgres: types.PostgresConfig{
Host: pu.Hostname(),
User: pu.User.Username(),
Name: strings.TrimLeft(pu.Path, "/"),
Pass: pass,
Port: port,
Ssl: "disable",
},
},
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
},
emptyCache(),
)
if err != nil {
t.Fatal(err)
}
return db
}
================================================
FILE: hscontrol/db/testdata/sqlite/failing-node-preauth-constraint_dump.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`));
INSERT INTO api_keys VALUES(1,'hFKcRjLyfw',X'243261243130242e68554a6739332e6658333061326457723637464f2e6146424c74726e4542474c6c746437597a4253534d6f3677326d3944664d61','2023-04-09 22:34:28.624250346+00:00','2023-07-08 22:34:28.559681279+00:00',NULL);
INSERT INTO api_keys VALUES(2,'88Wbitubag',X'243261243130246f7932506d53375033334b733861376e7745434f3665674e776e517659374b5474326a30686958446c6c55696c3568513948307665','2024-07-28 21:59:38.786936789+00:00','2024-10-26 21:59:38.724189498+00:00',NULL);
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,`tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
INSERT INTO nodes VALUES(1,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e63','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c160554f','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57759','hostname_1','given_name1',1,'cli','["tag:sshclient","tag:ssh"]',0,'2025-02-05 16:46:13.960213431+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-03-30 23:18:17.612740902+00:00','2025-02-05 16:46:13.960284003+00:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1');
INSERT INTO nodes VALUES(2,'mkey:f63dda7495db68077080364ba4109f48dee7a59310b9ed4968beb40d038eb622','nodekey:8186817337049e092e6ea02507091d8e9686924d46ad0e74a90370ec0113c440','discokey:28a2df7e73b8196c6859c94329443a28f9605b2b83541b685c1db666bd835775','hostname_2','given_name2',1,'cli','["tag:sshclient"]',0,'2024-07-30 17:37:24.266006395+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-03-30 23:20:01.05202704+00:00','2024-07-30 17:37:24.266082813+00:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2');
INSERT INTO nodes VALUES(3,'mkey:0af53661fedf5143af3ea79e596928302e51c9fc9f0ea9ed1f2bb7d54778b80e','nodekey:8defd8272fd2851601158b2444fc8d1ab12b6187ec5db154b7a83bb75b2ce952','discokey:ba9d1ffac1997acbd8d281b8711699daa77ed91691772683ebbfdaafa2518a52','hostname_3','given_name3',1,'cli','["tag:ssh"]',0,'2025-02-05 16:48:00.460606473+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-03-30 23:36:04.930844845+00:00','2025-02-05 16:48:00.460679869+00:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3');
INSERT INTO nodes VALUES(4,'mkey:365e2055485de89e65e63c13e426b1ec5d5606327d63955b38be1d3f8cbbac6c','nodekey:996b9814e405f572fc0338f91b0c53f3a3a9a5b1ae0d2846d179195778d50909','discokey:ed72cb545b46b3e2ed0332f9cb4d7f4e774ea5834e2cbadc43c9bf7918ef2503','hostname_4','given_name4',1,'cli','["tag:ssh"]',0,'2025-02-05 16:48:00.460607206+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-03-31 15:51:56.149734121+00:00','2025-02-05 16:48:00.46092239+00:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4');
INSERT INTO nodes VALUES(5,'mkey:1d04be488182a66cd7df4596ac59a40613eac6465a331af9ac6c91bb70754a25','nodekey:9b617f3e7941ac70b76f0e40c55543173e0432d4a9bb8bcb8b25d93b60a5da0e','discokey:15834557115cb889e8362e7f2cae1cfd7e78e754cb7310cff6b5c5b5d3027e35','hostname_5','given_name5',1,'cli','["tag:sshclient","tag:ssh"]',0,'2023-04-21 15:07:38.796218079+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-04-21 13:16:19.148836255+00:00','2024-04-17 15:39:21.339518261+00:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5');
INSERT INTO nodes VALUES(6,'mkey:ed649503734e31eafad7f884ac8ee36ba0922c57cda8b6946cb439b1ed645676','nodekey:200484e66b43012eca81ec8850e4b5d1dd8fa538dfebdaac718f202cd2f1f955','discokey:600651ed2436ce5a49e71b3980f93070d888e6d65d608a64be29fdeed9f7bd6b','hostname_6','given_name6',1,'cli','["tag:ssh"]',0,'2023-07-09 16:56:18.876491583+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-05-07 10:30:54.520661376+00:00','2024-04-17 15:39:23.182648721+00:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6');
CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE);
CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`));
INSERT INTO users VALUES(1,'2023-03-30 23:08:54.151102578+00:00','2023-03-30 23:08:54.151102578+00:00',NULL,'username_1','display_name_1','email_1@example.com',NULL,NULL,NULL);
DELETE FROM sqlite_sequence;
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.1_dump.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.2_dump.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/headscale_0.26.0_dump.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
INSERT INTO migrations VALUES('202505141324');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/headscale_0.26.1_dump-litestream.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
INSERT INTO migrations VALUES('202505141324');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
CREATE TABLE _litestream_seq (id INTEGER PRIMARY KEY, seq INTEGER);
CREATE TABLE _litestream_lock (id INTEGER);
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/headscale_0.26.1_dump.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
INSERT INTO migrations VALUES('202505141324');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/headscale_0.26.1_dump_schema-to-0.27.0-old-table-cleanup.sql
================================================
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
INSERT INTO migrations VALUES('202505141324');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
-- Create all the old tables we have had and ensure they are cleaned up.
CREATE TABLE `namespaces` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE TABLE `machines` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `kvs` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `shared_machines` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE TABLE `pre_auth_key_acl_tags` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `routes` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`);
CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`);
CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`);
COMMIT;
================================================
FILE: hscontrol/db/testdata/sqlite/request_tags_migration_test.sql
================================================
-- Test SQL dump for RequestTags migration (202601121700-migrate-hostinfo-request-tags)
-- and forced_tags->tags rename migration (202511131445-node-forced-tags-to-tags)
--
-- This dump simulates a 0.27.x database where:
-- - Tags from --advertise-tags were stored only in host_info.RequestTags
-- - The tags column is still named forced_tags
--
-- Test scenarios:
-- 1. Node with RequestTags that user is authorized for (should be migrated)
-- 2. Node with RequestTags that user is NOT authorized for (should be rejected)
-- 3. Node with existing forced_tags that should be preserved
-- 4. Node with RequestTags that overlap with existing tags (no duplicates)
-- 5. Node without RequestTags (should be unchanged)
-- 6. Node with RequestTags via group membership (should be migrated)
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
-- Migrations table - includes all migrations BEFORE the two tag migrations
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
INSERT INTO migrations VALUES('202505141324');
INSERT INTO migrations VALUES('202507021200');
INSERT INTO migrations VALUES('202510311551');
INSERT INTO migrations VALUES('202511101554-drop-old-idx');
INSERT INTO migrations VALUES('202511011637-preauthkey-bcrypt');
INSERT INTO migrations VALUES('202511122344-remove-newline-index');
-- Note: 202511131445-node-forced-tags-to-tags is NOT included - it will run
-- Note: 202601121700-migrate-hostinfo-request-tags is NOT included - it will run
-- Users table
-- Note: User names must match the usernames in the policy (with @)
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
INSERT INTO users VALUES(1,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'user1@example.com','User One','user1@example.com',NULL,NULL,NULL);
INSERT INTO users VALUES(2,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'user2@example.com','User Two','user2@example.com',NULL,NULL,NULL);
INSERT INTO users VALUES(3,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'admin1@example.com','Admin One','admin1@example.com',NULL,NULL,NULL);
-- Pre-auth keys table
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,`prefix` text,`hash` blob,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
-- API keys table
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
-- Nodes table - using OLD schema with forced_tags (not tags)
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
-- Node 1: user1 owns it, has RequestTags for tag:server (user1 is authorized for this tag)
-- Expected: tag:server should be added to tags
INSERT INTO nodes VALUES(1,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e01','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605501','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57701','[]','{"RequestTags":["tag:server"]}','100.64.0.1','fd7a:115c:a1e0::1','node1','node1',1,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Node 2: user1 owns it, has RequestTags for tag:unauthorized (user1 is NOT authorized for this tag)
-- Expected: tag:unauthorized should be rejected, tags stays empty
INSERT INTO nodes VALUES(2,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e02','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605502','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57702','[]','{"RequestTags":["tag:unauthorized"]}','100.64.0.2','fd7a:115c:a1e0::2','node2','node2',1,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Node 3: user2 owns it, has RequestTags for tag:client (user2 is authorized)
-- Also has existing forced_tags that should be preserved
-- Expected: tag:client added, tag:existing preserved
INSERT INTO nodes VALUES(3,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e03','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605503','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57703','[]','{"RequestTags":["tag:client"]}','100.64.0.3','fd7a:115c:a1e0::3','node3','node3',2,'oidc','["tag:existing"]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Node 4: user1 owns it, has RequestTags for tag:server which already exists in forced_tags
-- Expected: no duplicates, tags should be ["tag:server"]
INSERT INTO nodes VALUES(4,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e04','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605504','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57704','[]','{"RequestTags":["tag:server"]}','100.64.0.4','fd7a:115c:a1e0::4','node4','node4',1,'oidc','["tag:server"]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Node 5: user2 owns it, no RequestTags in host_info
-- Expected: tags unchanged (empty)
INSERT INTO nodes VALUES(5,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e05','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605505','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57705','[]','{}','100.64.0.5','fd7a:115c:a1e0::5','node5','node5',2,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Node 6: admin1 owns it, has RequestTags for tag:admin (admin1 is in group:admins which owns tag:admin)
-- Expected: tag:admin should be added via group membership
INSERT INTO nodes VALUES(6,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e06','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605506','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57706','[]','{"RequestTags":["tag:admin"]}','100.64.0.6','fd7a:115c:a1e0::6','node6','node6',3,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Node 7: user1 owns it, has multiple RequestTags (tag:server authorized, tag:forbidden not authorized)
-- Expected: tag:server added, tag:forbidden rejected
INSERT INTO nodes VALUES(7,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e07','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605507','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57707','[]','{"RequestTags":["tag:server","tag:forbidden"]}','100.64.0.7','fd7a:115c:a1e0::7','node7','node7',1,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL);
-- Policies table with tagOwners defining who can use which tags
-- Note: Usernames in policy must contain @ (e.g., user1@example.com or just user1@)
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
INSERT INTO policies VALUES(1,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'{
"groups": {
"group:admins": ["admin1@example.com"]
},
"tagOwners": {
"tag:server": ["user1@example.com"],
"tag:client": ["user1@example.com", "user2@example.com"],
"tag:admin": ["group:admins"]
},
"acls": [
{"action": "accept", "src": ["*"], "dst": ["*:*"]}
]
}');
-- Indexes (using exact format expected by schema validation)
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('users',3);
INSERT INTO sqlite_sequence VALUES('nodes',7);
INSERT INTO sqlite_sequence VALUES('policies',1);
CREATE INDEX idx_users_deleted_at ON users(deleted_at);
CREATE UNIQUE INDEX idx_api_keys_prefix ON api_keys(prefix);
CREATE INDEX idx_policies_deleted_at ON policies(deleted_at);
CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != '';
COMMIT;
================================================
FILE: hscontrol/db/text_serialiser.go
================================================
package db
import (
"context"
"encoding"
"errors"
"fmt"
"reflect"
"gorm.io/gorm/schema"
)
var (
errUnmarshalTextValue = errors.New("unmarshalling text value")
errUnsupportedType = errors.New("unsupported type")
errTextMarshalerOnly = errors.New("only encoding.TextMarshaler is supported")
)
// Adapted from https://github.com/xdg-go/strum/blob/main/types.go
var textUnmarshalerType = reflect.TypeFor[encoding.TextUnmarshaler]()
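// isTextUnmarshaler reports whether the value's type implements
// encoding.TextUnmarshaler.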
func isTextUnmarshaler(rv reflect.Value) bool {
return rv.Type().Implements(textUnmarshalerType)
}
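// maybeInstantiatePtr allocates a zero value for a nil pointer so that
// a subsequent UnmarshalText call has something to write into.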
func maybeInstantiatePtr(rv reflect.Value) {
if rv.Kind() == reflect.Ptr && rv.IsNil() {
np := reflect.New(rv.Type().Elem())
rv.Set(np)
}
}
func decodingError(name string, err error) error {
return fmt.Errorf("decoding to %s: %w", name, err)
}
// TextSerialiser implements the Serialiser interface for fields that
// have a type that implements encoding.TextUnmarshaler.
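// A serializer like this is typically registered with GORM via
// schema.RegisterSerializer("text", TextSerialiser{}) and selected per
// field with the `gorm:"serializer:text"` struct tag (registration is
// assumed to happen elsewhere in the package).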
type TextSerialiser struct{}
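// Scan implements the decoding half of GORM's serializer interface: it
// turns a raw database value ([]byte or string) into the destination
// field, whose type must implement encoding.TextUnmarshaler.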
func (TextSerialiser) Scan(ctx context.Context, field *schema.Field, dst reflect.Value, dbValue any) error {
fieldValue := reflect.New(field.FieldType)
// If the field is a pointer, we need to dereference it to get the actual type
// so we do not end up with a second pointer.
if fieldValue.Elem().Kind() == reflect.Ptr {
fieldValue = fieldValue.Elem()
}
if dbValue != nil {
var bytes []byte
switch v := dbValue.(type) {
case []byte:
bytes = v
case string:
bytes = []byte(v)
default:
return fmt.Errorf("%w: %#v", errUnmarshalTextValue, dbValue)
}
if isTextUnmarshaler(fieldValue) {
maybeInstantiatePtr(fieldValue)
f := fieldValue.MethodByName("UnmarshalText")
args := []reflect.Value{reflect.ValueOf(bytes)}
ret := f.Call(args)
if !ret[0].IsNil() {
if err, ok := ret[0].Interface().(error); ok {
return decodingError(field.Name, err)
}
}
// If the destination field is a pointer type, assign the value as a
// pointer; otherwise assign the dereferenced value directly.
dstField := field.ReflectValueOf(ctx, dst)
if dstField.Kind() == reflect.Ptr {
dstField.Set(fieldValue)
} else {
dstField.Set(fieldValue.Elem())
}
return nil
} else {
return fmt.Errorf("%w: %T", errUnsupportedType, fieldValue.Interface())
}
}
return nil
}
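// Value implements the encoding half of GORM's serializer interface:
// it marshals a field value implementing encoding.TextMarshaler into
// its textual form for storage.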
func (TextSerialiser) Value(ctx context.Context, field *schema.Field, dst reflect.Value, fieldValue any) (any, error) {
switch v := fieldValue.(type) {
case encoding.TextMarshaler:
// If the value is nil, return nil. A typed nil stored in an interface
// does not compare equal to nil, so we additionally check for a nil
// pointer via reflection:
// https://dev.to/arxeiss/in-go-nil-is-not-equal-to-nil-sometimes-jn8
if v == nil || (reflect.ValueOf(v).Kind() == reflect.Ptr && reflect.ValueOf(v).IsNil()) {
return nil, nil //nolint:nilnil // intentional: nil value for GORM serializer
}
b, err := v.MarshalText()
if err != nil {
return nil, err
}
return string(b), nil
default:
return nil, fmt.Errorf("%w, got %T", errTextMarshalerOnly, v)
}
}
================================================
FILE: hscontrol/db/user_update_test.go
================================================
package db
import (
"database/sql"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
)
// TestUserUpdatePreservesUnchangedFields verifies that updating a user
// preserves fields that aren't modified. This test validates the fix
// for using Updates() instead of Save() in UpdateUser-like operations.
func TestUserUpdatePreservesUnchangedFields(t *testing.T) {
database := dbForTest(t)
// Create a user with all fields set
initialUser := types.User{
Name: "testuser",
DisplayName: "Test User Display",
Email: "test@example.com",
ProviderIdentifier: sql.NullString{
String: "provider-123",
Valid: true,
},
}
createdUser, err := database.CreateUser(initialUser)
require.NoError(t, err)
require.NotNil(t, createdUser)
// Verify initial state
assert.Equal(t, "testuser", createdUser.Name)
assert.Equal(t, "Test User Display", createdUser.DisplayName)
assert.Equal(t, "test@example.com", createdUser.Email)
assert.True(t, createdUser.ProviderIdentifier.Valid)
assert.Equal(t, "provider-123", createdUser.ProviderIdentifier.String)
// Simulate what UpdateUser does: load user, modify one field, save
_, err = Write(database.DB, func(tx *gorm.DB) (*types.User, error) {
user, err := GetUserByID(tx, types.UserID(createdUser.ID))
if err != nil {
return nil, err
}
// Modify ONLY DisplayName
user.DisplayName = "Updated Display Name"
// This is the line being tested: it currently uses Save(), which writes ALL fields and can overwrite unchanged ones.
err = tx.Save(user).Error
if err != nil {
return nil, err
}
return user, nil
})
require.NoError(t, err)
// Read user back from database
updatedUser, err := Read(database.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByID(rx, types.UserID(createdUser.ID))
})
require.NoError(t, err)
// Verify that DisplayName was updated
assert.Equal(t, "Updated Display Name", updatedUser.DisplayName)
// CRITICAL: Verify that the other fields were NOT overwritten.
// With Save(), these assertions pass because the user object was
// loaded from the DB with all fields populated; with Updates() they
// pass as well, and Updates() is the safer choice.
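// (Background: GORM's Save() issues an UPDATE over every column of the
// model, while Updates() skips zero-value fields unless a map or
// Select() is used, so it cannot clobber columns it never touched.)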
assert.Equal(t, "testuser", updatedUser.Name, "Name should be preserved")
assert.Equal(t, "test@example.com", updatedUser.Email, "Email should be preserved")
assert.True(t, updatedUser.ProviderIdentifier.Valid, "ProviderIdentifier should be preserved")
assert.Equal(t, "provider-123", updatedUser.ProviderIdentifier.String, "ProviderIdentifier value should be preserved")
}
// TestUserUpdateWithUpdatesMethod tests that using Updates() instead of Save()
// works correctly and only updates modified fields.
func TestUserUpdateWithUpdatesMethod(t *testing.T) {
database := dbForTest(t)
// Create a user
initialUser := types.User{
Name: "testuser",
DisplayName: "Original Display",
Email: "original@example.com",
ProviderIdentifier: sql.NullString{
String: "provider-abc",
Valid: true,
},
}
createdUser, err := database.CreateUser(initialUser)
require.NoError(t, err)
// Update using Updates() method
_, err = Write(database.DB, func(tx *gorm.DB) (*types.User, error) {
user, err := GetUserByID(tx, types.UserID(createdUser.ID))
if err != nil {
return nil, err
}
// Modify multiple fields
user.DisplayName = "New Display"
user.Email = "new@example.com"
// Use Updates() instead of Save()
err = tx.Updates(user).Error
if err != nil {
return nil, err
}
return user, nil
})
require.NoError(t, err)
// Verify changes
updatedUser, err := Read(database.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByID(rx, types.UserID(createdUser.ID))
})
require.NoError(t, err)
// Verify updated fields
assert.Equal(t, "New Display", updatedUser.DisplayName)
assert.Equal(t, "new@example.com", updatedUser.Email)
// Verify preserved fields
assert.Equal(t, "testuser", updatedUser.Name)
assert.True(t, updatedUser.ProviderIdentifier.Valid)
assert.Equal(t, "provider-abc", updatedUser.ProviderIdentifier.String)
}
================================================
FILE: hscontrol/db/users.go
================================================
package db
import (
"errors"
"fmt"
"strconv"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"gorm.io/gorm"
)
var (
ErrUserExists = errors.New("user already exists")
ErrUserNotFound = errors.New("user not found")
ErrUserStillHasNodes = errors.New("user not empty: node(s) found")
ErrUserWhereInvalidCount = errors.New("expected 0 or 1 where User structs")
ErrUserNotUnique = errors.New("expected exactly one user")
)
func (hsdb *HSDatabase) CreateUser(user types.User) (*types.User, error) {
return Write(hsdb.DB, func(tx *gorm.DB) (*types.User, error) {
return CreateUser(tx, user)
})
}
// CreateUser creates a new User. It returns an error if the user could
// not be created or if a user with the same name already exists.
func CreateUser(tx *gorm.DB, user types.User) (*types.User, error) {
err := util.ValidateHostname(user.Name)
if err != nil {
return nil, err
}
err = tx.Create(&user).Error
if err != nil {
return nil, fmt.Errorf("creating user: %w", err)
}
return &user, nil
}
func (hsdb *HSDatabase) DestroyUser(uid types.UserID) error {
return hsdb.Write(func(tx *gorm.DB) error {
return DestroyUser(tx, uid)
})
}
// DestroyUser destroys a User. It returns an error if the User does not
// exist or if user-owned nodes are still associated with it.
// Tagged nodes have user_id = NULL, so they do not block deletion.
func DestroyUser(tx *gorm.DB, uid types.UserID) error {
user, err := GetUserByID(tx, uid)
if err != nil {
return err
}
nodes, err := ListNodesByUser(tx, uid)
if err != nil {
return err
}
if len(nodes) > 0 {
return ErrUserStillHasNodes
}
keys, err := ListPreAuthKeys(tx)
if err != nil {
return err
}
for _, key := range keys {
err = DestroyPreAuthKey(tx, key.ID)
if err != nil {
return err
}
}
if result := tx.Unscoped().Delete(&user); result.Error != nil {
return result.Error
}
return nil
}
func (hsdb *HSDatabase) RenameUser(uid types.UserID, newName string) error {
return hsdb.Write(func(tx *gorm.DB) error {
return RenameUser(tx, uid, newName)
})
}
var ErrCannotChangeOIDCUser = errors.New("cannot edit OIDC user")
// RenameUser renames a User. It returns an error if the User does not
// exist or if another User already has the new name.
func RenameUser(tx *gorm.DB, uid types.UserID, newName string) error {
var err error
oldUser, err := GetUserByID(tx, uid)
if err != nil {
return err
}
if err = util.ValidateHostname(newName); err != nil { //nolint:noinlineerr
return err
}
if oldUser.Provider == util.RegisterMethodOIDC {
return ErrCannotChangeOIDCUser
}
oldUser.Name = newName
err = tx.Updates(&oldUser).Error
if err != nil {
return err
}
return nil
}
func (hsdb *HSDatabase) GetUserByID(uid types.UserID) (*types.User, error) {
return GetUserByID(hsdb.DB, uid)
}
func GetUserByID(tx *gorm.DB, uid types.UserID) (*types.User, error) {
user := types.User{}
if result := tx.First(&user, "id = ?", uid); errors.Is(
result.Error,
gorm.ErrRecordNotFound,
) {
return nil, ErrUserNotFound
}
return &user, nil
}
func (hsdb *HSDatabase) GetUserByOIDCIdentifier(id string) (*types.User, error) {
return Read(hsdb.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByOIDCIdentifier(rx, id)
})
}
func GetUserByOIDCIdentifier(tx *gorm.DB, id string) (*types.User, error) {
user := types.User{}
if result := tx.First(&user, "provider_identifier = ?", id); errors.Is(
result.Error,
gorm.ErrRecordNotFound,
) {
return nil, ErrUserNotFound
}
return &user, nil
}
func (hsdb *HSDatabase) ListUsers(where ...*types.User) ([]types.User, error) {
return ListUsers(hsdb.DB, where...)
}
// ListUsers gets all the existing users.
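// For example, ListUsers(tx) returns every user, while
// ListUsers(tx, &types.User{Name: "alice"}) filters by name
// (cf. GetUserByName below).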
func ListUsers(tx *gorm.DB, where ...*types.User) ([]types.User, error) {
if len(where) > 1 {
return nil, fmt.Errorf("%w, got %d", ErrUserWhereInvalidCount, len(where))
}
var user *types.User
if len(where) == 1 {
user = where[0]
}
users := []types.User{}
err := tx.Where(user).Find(&users).Error
if err != nil {
return nil, err
}
return users, nil
}
// GetUserByName returns the user with the given name if exactly one
// such user exists, and an error otherwise.
func (hsdb *HSDatabase) GetUserByName(name string) (*types.User, error) {
users, err := hsdb.ListUsers(&types.User{Name: name})
if err != nil {
return nil, err
}
if len(users) == 0 {
return nil, ErrUserNotFound
}
if len(users) != 1 {
return nil, fmt.Errorf("%w, found %d", ErrUserNotUnique, len(users))
}
return &users[0], nil
}
// ListNodesByUser gets all the nodes owned by a given user.
func ListNodesByUser(tx *gorm.DB, uid types.UserID) (types.Nodes, error) {
nodes := types.Nodes{}
uidPtr := uint(uid)
err := tx.Preload("AuthKey").Preload("AuthKey.User").Preload("User").Where(&types.Node{UserID: &uidPtr}).Find(&nodes).Error
if err != nil {
return nil, err
}
return nodes, nil
}
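// CreateUserForTest creates a user with the given (or a default) name,
// panicking on failure. It may only be called from a test binary.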
func (hsdb *HSDatabase) CreateUserForTest(name ...string) *types.User {
if !testing.Testing() {
panic("CreateUserForTest can only be called during tests")
}
userName := "testuser"
if len(name) > 0 && name[0] != "" {
userName = name[0]
}
user, err := hsdb.CreateUser(types.User{Name: userName})
if err != nil {
panic(fmt.Sprintf("failed to create test user: %v", err))
}
return user
}
func (hsdb *HSDatabase) CreateUsersForTest(count int, namePrefix ...string) []*types.User {
if !testing.Testing() {
panic("CreateUsersForTest can only be called during tests")
}
prefix := "testuser"
if len(namePrefix) > 0 && namePrefix[0] != "" {
prefix = namePrefix[0]
}
users := make([]*types.User, count)
for i := range count {
name := prefix + "-" + strconv.Itoa(i)
users[i] = hsdb.CreateUserForTest(name)
}
return users
}
================================================
FILE: hscontrol/db/users_test.go
================================================
package db
import (
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
)
func TestCreateAndDestroyUser(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
user := db.CreateUserForTest("test")
assert.Equal(t, "test", user.Name)
users, err := db.ListUsers()
require.NoError(t, err)
assert.Len(t, users, 1)
err = db.DestroyUser(types.UserID(user.ID))
require.NoError(t, err)
_, err = db.GetUserByID(types.UserID(user.ID))
assert.Error(t, err)
}
func TestDestroyUserErrors(t *testing.T) {
tests := []struct {
name string
test func(*testing.T, *HSDatabase)
}{
{
name: "error_user_not_found",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
err := db.DestroyUser(9998)
assert.ErrorIs(t, err, ErrUserNotFound)
},
},
{
name: "success_deletes_preauthkeys",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user := db.CreateUserForTest("test")
pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
err = db.DestroyUser(types.UserID(user.ID))
require.NoError(t, err)
// Verify the preauth key was deleted.
var foundPak types.PreAuthKey
result := db.DB.First(&foundPak, "id = ?", pak.ID)
assert.ErrorIs(t, result.Error, gorm.ErrRecordNotFound)
},
},
{
name: "error_user_has_nodes",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
pakID := pak.ID
node := types.Node{
ID: 0,
Hostname: "testnode",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
AuthKeyID: &pakID,
}
trx := db.DB.Save(&node)
require.NoError(t, trx.Error)
err = db.DestroyUser(types.UserID(user.ID))
assert.ErrorIs(t, err, ErrUserStillHasNodes)
},
},
{
// https://github.com/juanfont/headscale/issues/3077
// Tagged nodes have user_id = NULL, so they do not block
// user deletion and are unaffected by ON DELETE CASCADE.
name: "success_user_only_has_tagged_nodes",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
// Create a tagged node with no user_id (the invariant).
node := types.Node{
ID: 0,
Hostname: "tagged-node",
RegisterMethod: util.RegisterMethodAuthKey,
Tags: []string{"tag:server"},
}
trx := db.DB.Save(&node)
require.NoError(t, trx.Error)
err = db.DestroyUser(types.UserID(user.ID))
require.NoError(t, err)
// User is gone.
_, err = db.GetUserByID(types.UserID(user.ID))
require.ErrorIs(t, err, ErrUserNotFound)
// Tagged node survives.
var survivingNode types.Node
result := db.DB.First(&survivingNode, "id = ?", node.ID)
require.NoError(t, result.Error)
assert.Nil(t, survivingNode.UserID)
assert.Equal(t, []string{"tag:server"}, survivingNode.Tags)
},
},
{
// A user who has both tagged and user-owned nodes cannot
// be deleted; the user-owned nodes still block deletion.
name: "error_user_has_tagged_and_owned_nodes",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
// Tagged node: no user_id.
taggedNode := types.Node{
ID: 0,
Hostname: "tagged-node",
RegisterMethod: util.RegisterMethodAuthKey,
Tags: []string{"tag:server"},
}
trx := db.DB.Save(&taggedNode)
require.NoError(t, trx.Error)
// User-owned node: has user_id.
ownedNode := types.Node{
ID: 0,
Hostname: "owned-node",
UserID: &user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
}
trx = db.DB.Save(&ownedNode)
require.NoError(t, trx.Error)
err = db.DestroyUser(types.UserID(user.ID))
require.ErrorIs(t, err, ErrUserStillHasNodes)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
tt.test(t, db)
})
}
}
func TestRenameUser(t *testing.T) {
tests := []struct {
name string
test func(*testing.T, *HSDatabase)
}{
{
name: "success_rename",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
userTest := db.CreateUserForTest("test")
assert.Equal(t, "test", userTest.Name)
users, err := db.ListUsers()
require.NoError(t, err)
assert.Len(t, users, 1)
err = db.RenameUser(types.UserID(userTest.ID), "test-renamed")
require.NoError(t, err)
users, err = db.ListUsers(&types.User{Name: "test"})
require.NoError(t, err)
assert.Empty(t, users)
users, err = db.ListUsers(&types.User{Name: "test-renamed"})
require.NoError(t, err)
assert.Len(t, users, 1)
},
},
{
name: "error_user_not_found",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
err := db.RenameUser(99988, "test")
assert.ErrorIs(t, err, ErrUserNotFound)
},
},
{
name: "error_duplicate_name",
test: func(t *testing.T, db *HSDatabase) {
t.Helper()
userTest := db.CreateUserForTest("test")
userTest2 := db.CreateUserForTest("test2")
assert.Equal(t, "test", userTest.Name)
assert.Equal(t, "test2", userTest2.Name)
err := db.RenameUser(types.UserID(userTest2.ID), "test")
require.Error(t, err)
assert.Contains(t, err.Error(), "UNIQUE constraint failed")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db, err := newSQLiteTestDB()
require.NoError(t, err)
tt.test(t, db)
})
}
}
================================================
FILE: hscontrol/db/versioncheck.go
================================================
package db
import (
"errors"
"fmt"
"strconv"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
)
var errVersionUpgrade = errors.New("version upgrade not supported")
var errVersionDowngrade = errors.New("version downgrade not supported")
var errVersionMajorChange = errors.New("major version change not supported")
var errVersionParse = errors.New("cannot parse version")
var errVersionFormat = errors.New(
"version does not follow semver major.minor.patch format",
)
// DatabaseVersion tracks the headscale version that last
// successfully started against this database.
// It is a single-row table (ID is always 1).
type DatabaseVersion struct {
ID uint `gorm:"primaryKey"`
Version string `gorm:"not null"`
UpdatedAt time.Time
}
// semver holds parsed major.minor.patch components.
type semver struct {
Major int
Minor int
Patch int
}
func (s semver) String() string {
return fmt.Sprintf("v%d.%d.%d", s.Major, s.Minor, s.Patch)
}
// parseVersion parses a version string like "v0.25.0", "0.25.1",
// "v0.25.0-beta.1", or "v0.25.0-rc1+build123" into its major, minor,
// patch components. Pre-release and build metadata suffixes are stripped.
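// For example, "v0.26.1-beta.2" parses to semver{Major: 0, Minor: 26, Patch: 1}.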
func parseVersion(s string) (semver, error) {
if s == "" || s == "dev" {
return semver{}, fmt.Errorf("%q: %w", s, errVersionParse)
}
v := strings.TrimPrefix(s, "v")
// Strip pre-release suffix (everything after first '-')
// and build metadata (everything after first '+').
if idx := strings.IndexAny(v, "-+"); idx != -1 {
v = v[:idx]
}
parts := strings.Split(v, ".")
if len(parts) != 3 {
return semver{}, fmt.Errorf("%q: %w", s, errVersionFormat)
}
major, err := strconv.Atoi(parts[0])
if err != nil {
return semver{}, fmt.Errorf("invalid major version in %q: %w", s, err)
}
minor, err := strconv.Atoi(parts[1])
if err != nil {
return semver{}, fmt.Errorf("invalid minor version in %q: %w", s, err)
}
patch, err := strconv.Atoi(parts[2])
if err != nil {
return semver{}, fmt.Errorf("invalid patch version in %q: %w", s, err)
}
return semver{Major: major, Minor: minor, Patch: patch}, nil
}
// ensureDatabaseVersionTable creates the database_versions table if it
// does not already exist. Uses GORM AutoMigrate to handle dialect
// differences between SQLite (datetime) and PostgreSQL (timestamp).
// This runs before gormigrate migrations.
func ensureDatabaseVersionTable(db *gorm.DB) error {
err := db.AutoMigrate(&DatabaseVersion{})
if err != nil {
return fmt.Errorf("creating database version table: %w", err)
}
return nil
}
// getDatabaseVersion reads the stored version from the database.
// Returns an empty string if no version has been stored yet.
func getDatabaseVersion(db *gorm.DB) (string, error) {
var version string
result := db.Raw("SELECT version FROM database_versions WHERE id = 1").Scan(&version)
if result.Error != nil {
return "", fmt.Errorf("reading database version: %w", result.Error)
}
if result.RowsAffected == 0 {
return "", nil
}
return version, nil
}
// setDatabaseVersion upserts the version row in the database.
func setDatabaseVersion(db *gorm.DB, version string) error {
now := time.Now().UTC()
// Try update first, then insert if no rows affected.
result := db.Exec(
"UPDATE database_versions SET version = ?, updated_at = ? WHERE id = 1",
version, now,
)
if result.Error != nil {
return fmt.Errorf("updating database version: %w", result.Error)
}
if result.RowsAffected == 0 {
err := db.Exec(
"INSERT INTO database_versions (id, version, updated_at) VALUES (1, ?, ?)",
version, now,
).Error
if err != nil {
return fmt.Errorf("inserting database version: %w", err)
}
}
return nil
}
// isDev reports whether a version string represents a development build
// that should skip version checking.
func isDev(version string) bool {
return version == "" || version == "dev" || version == "(devel)"
}
// checkVersionUpgradePath verifies that the running headscale version
// is compatible with the version that last used this database.
//
// Rules:
// - If the running binary has no version ("dev" or empty), warn and skip.
// - If no version is stored in the database, allow (first run with this feature).
// - If the stored version is "dev", allow (previous run was unversioned).
// - Same minor version: always allowed (patch changes in either direction).
// - Single minor version upgrade (stored.minor+1 == current.minor): allowed.
// - Multi-minor upgrade or any minor downgrade: blocked with a fatal error.
func checkVersionUpgradePath(db *gorm.DB) error {
err := ensureDatabaseVersionTable(db)
if err != nil {
return err
}
currentVersion := types.GetVersionInfo().Version
// Running binary has no real version — skip the check but
// preserve whatever version is already stored.
if isDev(currentVersion) {
storedVersion, err := getDatabaseVersion(db)
if err != nil {
return err
}
if storedVersion != "" && !isDev(storedVersion) {
log.Warn().
Str("database_version", storedVersion).
Msg("running a development build of headscale without a version number, " +
"database version check is skipped, the stored database version is preserved")
}
return nil
}
storedVersion, err := getDatabaseVersion(db)
if err != nil {
return err
}
// No stored version — first run with this feature. Allow startup;
// the version will be stored after migrations succeed.
if storedVersion == "" {
return nil
}
// Previous run was an unversioned build — no meaningful comparison.
if isDev(storedVersion) {
return nil
}
current, err := parseVersion(currentVersion)
if err != nil {
return fmt.Errorf("parsing current version: %w", err)
}
stored, err := parseVersion(storedVersion)
if err != nil {
return fmt.Errorf("parsing stored database version: %w", err)
}
if current.Major != stored.Major {
return fmt.Errorf(
"headscale version %s cannot be used with a database last used by %s: %w",
currentVersion, storedVersion, errVersionMajorChange,
)
}
minorDiff := current.Minor - stored.Minor
switch {
case minorDiff == 0:
// Same minor version — patch changes are always fine.
return nil
case minorDiff == 1:
// Single minor version upgrade — allowed.
return nil
case minorDiff > 1:
// Multi-minor upgrade — blocked.
return fmt.Errorf(
"headscale version %s cannot be used with a database last used by %s, "+
"upgrading more than one minor version at a time is not supported, "+
"please upgrade to the latest v%d.%d.x release first, then to %s, "+
"release page: https://github.com/juanfont/headscale/releases: %w",
currentVersion, storedVersion,
stored.Major, stored.Minor+1,
current.String(),
errVersionUpgrade,
)
default:
// minorDiff < 0 — any minor downgrade is blocked.
return fmt.Errorf(
"headscale version %s cannot be used with a database last used by %s, "+
"downgrading to a previous minor version is not supported, "+
"release page: https://github.com/juanfont/headscale/releases: %w",
currentVersion, storedVersion,
errVersionDowngrade,
)
}
}
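// Illustration (editor's addition, not part of the original source): under
// the rules above, a database last started by v0.25.3 may move directly to
// any v0.25.x or v0.26.x release, but reaching v0.27.0 requires an
// intermediate start on a v0.26.x release:
//
//	v0.25.3 -> v0.26.1 -> v0.27.0   // two single-minor upgrades, allowed
//	v0.25.3 -> v0.27.0              // blocked with errVersionUpgrade
//	v0.26.1 -> v0.25.3              // blocked with errVersionDowngrade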
================================================
FILE: hscontrol/db/versioncheck_test.go
================================================
package db
import (
"fmt"
"testing"
"github.com/glebarez/sqlite"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
)
func TestParseVersion(t *testing.T) {
tests := []struct {
input string
want semver
wantErr bool
}{
{input: "v0.25.0", want: semver{0, 25, 0}},
{input: "0.25.0", want: semver{0, 25, 0}},
{input: "v0.25.1", want: semver{0, 25, 1}},
{input: "v1.0.0", want: semver{1, 0, 0}},
{input: "v0.28.3", want: semver{0, 28, 3}},
// Pre-release suffixes stripped
{input: "v0.25.0-beta.1", want: semver{0, 25, 0}},
{input: "v0.25.0-rc1", want: semver{0, 25, 0}},
// Build metadata stripped
{input: "v0.25.0+build123", want: semver{0, 25, 0}},
{input: "v0.25.0-beta.1+build123", want: semver{0, 25, 0}},
// Invalid inputs
{input: "", wantErr: true},
{input: "dev", wantErr: true},
{input: "vfoo.bar.baz", wantErr: true},
{input: "v1.2", wantErr: true},
{input: "v1", wantErr: true},
{input: "not-a-version", wantErr: true},
{input: "v1.2.3.4", wantErr: true},
{input: "(devel)", wantErr: true},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
got, err := parseVersion(tt.input)
if tt.wantErr {
assert.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, tt.want, got)
}
})
}
}
func TestSemverString(t *testing.T) {
s := semver{0, 28, 3}
assert.Equal(t, "v0.28.3", s.String())
}
func TestIsDev(t *testing.T) {
assert.True(t, isDev(""))
assert.True(t, isDev("dev"))
assert.True(t, isDev("(devel)"))
assert.False(t, isDev("v0.28.0"))
assert.False(t, isDev("0.28.0"))
}
// versionTestDB creates an in-memory SQLite database with the
// database_versions table already bootstrapped.
func versionTestDB(t *testing.T) *gorm.DB {
t.Helper()
db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
require.NoError(t, err)
err = ensureDatabaseVersionTable(db)
require.NoError(t, err)
return db
}
func TestSetAndGetDatabaseVersion(t *testing.T) {
db := versionTestDB(t)
// Initially empty
v, err := getDatabaseVersion(db)
require.NoError(t, err)
assert.Empty(t, v)
// Set a version
err = setDatabaseVersion(db, "v0.27.0")
require.NoError(t, err)
v, err = getDatabaseVersion(db)
require.NoError(t, err)
assert.Equal(t, "v0.27.0", v)
// Update the version (upsert)
err = setDatabaseVersion(db, "v0.28.0")
require.NoError(t, err)
v, err = getDatabaseVersion(db)
require.NoError(t, err)
assert.Equal(t, "v0.28.0", v)
}
func TestEnsureDatabaseVersionTableIdempotent(t *testing.T) {
db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
require.NoError(t, err)
// Call twice — should not error
err = ensureDatabaseVersionTable(db)
require.NoError(t, err)
err = ensureDatabaseVersionTable(db)
require.NoError(t, err)
}
// TestCheckVersionUpgradePathDirect tests the version comparison logic
// by directly seeding the database, bypassing types.GetVersionInfo()
// (which returns "dev" in test environments and cannot be overridden).
func TestCheckVersionUpgradePathDirect(t *testing.T) {
tests := []struct {
name string
storedVersion string // empty means no row stored
currentVersion string
wantErr bool
errContains string
}{
// Fresh database (no stored version)
{
name: "fresh db allows any version",
storedVersion: "",
currentVersion: "v0.28.0",
},
// Stored is dev
{
name: "real version over dev db",
storedVersion: "dev",
currentVersion: "v0.28.0",
},
{
name: "devel version in db",
storedVersion: "(devel)",
currentVersion: "v0.28.0",
},
// Same version
{
name: "same version",
storedVersion: "v0.27.0",
currentVersion: "v0.27.0",
},
// Patch changes within same minor
{
name: "patch upgrade",
storedVersion: "v0.27.0",
currentVersion: "v0.27.3",
},
{
name: "patch downgrade within same minor",
storedVersion: "v0.27.3",
currentVersion: "v0.27.0",
},
// Single minor upgrade
{
name: "single minor upgrade",
storedVersion: "v0.27.0",
currentVersion: "v0.28.0",
},
{
name: "single minor upgrade with different patches",
storedVersion: "v0.27.3",
currentVersion: "v0.28.1",
},
// Multi-minor upgrade (blocked)
{
name: "two minor versions ahead",
storedVersion: "v0.25.0",
currentVersion: "v0.27.0",
wantErr: true,
errContains: "latest v0.26.x",
},
{
name: "three minor versions ahead",
storedVersion: "v0.25.0",
currentVersion: "v0.28.0",
wantErr: true,
errContains: "latest v0.26.x",
},
// Minor downgrades (blocked)
{
name: "single minor downgrade",
storedVersion: "v0.28.0",
currentVersion: "v0.27.0",
wantErr: true,
errContains: "downgrading",
},
{
name: "multi minor downgrade",
storedVersion: "v0.28.0",
currentVersion: "v0.25.0",
wantErr: true,
errContains: "downgrading",
},
// Major version mismatch
{
name: "major version upgrade",
storedVersion: "v0.28.0",
currentVersion: "v1.0.0",
wantErr: true,
errContains: "major version",
},
{
name: "major version downgrade",
storedVersion: "v1.0.0",
currentVersion: "v0.28.0",
wantErr: true,
errContains: "major version",
},
// Pre-release versions
{
name: "pre-release single minor upgrade",
storedVersion: "v0.27.0",
currentVersion: "v0.28.0-beta.1",
},
{
name: "pre-release multi minor upgrade blocked",
storedVersion: "v0.25.0",
currentVersion: "v0.27.0-rc1",
wantErr: true,
errContains: "latest v0.26.x",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db := versionTestDB(t)
// Seed the stored version if provided
if tt.storedVersion != "" {
err := setDatabaseVersion(db, tt.storedVersion)
require.NoError(t, err)
}
err := checkVersionUpgradePathFromVersions(db, tt.currentVersion)
if tt.wantErr {
require.Error(t, err)
if tt.errContains != "" {
assert.Contains(t, err.Error(), tt.errContains)
}
} else {
assert.NoError(t, err)
}
})
}
}
// checkVersionUpgradePathFromVersions is a test helper that runs the
// version comparison logic with a specific currentVersion string,
// bypassing types.GetVersionInfo(). It replicates the logic from
// checkVersionUpgradePath but accepts the version as a parameter.
func checkVersionUpgradePathFromVersions(db *gorm.DB, currentVersion string) error {
if isDev(currentVersion) {
return nil
}
storedVersion, err := getDatabaseVersion(db)
if err != nil {
return err
}
if storedVersion == "" {
return nil
}
if isDev(storedVersion) {
return nil
}
current, err := parseVersion(currentVersion)
if err != nil {
return err
}
stored, err := parseVersion(storedVersion)
if err != nil {
return err
}
if current.Major != stored.Major {
return errVersionMajorChange
}
minorDiff := current.Minor - stored.Minor
switch {
case minorDiff == 0:
return nil
case minorDiff == 1:
return nil
case minorDiff > 1:
return fmt.Errorf(
"please upgrade to the latest v%d.%d.x release first: %w",
stored.Major, stored.Minor+1,
errVersionUpgrade,
)
default:
return fmt.Errorf("downgrading: %w", errVersionDowngrade)
}
}
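// A minimal sketch (editor's addition, not part of the original source) of
// how parseVersion normalises release strings: the "v" prefix is optional,
// and pre-release and build-metadata suffixes are stripped before the
// major.minor.patch components are compared.
func Example_parseVersion() {
v, _ := parseVersion("v0.27.2-beta.1+build42")
fmt.Println(v)
// Output: v0.27.2
}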
================================================
FILE: hscontrol/debug.go
================================================
package hscontrol
import (
"encoding/json"
"fmt"
"net/http"
"strings"
"github.com/arl/statsviz"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/prometheus/client_golang/prometheus/promhttp"
"tailscale.com/tsweb"
)
func (h *Headscale) debugHTTPServer() *http.Server {
debugMux := http.NewServeMux()
debug := tsweb.Debugger(debugMux)
// State overview endpoint
debug.Handle("overview", "State overview", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
overview := h.state.DebugOverviewJSON()
overviewJSON, err := json.MarshalIndent(overview, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(overviewJSON)
} else {
// Default to text/plain for backward compatibility
overview := h.state.DebugOverview()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(overview))
}
}))
// Configuration endpoint
debug.Handle("config", "Current configuration", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
config := h.state.DebugConfig()
configJSON, err := json.MarshalIndent(config, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(configJSON)
}))
// Policy endpoint
debug.Handle("policy", "Current policy", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
policy, err := h.state.DebugPolicy()
if err != nil {
httpError(w, err)
return
}
// Policy data is HuJSON, which is a superset of JSON
// Set content type based on Accept header preference
acceptHeader := r.Header.Get("Accept")
if strings.Contains(acceptHeader, "application/json") {
w.Header().Set("Content-Type", "application/json")
} else {
w.Header().Set("Content-Type", "text/plain")
}
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(policy))
}))
// Filter rules endpoint
debug.Handle("filter", "Current filter rules", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
filter, err := h.state.DebugFilter()
if err != nil {
httpError(w, err)
return
}
filterJSON, err := json.MarshalIndent(filter, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(filterJSON)
}))
// SSH policies endpoint
debug.Handle("ssh", "SSH policies per node", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
sshPolicies := h.state.DebugSSHPolicies()
sshJSON, err := json.MarshalIndent(sshPolicies, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(sshJSON)
}))
// DERP map endpoint
debug.Handle("derp", "DERP map configuration", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
derpInfo := h.state.DebugDERPJSON()
derpJSON, err := json.MarshalIndent(derpInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(derpJSON)
} else {
// Default to text/plain for backward compatibility
derpInfo := h.state.DebugDERPMap()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(derpInfo))
}
}))
// NodeStore endpoint
debug.Handle("nodestore", "NodeStore information", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
nodeStoreNodes := h.state.DebugNodeStoreJSON()
nodeStoreJSON, err := json.MarshalIndent(nodeStoreNodes, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(nodeStoreJSON)
} else {
// Default to text/plain for backward compatibility
nodeStoreInfo := h.state.DebugNodeStore()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(nodeStoreInfo))
}
}))
// Registration cache endpoint
debug.Handle("registration-cache", "Registration cache information", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
cacheInfo := h.state.DebugRegistrationCache()
cacheJSON, err := json.MarshalIndent(cacheInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(cacheJSON)
}))
// Routes endpoint
debug.Handle("routes", "Primary routes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
routes := h.state.DebugRoutes()
routesJSON, err := json.MarshalIndent(routes, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(routesJSON)
} else {
// Default to text/plain for backward compatibility
routes := h.state.DebugRoutesString()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(routes))
}
}))
// Policy manager endpoint
debug.Handle("policy-manager", "Policy manager state", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
policyManagerInfo := h.state.DebugPolicyManagerJSON()
policyManagerJSON, err := json.MarshalIndent(policyManagerInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(policyManagerJSON)
} else {
// Default to text/plain for backward compatibility
policyManagerInfo := h.state.DebugPolicyManager()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(policyManagerInfo))
}
}))
debug.Handle("mapresponses", "Map responses for all nodes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
res, err := h.mapBatcher.DebugMapResponses()
if err != nil {
httpError(w, err)
return
}
if res == nil {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("HEADSCALE_DEBUG_DUMP_MAPRESPONSE_PATH not set"))
return
}
resJSON, err := json.MarshalIndent(res, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(resJSON)
}))
// Batcher endpoint
debug.Handle("batcher", "Batcher connected nodes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
batcherInfo := h.debugBatcherJSON()
batcherJSON, err := json.MarshalIndent(batcherInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, _ = w.Write(batcherJSON)
} else {
// Default to text/plain for backward compatibility
batcherInfo := h.debugBatcher()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(batcherInfo))
}
}))
err := statsviz.Register(debugMux)
if err == nil {
debug.URL("/debug/statsviz", "Statsviz (visualise go metrics)")
}
debug.URL("/metrics", "Prometheus metrics")
debugMux.Handle("/metrics", promhttp.Handler())
debugHTTPServer := &http.Server{
Addr: h.cfg.MetricsAddr,
Handler: debugMux,
ReadTimeout: types.HTTPTimeout,
WriteTimeout: 0,
}
return debugHTTPServer
}
// debugBatcher returns debug information about the batcher's connected nodes.
func (h *Headscale) debugBatcher() string {
var sb strings.Builder
sb.WriteString("=== Batcher Connected Nodes ===\n\n")
totalNodes := 0
connectedCount := 0
// Collect nodes and sort them by ID
type nodeStatus struct {
id types.NodeID
connected bool
activeConnections int
}
var nodes []nodeStatus
debugInfo := h.mapBatcher.Debug()
for nodeID, info := range debugInfo {
nodes = append(nodes, nodeStatus{
id: nodeID,
connected: info.Connected,
activeConnections: info.ActiveConnections,
})
totalNodes++
if info.Connected {
connectedCount++
}
}
// Sort by node ID
sort.Slice(nodes, func(i, j int) bool {
return nodes[i].id < nodes[j].id
})
// Output sorted nodes
for _, node := range nodes {
status := "disconnected"
if node.connected {
status = "connected"
}
if node.activeConnections > 0 {
sb.WriteString(fmt.Sprintf("Node %d:\t%s (%d connections)\n", node.id, status, node.activeConnections))
} else {
sb.WriteString(fmt.Sprintf("Node %d:\t%s\n", node.id, status))
}
}
sb.WriteString(fmt.Sprintf("\nSummary: %d connected, %d total\n", connectedCount, totalNodes))
return sb.String()
}
// DebugBatcherInfo represents batcher connection information in a structured format.
type DebugBatcherInfo struct {
ConnectedNodes map[string]DebugBatcherNodeInfo `json:"connected_nodes"` // NodeID -> node connection info
TotalNodes int `json:"total_nodes"`
}
// DebugBatcherNodeInfo represents connection information for a single node.
type DebugBatcherNodeInfo struct {
Connected bool `json:"connected"`
ActiveConnections int `json:"active_connections"`
}
// debugBatcherJSON returns structured debug information about the batcher's connected nodes.
func (h *Headscale) debugBatcherJSON() DebugBatcherInfo {
info := DebugBatcherInfo{
ConnectedNodes: make(map[string]DebugBatcherNodeInfo),
TotalNodes: 0,
}
debugInfo := h.mapBatcher.Debug()
for nodeID, debugData := range debugInfo {
info.ConnectedNodes[fmt.Sprintf("%d", nodeID)] = DebugBatcherNodeInfo{
Connected: debugData.Connected,
ActiveConnections: debugData.ActiveConnections,
}
info.TotalNodes++
}
return info
}
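// Usage sketch (editor's addition): the handlers above content-negotiate on
// the Accept header, so one endpoint serves both formats. The address below
// is illustrative; substitute the configured metrics/debug listen address.
//
//	curl http://127.0.0.1:9090/debug/overview
//	curl -H "Accept: application/json" http://127.0.0.1:9090/debug/overview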
================================================
FILE: hscontrol/derp/derp.go
================================================
package derp
import (
"cmp"
"context"
"encoding/json"
"hash/crc64"
"io"
"maps"
"math/rand"
"net/http"
"net/url"
"os"
"reflect"
"slices"
"sync"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/spf13/viper"
"gopkg.in/yaml.v3"
"tailscale.com/tailcfg"
)
func loadDERPMapFromPath(path string) (*tailcfg.DERPMap, error) {
derpFile, err := os.Open(path)
if err != nil {
return nil, err
}
defer derpFile.Close()
var derpMap tailcfg.DERPMap
b, err := io.ReadAll(derpFile)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(b, &derpMap)
return &derpMap, err
}
func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
ctx, cancel := context.WithTimeout(context.Background(), types.HTTPTimeout)
defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, addr.String(), nil)
if err != nil {
return nil, err
}
client := http.Client{
Timeout: types.HTTPTimeout,
}
resp, err := client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
var derpMap tailcfg.DERPMap
err = json.Unmarshal(body, &derpMap)
return &derpMap, err
}
// mergeDERPMaps naively merges a list of DERPMaps into a single
// DERPMap. It only looks at the Regions map, which is keyed by an
// integer region ID. If a region exists in two of the given DERPMaps,
// the region from the _last_ DERPMap is preserved.
// An empty DERPMap list results in a DERPMap with no regions.
func mergeDERPMaps(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
result := tailcfg.DERPMap{
OmitDefaultRegions: false,
Regions: map[int]*tailcfg.DERPRegion{},
}
for _, derpMap := range derpMaps {
maps.Copy(result.Regions, derpMap.Regions)
}
for id, region := range result.Regions {
if region == nil {
delete(result.Regions, id)
}
}
return &result
}
func GetDERPMap(cfg types.DERPConfig) (*tailcfg.DERPMap, error) {
var derpMaps []*tailcfg.DERPMap
if cfg.DERPMap != nil {
derpMaps = append(derpMaps, cfg.DERPMap)
}
for _, addr := range cfg.URLs {
derpMap, err := loadDERPMapFromURL(addr)
if err != nil {
return nil, err
}
derpMaps = append(derpMaps, derpMap)
}
for _, path := range cfg.Paths {
derpMap, err := loadDERPMapFromPath(path)
if err != nil {
return nil, err
}
derpMaps = append(derpMaps, derpMap)
}
derpMap := mergeDERPMaps(derpMaps)
shuffleDERPMap(derpMap)
return derpMap, nil
}
func shuffleDERPMap(dm *tailcfg.DERPMap) {
if dm == nil || len(dm.Regions) == 0 {
return
}
// Collect region IDs and sort them to ensure deterministic iteration order.
// Map iteration order is non-deterministic in Go, which would cause the
// shuffle to be non-deterministic even with a fixed seed.
ids := make([]int, 0, len(dm.Regions))
for id := range dm.Regions {
ids = append(ids, id)
}
slices.Sort(ids)
for _, id := range ids {
region := dm.Regions[id]
if len(region.Nodes) == 0 {
continue
}
dm.Regions[id] = shuffleRegionNoClone(region)
}
}
var crc64Table = crc64.MakeTable(crc64.ISO)
var (
derpRandomOnce sync.Once
derpRandomInst *rand.Rand
derpRandomMu sync.Mutex
)
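// derpRandom lazily builds the rand.Rand used for shuffling DERP nodes.
// The seed is derived from the configured dns.base_domain (falling back to
// the current time), so the node order is stable across restarts of a
// deployment but differs between deployments, spreading clients across
// DERP nodes. (Editor's note: rationale inferred from the seeding below.)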
func derpRandom() *rand.Rand {
derpRandomMu.Lock()
defer derpRandomMu.Unlock()
derpRandomOnce.Do(func() {
seed := cmp.Or(viper.GetString("dns.base_domain"), time.Now().String())
derpRandomInst = rand.New(rand.NewSource(int64(crc64.Checksum([]byte(seed), crc64Table)))) //nolint:gosec // weak random and uint64 conversion are fine for DERP scrambling
})
return derpRandomInst
}
func resetDerpRandomForTesting() {
derpRandomMu.Lock()
defer derpRandomMu.Unlock()
derpRandomOnce = sync.Once{}
derpRandomInst = nil
}
func shuffleRegionNoClone(r *tailcfg.DERPRegion) *tailcfg.DERPRegion {
derpRandom().Shuffle(len(r.Nodes), reflect.Swapper(r.Nodes))
return r
}
================================================
FILE: hscontrol/derp/derp_test.go
================================================
package derp
import (
"testing"
"github.com/google/go-cmp/cmp"
"github.com/spf13/viper"
"tailscale.com/tailcfg"
)
func TestShuffleDERPMapDeterministic(t *testing.T) {
tests := []struct {
name string
baseDomain string
derpMap *tailcfg.DERPMap
expected *tailcfg.DERPMap
}{
{
name: "single region with 4 nodes",
baseDomain: "test1.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "nyc",
RegionName: "New York City",
Nodes: []*tailcfg.DERPNode{
{Name: "1f", RegionID: 1, HostName: "derp1f.tailscale.com"},
{Name: "1g", RegionID: 1, HostName: "derp1g.tailscale.com"},
{Name: "1h", RegionID: 1, HostName: "derp1h.tailscale.com"},
{Name: "1i", RegionID: 1, HostName: "derp1i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "nyc",
RegionName: "New York City",
Nodes: []*tailcfg.DERPNode{
{Name: "1g", RegionID: 1, HostName: "derp1g.tailscale.com"},
{Name: "1f", RegionID: 1, HostName: "derp1f.tailscale.com"},
{Name: "1i", RegionID: 1, HostName: "derp1i.tailscale.com"},
{Name: "1h", RegionID: 1, HostName: "derp1h.tailscale.com"},
},
},
},
},
},
{
name: "multiple regions with nodes",
baseDomain: "test2.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
10: {
RegionID: 10,
RegionCode: "sea",
RegionName: "Seattle",
Nodes: []*tailcfg.DERPNode{
{Name: "10b", RegionID: 10, HostName: "derp10b.tailscale.com"},
{Name: "10c", RegionID: 10, HostName: "derp10c.tailscale.com"},
{Name: "10d", RegionID: 10, HostName: "derp10d.tailscale.com"},
},
},
2: {
RegionID: 2,
RegionCode: "sfo",
RegionName: "San Francisco",
Nodes: []*tailcfg.DERPNode{
{Name: "2d", RegionID: 2, HostName: "derp2d.tailscale.com"},
{Name: "2e", RegionID: 2, HostName: "derp2e.tailscale.com"},
{Name: "2f", RegionID: 2, HostName: "derp2f.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
10: {
RegionID: 10,
RegionCode: "sea",
RegionName: "Seattle",
Nodes: []*tailcfg.DERPNode{
{Name: "10d", RegionID: 10, HostName: "derp10d.tailscale.com"},
{Name: "10c", RegionID: 10, HostName: "derp10c.tailscale.com"},
{Name: "10b", RegionID: 10, HostName: "derp10b.tailscale.com"},
},
},
2: {
RegionID: 2,
RegionCode: "sfo",
RegionName: "San Francisco",
Nodes: []*tailcfg.DERPNode{
{Name: "2d", RegionID: 2, HostName: "derp2d.tailscale.com"},
{Name: "2e", RegionID: 2, HostName: "derp2e.tailscale.com"},
{Name: "2f", RegionID: 2, HostName: "derp2f.tailscale.com"},
},
},
},
},
},
{
name: "large region with many nodes",
baseDomain: "test3.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
},
{
name: "same region different base domain",
baseDomain: "different.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
},
},
},
},
},
{
name: "same dataset with another base domain",
baseDomain: "another.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
},
{
name: "same dataset with yet another base domain",
baseDomain: "yetanother.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
viper.Set("dns.base_domain", tt.baseDomain)
defer viper.Reset()
resetDerpRandomForTesting()
testMap := tt.derpMap.View().AsStruct()
shuffleDERPMap(testMap)
if diff := cmp.Diff(tt.expected, testMap); diff != "" {
t.Errorf("Shuffled DERP map doesn't match expected (-expected +actual):\n%s", diff)
}
})
}
}
func TestShuffleDERPMapEdgeCases(t *testing.T) {
tests := []struct {
name string
derpMap *tailcfg.DERPMap
}{
{
name: "nil derp map",
derpMap: nil,
},
{
name: "empty derp map",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{},
},
},
{
name: "region with no nodes",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "empty",
RegionName: "Empty Region",
Nodes: []*tailcfg.DERPNode{},
},
},
},
},
{
name: "region with single node",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "single",
RegionName: "Single Node Region",
Nodes: []*tailcfg.DERPNode{
{Name: "1a", RegionID: 1, HostName: "derp1a.tailscale.com"},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
shuffleDERPMap(tt.derpMap)
})
}
}
func TestShuffleDERPMapWithoutBaseDomain(t *testing.T) {
viper.Reset()
resetDerpRandomForTesting()
derpMap := &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "test",
RegionName: "Test Region",
Nodes: []*tailcfg.DERPNode{
{Name: "1a", RegionID: 1, HostName: "derp1a.test.com"},
{Name: "1b", RegionID: 1, HostName: "derp1b.test.com"},
{Name: "1c", RegionID: 1, HostName: "derp1c.test.com"},
{Name: "1d", RegionID: 1, HostName: "derp1d.test.com"},
},
},
},
}
original := derpMap.View().AsStruct()
shuffleDERPMap(derpMap)
if len(derpMap.Regions) != 1 || len(derpMap.Regions[1].Nodes) != 4 {
t.Error("Shuffle corrupted DERP map structure")
}
originalNodes := make(map[string]bool)
for _, node := range original.Regions[1].Nodes {
originalNodes[node.Name] = true
}
shuffledNodes := make(map[string]bool)
for _, node := range derpMap.Regions[1].Nodes {
shuffledNodes[node.Name] = true
}
if diff := cmp.Diff(originalNodes, shuffledNodes); diff != "" {
t.Errorf("Shuffle changed node set (-original +shuffled):\n%s", diff)
}
}
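// A minimal sketch (editor's addition, not part of the original source)
// pinning down mergeDERPMaps' documented behaviour: per region ID the last
// map wins, and nil regions are dropped from the result.
func TestMergeDERPMapsLastWins(t *testing.T) {
first := &tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{
1: {RegionID: 1, RegionCode: "old"},
2: {RegionID: 2, RegionCode: "keep"},
}}
second := &tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{
1: {RegionID: 1, RegionCode: "new"}, // overrides region 1 from `first`
3: nil, // nil regions are removed by mergeDERPMaps
}}
merged := mergeDERPMaps([]*tailcfg.DERPMap{first, second})
if got := merged.Regions[1].RegionCode; got != "new" {
t.Errorf("expected region 1 from the last map, got %q", got)
}
if _, ok := merged.Regions[3]; ok {
t.Error("expected nil region 3 to be dropped")
}
if len(merged.Regions) != 2 {
t.Errorf("expected 2 regions, got %d", len(merged.Regions))
}
}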
================================================
FILE: hscontrol/derp/server/derp_server.go
================================================
package server
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/netip"
"net/url"
"strconv"
"strings"
"time"
"github.com/coder/websocket"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"tailscale.com/derp"
"tailscale.com/derp/derpserver"
"tailscale.com/envknob"
"tailscale.com/net/stun"
"tailscale.com/net/wsconn"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
// fastStartHeader is the header (with value "1") that signals to the HTTP
// server that the DERP HTTP client does not want the HTTP 101 response
// headers and it will begin writing & reading the DERP protocol immediately
// following its HTTP request.
const (
fastStartHeader = "Derp-Fast-Start"
DerpVerifyScheme = "headscale-derp-verify"
)
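// Sketch of the fast-start exchange (editor's addition; the header names
// are defined above, the rest is illustrative). A client sends
//
//	GET /derp HTTP/1.1
//	Upgrade: derp
//	Connection: Upgrade
//	Derp-Fast-Start: 1
//
// and, because fast start was requested, servePlain below skips writing the
// "HTTP/1.1 101 Switching Protocols" response; both sides then speak the
// DERP protocol directly over the hijacked connection.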
// debugUseDERPIP is a debug-only flag that causes the DERP server to resolve
// hostnames to IP addresses when generating the DERP region configuration.
// This is useful for integration testing where DNS resolution may be unreliable.
var debugUseDERPIP = envknob.Bool("HEADSCALE_DEBUG_DERP_USE_IP")
type DERPServer struct {
serverURL string
key key.NodePrivate
cfg *types.DERPConfig
tailscaleDERP *derpserver.Server
}
func NewDERPServer(
serverURL string,
derpKey key.NodePrivate,
cfg *types.DERPConfig,
) (*DERPServer, error) {
log.Trace().Caller().Msg("creating new embedded DERP server")
server := derpserver.New(derpKey, util.TSLogfWrapper()) // nolint // zerolinter complains
if cfg.ServerVerifyClients {
server.SetVerifyClientURL(DerpVerifyScheme + "://verify")
server.SetVerifyClientURLFailOpen(false)
}
return &DERPServer{
serverURL: serverURL,
key: derpKey,
cfg: cfg,
tailscaleDERP: server,
}, nil
}
func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
serverURL, err := url.Parse(d.serverURL)
if err != nil {
return tailcfg.DERPRegion{}, err
}
var (
host string
port int
portStr string
)
// Extract hostname and port from URL
host, portStr, err = net.SplitHostPort(serverURL.Host)
if err != nil {
if serverURL.Scheme == "https" {
host = serverURL.Host
port = 443
} else {
host = serverURL.Host
port = 80
}
} else {
port, err = strconv.Atoi(portStr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
}
// If debug flag is set, resolve hostname to IP address
if debugUseDERPIP {
ips, err := new(net.Resolver).LookupIPAddr(context.Background(), host)
if err != nil {
log.Error().Caller().Err(err).Msgf("failed to resolve DERP hostname %s to IP, using hostname", host)
} else if len(ips) > 0 {
// Use the first IP address
ipStr := ips[0].IP.String()
log.Info().Caller().Msgf("HEADSCALE_DEBUG_DERP_USE_IP: resolved %s to %s", host, ipStr)
host = ipStr
}
}
localDERPregion := tailcfg.DERPRegion{
RegionID: d.cfg.ServerRegionID,
RegionCode: d.cfg.ServerRegionCode,
RegionName: d.cfg.ServerRegionName,
Avoid: false,
Nodes: []*tailcfg.DERPNode{
{
Name: strconv.Itoa(d.cfg.ServerRegionID),
RegionID: d.cfg.ServerRegionID,
HostName: host,
DERPPort: port,
IPv4: d.cfg.IPv4,
IPv6: d.cfg.IPv6,
},
},
}
_, portSTUNStr, err := net.SplitHostPort(d.cfg.STUNAddr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
portSTUN, err := strconv.Atoi(portSTUNStr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
localDERPregion.Nodes[0].STUNPort = portSTUN
log.Info().Caller().Msgf("derp region: %+v", localDERPregion)
log.Info().Caller().Msgf("derp nodes[0]: %+v", localDERPregion.Nodes[0])
return localDERPregion, nil
}
func (d *DERPServer) DERPHandler(
writer http.ResponseWriter,
req *http.Request,
) {
log.Trace().Caller().Msgf("/derp request from %v", req.RemoteAddr)
upgrade := strings.ToLower(req.Header.Get("Upgrade"))
if upgrade != "websocket" && upgrade != "derp" {
if upgrade != "" {
log.Warn().
Caller().
Msg("No Upgrade header in DERP server request. If headscale is behind a reverse proxy, make sure it is configured to pass WebSockets through.")
}
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusUpgradeRequired)
_, err := writer.Write([]byte("DERP requires connection upgrade"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
return
}
if strings.Contains(req.Header.Get("Sec-Websocket-Protocol"), "derp") {
d.serveWebsocket(writer, req)
} else {
d.servePlain(writer, req)
}
}
func (d *DERPServer) serveWebsocket(writer http.ResponseWriter, req *http.Request) {
websocketConn, err := websocket.Accept(writer, req, &websocket.AcceptOptions{
Subprotocols: []string{"derp"},
OriginPatterns: []string{"*"},
// Disable compression because DERP transmits WireGuard messages that
// are not compressible.
// Additionally, Safari has a broken implementation of compression
// (see https://github.com/nhooyr/websocket/issues/218) that makes
// enabling it actively harmful.
CompressionMode: websocket.CompressionDisabled,
})
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to upgrade websocket request")
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusInternalServerError)
_, err = writer.Write([]byte("Failed to upgrade websocket request"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
return
}
defer websocketConn.Close(websocket.StatusInternalError, "closing")
if websocketConn.Subprotocol() != "derp" {
websocketConn.Close(websocket.StatusPolicyViolation, "client must speak the derp subprotocol")
return
}
wc := wsconn.NetConn(req.Context(), websocketConn, websocket.MessageBinary, req.RemoteAddr)
brw := bufio.NewReadWriter(bufio.NewReader(wc), bufio.NewWriter(wc))
d.tailscaleDERP.Accept(req.Context(), wc, brw, req.RemoteAddr)
}
func (d *DERPServer) servePlain(writer http.ResponseWriter, req *http.Request) {
fastStart := req.Header.Get(fastStartHeader) == "1"
hijacker, ok := writer.(http.Hijacker)
if !ok {
log.Error().Caller().Msg("derp requires Hijacker interface from Gin")
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusInternalServerError)
_, err := writer.Write([]byte("HTTP does not support general TCP support"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
return
}
netConn, conn, err := hijacker.Hijack()
if err != nil {
log.Error().Caller().Err(err).Msgf("hijack failed")
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusInternalServerError)
_, err = writer.Write([]byte("HTTP does not support general TCP support"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
return
}
log.Trace().Caller().Msgf("hijacked connection from %v", req.RemoteAddr)
if !fastStart {
pubKey := d.key.Public()
pubKeyStr, _ := pubKey.MarshalText() //nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+
"Connection: Upgrade\r\n"+
"Derp-Version: %v\r\n"+
"Derp-Public-Key: %s\r\n\r\n",
derp.ProtocolVersion,
string(pubKeyStr))
}
d.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())
}
// DERPProbeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries.
func DERPProbeHandler(
writer http.ResponseWriter,
req *http.Request,
) {
switch req.Method {
case http.MethodHead, http.MethodGet:
writer.Header().Set("Access-Control-Allow-Origin", "*")
writer.WriteHeader(http.StatusOK)
default:
writer.WriteHeader(http.StatusMethodNotAllowed)
_, err := writer.Write([]byte("bogus probe method"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
}
}
// DERPBootstrapDNSHandler implements the /bootstrap-dns endpoint
// Described in https://github.com/tailscale/tailscale/issues/1405,
// this endpoint provides a way to help a client when it fails to start up
// because its DNS are broken.
// The initial implementation is here https://github.com/tailscale/tailscale/pull/1406
// They have a cache, but it is not clear if that is really necessary at Headscale, uh, scale.
// An example response can be seen at https://derp.tailscale.com/bootstrap-dns
// The coordination server is included automatically, since the local DERP uses the same DNS name as d.serverURL.
func DERPBootstrapDNSHandler(
derpMap tailcfg.DERPMapView,
) func(http.ResponseWriter, *http.Request) {
return func(
writer http.ResponseWriter,
req *http.Request,
) {
dnsEntries := make(map[string][]net.IP)
resolvCtx, cancel := context.WithTimeout(req.Context(), time.Minute)
defer cancel()
var resolver net.Resolver
for _, region := range derpMap.Regions().All() { //nolint:unqueryvet // not SQLBoiler, tailcfg iterator
for _, node := range region.Nodes().All() { //nolint:unqueryvet // not SQLBoiler, tailcfg iterator
addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName())
if err != nil {
log.Trace().
Caller().
Err(err).
Msgf("bootstrap DNS lookup failed %q", node.HostName())
continue
}
dnsEntries[node.HostName()] = addrs
}
}
writer.Header().Set("Content-Type", "application/json")
writer.WriteHeader(http.StatusOK)
err := json.NewEncoder(writer).Encode(dnsEntries)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
}
}
// ServeSTUN starts a STUN server on the configured addr.
func (d *DERPServer) ServeSTUN() {
packetConn, err := new(net.ListenConfig).ListenPacket(context.Background(), "udp", d.cfg.STUNAddr)
if err != nil {
log.Fatal().Msgf("failed to open STUN listener: %v", err)
}
log.Info().Msgf("stun server started at %s", packetConn.LocalAddr())
udpConn, ok := packetConn.(*net.UDPConn)
if !ok {
log.Fatal().Msg("stun listener is not a UDP listener")
}
serverSTUNListener(context.Background(), udpConn)
}
func serverSTUNListener(ctx context.Context, packetConn *net.UDPConn) {
var (
buf [64 << 10]byte
bytesRead int
udpAddr *net.UDPAddr
err error
)
for {
bytesRead, udpAddr, err = packetConn.ReadFromUDP(buf[:])
if err != nil {
if ctx.Err() != nil {
return
}
log.Error().Caller().Err(err).Msgf("stun ReadFrom")
// Rate limit error logging - wait before retrying, but respect context cancellation
select {
case <-ctx.Done():
return
case <-time.After(time.Second):
}
continue
}
log.Trace().Caller().Msgf("stun request from %v", udpAddr)
pkt := buf[:bytesRead]
if !stun.Is(pkt) {
log.Trace().Caller().Msgf("udp packet is not stun")
continue
}
txid, err := stun.ParseBindingRequest(pkt)
if err != nil {
log.Trace().Caller().Err(err).Msgf("stun parse error")
continue
}
addr, _ := netip.AddrFromSlice(udpAddr.IP)
res := stun.Response(txid, netip.AddrPortFrom(addr, uint16(udpAddr.Port))) //nolint:gosec // port is always <=65535
_, err = packetConn.WriteTo(res, udpAddr)
if err != nil {
log.Trace().Caller().Err(err).Msgf("issue writing to UDP")
continue
}
}
}
func NewDERPVerifyTransport(handleVerifyRequest func(*http.Request, io.Writer) error) *DERPVerifyTransport {
return &DERPVerifyTransport{
handleVerifyRequest: handleVerifyRequest,
}
}
type DERPVerifyTransport struct {
handleVerifyRequest func(*http.Request, io.Writer) error
}
func (t *DERPVerifyTransport) RoundTrip(req *http.Request) (*http.Response, error) {
buf := new(bytes.Buffer)
err := t.handleVerifyRequest(req, buf)
if err != nil {
log.Error().Caller().Err(err).Msg("failed to handle client verify request")
return nil, err
}
resp := &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(buf),
}
return resp, nil
}
================================================
FILE: hscontrol/dns/extrarecords.go
================================================
package dns
import (
"context"
"crypto/sha256"
"encoding/json"
"errors"
"fmt"
"os"
"sync"
"github.com/cenkalti/backoff/v5"
"github.com/fsnotify/fsnotify"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
"tailscale.com/util/set"
)
// ErrPathIsDirectory is returned when a directory path is provided where a file is expected.
var ErrPathIsDirectory = errors.New("path is a directory, only file is supported")
type ExtraRecordsMan struct {
mu sync.RWMutex
records set.Set[tailcfg.DNSRecord]
watcher *fsnotify.Watcher
path string
updateCh chan []tailcfg.DNSRecord
closeCh chan struct{}
hashes map[string][32]byte
}
// NewExtraRecordsManager creates a new ExtraRecordsMan and starts watching the file at the given path.
func NewExtraRecordsManager(path string) (*ExtraRecordsMan, error) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, fmt.Errorf("creating watcher: %w", err)
}
fi, err := os.Stat(path)
if err != nil {
return nil, fmt.Errorf("getting file info: %w", err)
}
if fi.IsDir() {
return nil, fmt.Errorf("%w: %s", ErrPathIsDirectory, path)
}
records, hash, err := readExtraRecordsFromPath(path)
if err != nil {
return nil, fmt.Errorf("reading extra records from path: %w", err)
}
er := &ExtraRecordsMan{
watcher: watcher,
path: path,
records: set.SetOf(records),
hashes: map[string][32]byte{
path: hash,
},
closeCh: make(chan struct{}),
updateCh: make(chan []tailcfg.DNSRecord),
}
err = watcher.Add(path)
if err != nil {
return nil, fmt.Errorf("adding path to watcher: %w", err)
}
log.Trace().Caller().Strs("watching", watcher.WatchList()).Msg("started filewatcher")
return er, nil
}
func (e *ExtraRecordsMan) Records() []tailcfg.DNSRecord {
e.mu.RLock()
defer e.mu.RUnlock()
return e.records.Slice()
}
func (e *ExtraRecordsMan) Run() {
for {
select {
case <-e.closeCh:
return
case event, ok := <-e.watcher.Events:
if !ok {
log.Error().Caller().Msgf("file watcher event channel closing")
return
}
switch event.Op {
case fsnotify.Create, fsnotify.Write, fsnotify.Chmod:
log.Trace().Caller().Str("path", event.Name).Str("op", event.Op.String()).Msg("extra records received filewatch event")
if event.Name != e.path {
continue
}
e.updateRecords()
// If a file is removed or renamed, fsnotify will lose track of it
// and not watch it. We will therefore attempt to re-add it with a backoff.
case fsnotify.Remove, fsnotify.Rename:
_, err := backoff.Retry(context.Background(), func() (struct{}, error) {
if _, err := os.Stat(e.path); err != nil { //nolint:noinlineerr
return struct{}{}, err
}
return struct{}{}, nil
}, backoff.WithBackOff(backoff.NewExponentialBackOff()))
if err != nil {
log.Error().Caller().Err(err).Msgf("extra records filewatcher retrying to find file after delete")
continue
}
err = e.watcher.Add(e.path)
if err != nil {
log.Error().Caller().Err(err).Msgf("extra records filewatcher re-adding file after delete failed, giving up.")
return
}
log.Trace().Caller().Str("path", e.path).Msg("extra records file re-added after delete")
e.updateRecords()
}
case err, ok := <-e.watcher.Errors:
if !ok {
log.Error().Caller().Msgf("file watcher error channel closing")
return
}
log.Error().Caller().Err(err).Msgf("extra records filewatcher returned error: %q", err)
}
}
}
func (e *ExtraRecordsMan) Close() {
e.watcher.Close()
close(e.closeCh)
}
func (e *ExtraRecordsMan) UpdateCh() <-chan []tailcfg.DNSRecord {
return e.updateCh
}
func (e *ExtraRecordsMan) updateRecords() {
records, newHash, err := readExtraRecordsFromPath(e.path)
if err != nil {
log.Error().Caller().Err(err).Msgf("reading extra records from path: %s", e.path)
return
}
// If there are no records, ignore the update.
if records == nil {
return
}
e.mu.Lock()
defer e.mu.Unlock()
// If there has not been any change, ignore the update.
if oldHash, ok := e.hashes[e.path]; ok {
if newHash == oldHash {
return
}
}
oldCount := e.records.Len()
e.records = set.SetOf(records)
e.hashes[e.path] = newHash
log.Trace().Caller().Interface("records", e.records).Msgf("extra records updated from path, count old: %d, new: %d", oldCount, e.records.Len())
e.updateCh <- e.records.Slice()
}
// readExtraRecordsFromPath reads a JSON file of tailcfg.DNSRecord
// and returns the records and the hash of the file.
func readExtraRecordsFromPath(path string) ([]tailcfg.DNSRecord, [32]byte, error) {
b, err := os.ReadFile(path)
if err != nil {
return nil, [32]byte{}, fmt.Errorf("reading path: %s, err: %w", path, err)
}
// If the read was triggered too fast and the file is not yet complete, ignore the
// update if the file is empty. A subsequent filewatch event will trigger another
// read once the file is complete.
if len(b) == 0 {
return nil, [32]byte{}, nil
}
var records []tailcfg.DNSRecord
err = json.Unmarshal(b, &records)
if err != nil {
return nil, [32]byte{}, fmt.Errorf("unmarshalling records, content: %q: %w", string(b), err)
}
hash := sha256.Sum256(b)
return records, hash, nil
}
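// Example records file (editor's addition; the field names follow
// tailcfg.DNSRecord, JSON matching is case-insensitive, and the values are
// made up for illustration):
//
//	[
//	  {
//	    "name": "grafana.myvpn.example.com",
//	    "type": "A",
//	    "value": "100.64.0.3"
//	  }
//	]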
================================================
FILE: hscontrol/grpcv1.go
================================================
//go:generate buf generate --template ../buf.gen.yaml -o .. ../proto
// nolint
package hscontrol
import (
"context"
"errors"
"fmt"
"io"
"net/netip"
"os"
"slices"
"sort"
"strings"
"time"
"github.com/rs/zerolog/log"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/timestamppb"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
"tailscale.com/types/views"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/state"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
)
type headscaleV1APIServer struct { // v1.HeadscaleServiceServer
v1.UnimplementedHeadscaleServiceServer
h *Headscale
}
func newHeadscaleV1APIServer(h *Headscale) v1.HeadscaleServiceServer {
return headscaleV1APIServer{
h: h,
}
}
func (api headscaleV1APIServer) CreateUser(
ctx context.Context,
request *v1.CreateUserRequest,
) (*v1.CreateUserResponse, error) {
newUser := types.User{
Name: request.GetName(),
DisplayName: request.GetDisplayName(),
Email: request.GetEmail(),
ProfilePicURL: request.GetPictureUrl(),
}
user, policyChanged, err := api.h.state.CreateUser(newUser)
if err != nil {
return nil, status.Errorf(codes.Internal, "creating user: %s", err)
}
// CreateUser returns a policy change response if the user creation affected policy.
// This triggers a full policy re-evaluation for all connected nodes.
api.h.Change(policyChanged)
return &v1.CreateUserResponse{User: user.Proto()}, nil
}
func (api headscaleV1APIServer) RenameUser(
ctx context.Context,
request *v1.RenameUserRequest,
) (*v1.RenameUserResponse, error) {
oldUser, err := api.h.state.GetUserByID(types.UserID(request.GetOldId()))
if err != nil {
return nil, err
}
_, c, err := api.h.state.RenameUser(types.UserID(oldUser.ID), request.GetNewName())
if err != nil {
return nil, err
}
// Send policy update notifications if needed
api.h.Change(c)
newUser, err := api.h.state.GetUserByName(request.GetNewName())
if err != nil {
return nil, err
}
return &v1.RenameUserResponse{User: newUser.Proto()}, nil
}
func (api headscaleV1APIServer) DeleteUser(
ctx context.Context,
request *v1.DeleteUserRequest,
) (*v1.DeleteUserResponse, error) {
user, err := api.h.state.GetUserByID(types.UserID(request.GetId()))
if err != nil {
return nil, err
}
policyChanged, err := api.h.state.DeleteUser(types.UserID(user.ID))
if err != nil {
return nil, err
}
// Use the change returned from DeleteUser which includes proper policy updates
api.h.Change(policyChanged)
return &v1.DeleteUserResponse{}, nil
}
func (api headscaleV1APIServer) ListUsers(
ctx context.Context,
request *v1.ListUsersRequest,
) (*v1.ListUsersResponse, error) {
var err error
var users []types.User
switch {
case request.GetName() != "":
users, err = api.h.state.ListUsersWithFilter(&types.User{Name: request.GetName()})
case request.GetEmail() != "":
users, err = api.h.state.ListUsersWithFilter(&types.User{Email: request.GetEmail()})
case request.GetId() != 0:
users, err = api.h.state.ListUsersWithFilter(&types.User{Model: gorm.Model{ID: uint(request.GetId())}})
default:
users, err = api.h.state.ListAllUsers()
}
if err != nil {
return nil, err
}
response := make([]*v1.User, len(users))
for index, user := range users {
response[index] = user.Proto()
}
sort.Slice(response, func(i, j int) bool {
return response[i].Id < response[j].Id
})
return &v1.ListUsersResponse{Users: response}, nil
}
func (api headscaleV1APIServer) CreatePreAuthKey(
ctx context.Context,
request *v1.CreatePreAuthKeyRequest,
) (*v1.CreatePreAuthKeyResponse, error) {
var expiration time.Time
if request.GetExpiration() != nil {
expiration = request.GetExpiration().AsTime()
}
for _, tag := range request.AclTags {
err := validateTag(tag)
if err != nil {
return &v1.CreatePreAuthKeyResponse{
PreAuthKey: nil,
}, status.Error(codes.InvalidArgument, err.Error())
}
}
var userID *types.UserID
if request.GetUser() != 0 {
user, err := api.h.state.GetUserByID(types.UserID(request.GetUser()))
if err != nil {
return nil, err
}
userID = user.TypedID()
}
preAuthKey, err := api.h.state.CreatePreAuthKey(
userID,
request.GetReusable(),
request.GetEphemeral(),
&expiration,
request.AclTags,
)
if err != nil {
return nil, err
}
return &v1.CreatePreAuthKeyResponse{PreAuthKey: preAuthKey.Proto()}, nil
}
func (api headscaleV1APIServer) ExpirePreAuthKey(
ctx context.Context,
request *v1.ExpirePreAuthKeyRequest,
) (*v1.ExpirePreAuthKeyResponse, error) {
err := api.h.state.ExpirePreAuthKey(request.GetId())
if err != nil {
return nil, err
}
return &v1.ExpirePreAuthKeyResponse{}, nil
}
func (api headscaleV1APIServer) DeletePreAuthKey(
ctx context.Context,
request *v1.DeletePreAuthKeyRequest,
) (*v1.DeletePreAuthKeyResponse, error) {
err := api.h.state.DeletePreAuthKey(request.GetId())
if err != nil {
return nil, err
}
return &v1.DeletePreAuthKeyResponse{}, nil
}
func (api headscaleV1APIServer) ListPreAuthKeys(
ctx context.Context,
request *v1.ListPreAuthKeysRequest,
) (*v1.ListPreAuthKeysResponse, error) {
preAuthKeys, err := api.h.state.ListPreAuthKeys()
if err != nil {
return nil, err
}
response := make([]*v1.PreAuthKey, len(preAuthKeys))
for index, key := range preAuthKeys {
response[index] = key.Proto()
}
sort.Slice(response, func(i, j int) bool {
return response[i].Id < response[j].Id
})
return &v1.ListPreAuthKeysResponse{PreAuthKeys: response}, nil
}
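// RegisterNode backs the server-side half of interactive registration
// (editor's note, sketch only): a client started with
// `tailscale up --login-server <url>` prints a registration key, and an
// admin completes the registration from the CLI, roughly (exact flags may
// differ between versions):
//
//	headscale nodes register --user <name> --key <registration-key>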
func (api headscaleV1APIServer) RegisterNode(
ctx context.Context,
request *v1.RegisterNodeRequest,
) (*v1.RegisterNodeResponse, error) {
// Generate ephemeral registration key for tracking this registration flow in logs
registrationKey, err := util.GenerateRegistrationKey()
if err != nil {
log.Warn().Err(err).Msg("failed to generate registration key")
registrationKey = "" // Continue without key if generation fails
}
log.Trace().
Caller().
Str(zf.UserName, request.GetUser()).
Str(zf.RegistrationID, request.GetKey()).
Str(zf.RegistrationKey, registrationKey).
Msg("registering node")
registrationId, err := types.AuthIDFromString(request.GetKey())
if err != nil {
return nil, err
}
user, err := api.h.state.GetUserByName(request.GetUser())
if err != nil {
return nil, fmt.Errorf("looking up user: %w", err)
}
node, nodeChange, err := api.h.state.HandleNodeFromAuthPath(
registrationId,
types.UserID(user.ID),
nil,
util.RegisterMethodCLI,
)
if err != nil {
log.Error().
Str(zf.RegistrationKey, registrationKey).
Err(err).
Msg("failed to register node")
return nil, err
}
log.Info().
Str(zf.RegistrationKey, registrationKey).
EmbedObject(node).
Msg("node registered successfully")
// This is a bit of a back and forth, but we have a chicken-and-egg
// dependency here.
// Because of the way the policy manager works, the node must first exist
// in the database, then be added to the policy manager, and only then can
// we approve the routes. This means we get this dance where the node is
// first added to the database, then added to the policy manager via
// SaveNode (which automatically updates the policy manager), and then the
// routes can be auto-approved.
// As that only approves the struct object, we need to save it again and
// ensure we send an update.
// This works, but might be another good candidate for some sort of
// eventbus.
routeChange, err := api.h.state.AutoApproveRoutes(node)
if err != nil {
return nil, fmt.Errorf("auto approving routes: %w", err)
}
// Send both changes. Empty changes are ignored by Change().
api.h.Change(nodeChange, routeChange)
return &v1.RegisterNodeResponse{Node: node.Proto()}, nil
}
func (api headscaleV1APIServer) GetNode(
ctx context.Context,
request *v1.GetNodeRequest,
) (*v1.GetNodeResponse, error) {
node, ok := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if !ok {
return nil, status.Errorf(codes.NotFound, "node not found")
}
resp := node.Proto()
return &v1.GetNodeResponse{Node: resp}, nil
}
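// SetTags replaces the tags of a node. An empty tag list is rejected, since
// tagged nodes must keep at least one tag, and setting tags on a user-owned
// node converts it to a tagged node.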
func (api headscaleV1APIServer) SetTags(
ctx context.Context,
request *v1.SetTagsRequest,
) (*v1.SetTagsResponse, error) {
// Validate tags not empty - tagged nodes must have at least one tag
if len(request.GetTags()) == 0 {
return &v1.SetTagsResponse{
Node: nil,
}, status.Error(
codes.InvalidArgument,
"cannot remove all tags from a node - tagged nodes must have at least one tag",
)
}
// Validate tag format
for _, tag := range request.GetTags() {
err := validateTag(tag)
if err != nil {
return nil, err
}
}
// User XOR Tags: nodes are either tagged or user-owned, never both.
// Setting tags on a user-owned node converts it to a tagged node.
// Once tagged, a node cannot be converted back to user-owned.
_, found := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if !found {
return &v1.SetTagsResponse{
Node: nil,
}, status.Error(codes.NotFound, "node not found")
}
node, nodeChange, err := api.h.state.SetNodeTags(types.NodeID(request.GetNodeId()), request.GetTags())
if err != nil {
return &v1.SetTagsResponse{
Node: nil,
}, status.Error(codes.InvalidArgument, err.Error())
}
api.h.Change(nodeChange)
log.Trace().
Caller().
EmbedObject(node).
Strs("tags", request.GetTags()).
Msg("changing tags of node")
return &v1.SetTagsResponse{Node: node.Proto()}, nil
}
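// SetApprovedRoutes replaces the set of approved routes for a node. Exit
// routes are normalized: approving either 0.0.0.0/0 or ::/0 approves both.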
func (api headscaleV1APIServer) SetApprovedRoutes(
ctx context.Context,
request *v1.SetApprovedRoutesRequest,
) (*v1.SetApprovedRoutesResponse, error) {
log.Debug().
Caller().
Uint64(zf.NodeID, request.GetNodeId()).
Strs("requestedRoutes", request.GetRoutes()).
Msg("gRPC SetApprovedRoutes called")
var newApproved []netip.Prefix
for _, route := range request.GetRoutes() {
prefix, err := netip.ParsePrefix(route)
if err != nil {
return nil, fmt.Errorf("parsing route: %w", err)
}
// If the prefix is an exit route, add both. The client expects both
// to annotate the node as an exit node.
if prefix == tsaddr.AllIPv4() || prefix == tsaddr.AllIPv6() {
newApproved = append(newApproved, tsaddr.AllIPv4(), tsaddr.AllIPv6())
} else {
newApproved = append(newApproved, prefix)
}
}
slices.SortFunc(newApproved, netip.Prefix.Compare)
newApproved = slices.Compact(newApproved)
node, nodeChange, err := api.h.state.SetApprovedRoutes(types.NodeID(request.GetNodeId()), newApproved)
if err != nil {
return nil, status.Error(codes.InvalidArgument, err.Error())
}
// Always propagate node changes from SetApprovedRoutes
api.h.Change(nodeChange)
proto := node.Proto()
// Populate SubnetRoutes with PrimaryRoutes so it includes only the routes
// that are actively served by the node (per the architectural requirement in types/node.go).
primaryRoutes := api.h.state.GetNodePrimaryRoutes(node.ID())
proto.SubnetRoutes = util.PrefixesToString(primaryRoutes)
log.Debug().
Caller().
EmbedObject(node).
Strs("approvedRoutes", util.PrefixesToString(node.ApprovedRoutes().AsSlice())).
Strs("primaryRoutes", util.PrefixesToString(primaryRoutes)).
Strs("finalSubnetRoutes", proto.SubnetRoutes).
Msg("gRPC SetApprovedRoutes completed")
return &v1.SetApprovedRoutesResponse{Node: proto}, nil
}
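// validateTag checks that a tag is lowercase, contains no spaces, and starts
// with the "tag:" prefix. For example, "tag:prod" is valid, while "prod",
// "tag:Prod" and "tag:a b" are rejected.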
func validateTag(tag string) error {
if !strings.HasPrefix(tag, "tag:") {
return errors.New("tag must start with the string 'tag:'")
}
if strings.ToLower(tag) != tag {
return errors.New("tag should be lowercase")
}
if len(strings.Fields(tag)) > 1 {
return errors.New("tags must not contain spaces")
}
return nil
}
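// DeleteNode removes a node and propagates the resulting change to connected
// clients.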
func (api headscaleV1APIServer) DeleteNode(
ctx context.Context,
request *v1.DeleteNodeRequest,
) (*v1.DeleteNodeResponse, error) {
node, ok := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if !ok {
return nil, status.Errorf(codes.NotFound, "node not found")
}
nodeChange, err := api.h.state.DeleteNode(node)
if err != nil {
return nil, err
}
api.h.Change(nodeChange)
return &v1.DeleteNodeResponse{}, nil
}
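// ExpireNode sets a node's expiry. With disable_expiry set the node never
// expires; without an explicit expiry the node is expired immediately.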
func (api headscaleV1APIServer) ExpireNode(
ctx context.Context,
request *v1.ExpireNodeRequest,
) (*v1.ExpireNodeResponse, error) {
if request.GetDisableExpiry() && request.GetExpiry() != nil {
return nil, status.Error(
codes.InvalidArgument,
"cannot set both disable_expiry and expiry",
)
}
// Handle disable expiry request - node will never expire.
if request.GetDisableExpiry() {
node, nodeChange, err := api.h.state.SetNodeExpiry(
types.NodeID(request.GetNodeId()), nil,
)
if err != nil {
return nil, err
}
api.h.Change(nodeChange)
log.Trace().
Caller().
EmbedObject(node).
Msg("node expiry disabled")
return &v1.ExpireNodeResponse{Node: node.Proto()}, nil
}
expiry := time.Now()
if request.GetExpiry() != nil {
expiry = request.GetExpiry().AsTime()
}
node, nodeChange, err := api.h.state.SetNodeExpiry(
types.NodeID(request.GetNodeId()), &expiry,
)
if err != nil {
return nil, err
}
// TODO(kradalby): Ensure that both the selfupdate and peer updates are sent
api.h.Change(nodeChange)
log.Trace().
Caller().
EmbedObject(node).
Time(zf.ExpiresAt, expiry).
Msg("node expired")
return &v1.ExpireNodeResponse{Node: node.Proto()}, nil
}
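// RenameNode gives a node a new name and propagates the change.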
func (api headscaleV1APIServer) RenameNode(
ctx context.Context,
request *v1.RenameNodeRequest,
) (*v1.RenameNodeResponse, error) {
node, nodeChange, err := api.h.state.RenameNode(types.NodeID(request.GetNodeId()), request.GetNewName())
if err != nil {
return nil, err
}
// TODO(kradalby): investigate if we need selfupdate
api.h.Change(nodeChange)
log.Trace().
Caller().
EmbedObject(node).
Str(zf.NewName, request.GetNewName()).
Msg("node renamed")
return &v1.RenameNodeResponse{Node: node.Proto()}, nil
}
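// ListNodes returns all nodes, or only the nodes belonging to the given user
// when a user filter is set.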
func (api headscaleV1APIServer) ListNodes(
ctx context.Context,
request *v1.ListNodesRequest,
) (*v1.ListNodesResponse, error) {
// TODO(kradalby): it looks like this can be simplified a lot,
// the filtering of nodes by user, vs nodes as a whole can
// probably be done once.
// TODO(kradalby): This should be done in one tx.
if request.GetUser() != "" {
user, err := api.h.state.GetUserByName(request.GetUser())
if err != nil {
return nil, err
}
nodes := api.h.state.ListNodesByUser(types.UserID(user.ID))
response := nodesToProto(api.h.state, nodes)
return &v1.ListNodesResponse{Nodes: response}, nil
}
nodes := api.h.state.ListNodes()
response := nodesToProto(api.h.state, nodes)
return &v1.ListNodesResponse{Nodes: response}, nil
}
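// nodesToProto converts node views to their protobuf representation, mapping
// tagged nodes to the TaggedDevices user and populating SubnetRoutes from the
// primary and exit routes. Results are sorted by ID.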
func nodesToProto(state *state.State, nodes views.Slice[types.NodeView]) []*v1.Node {
response := make([]*v1.Node, nodes.Len())
for index, node := range nodes.All() {
resp := node.Proto()
// Tags-as-identity: tagged nodes show as TaggedDevices user in API responses
// (UserID may be set internally for "created by" tracking)
if node.IsTagged() {
resp.User = types.TaggedDevices.Proto()
}
resp.SubnetRoutes = util.PrefixesToString(append(state.GetNodePrimaryRoutes(node.ID()), node.ExitRoutes()...))
response[index] = resp
}
sort.Slice(response, func(i, j int) bool {
return response[i].Id < response[j].Id
})
return response
}
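// BackfillNodeIPs allocates missing IP addresses for existing nodes. The
// request must be explicitly confirmed.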
func (api headscaleV1APIServer) BackfillNodeIPs(
ctx context.Context,
request *v1.BackfillNodeIPsRequest,
) (*v1.BackfillNodeIPsResponse, error) {
log.Trace().Caller().Msg("backfill called")
if !request.Confirmed {
return nil, errors.New("not confirmed, aborting")
}
changes, err := api.h.state.BackfillNodeIPs()
if err != nil {
return nil, err
}
return &v1.BackfillNodeIPsResponse{Changes: changes}, nil
}
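// CreateApiKey creates a new API key with an optional expiration.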
func (api headscaleV1APIServer) CreateApiKey(
ctx context.Context,
request *v1.CreateApiKeyRequest,
) (*v1.CreateApiKeyResponse, error) {
var expiration time.Time
if request.GetExpiration() != nil {
expiration = request.GetExpiration().AsTime()
}
apiKey, _, err := api.h.state.CreateAPIKey(&expiration)
if err != nil {
return nil, err
}
return &v1.CreateApiKeyResponse{ApiKey: apiKey}, nil
}
// apiKeyIdentifier is implemented by requests that identify an API key.
type apiKeyIdentifier interface {
GetId() uint64
GetPrefix() string
}
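// Both v1.ExpireApiKeyRequest and v1.DeleteApiKeyRequest satisfy this
// interface.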
// getAPIKey retrieves an API key by ID or prefix from the request.
// Returns InvalidArgument if neither or both are provided.
func (api headscaleV1APIServer) getAPIKey(req apiKeyIdentifier) (*types.APIKey, error) {
hasID := req.GetId() != 0
hasPrefix := req.GetPrefix() != ""
switch {
case hasID && hasPrefix:
return nil, status.Error(codes.InvalidArgument, "provide either id or prefix, not both")
case hasID:
return api.h.state.GetAPIKeyByID(req.GetId())
case hasPrefix:
return api.h.state.GetAPIKey(req.GetPrefix())
default:
return nil, status.Error(codes.InvalidArgument, "must provide id or prefix")
}
}
func (api headscaleV1APIServer) ExpireApiKey(
ctx context.Context,
request *v1.ExpireApiKeyRequest,
) (*v1.ExpireApiKeyResponse, error) {
apiKey, err := api.getAPIKey(request)
if err != nil {
return nil, err
}
err = api.h.state.ExpireAPIKey(apiKey)
if err != nil {
return nil, err
}
return &v1.ExpireApiKeyResponse{}, nil
}
func (api headscaleV1APIServer) ListApiKeys(
ctx context.Context,
request *v1.ListApiKeysRequest,
) (*v1.ListApiKeysResponse, error) {
apiKeys, err := api.h.state.ListAPIKeys()
if err != nil {
return nil, err
}
response := make([]*v1.ApiKey, len(apiKeys))
for index, key := range apiKeys {
response[index] = key.Proto()
}
sort.Slice(response, func(i, j int) bool {
return response[i].Id < response[j].Id
})
return &v1.ListApiKeysResponse{ApiKeys: response}, nil
}
func (api headscaleV1APIServer) DeleteApiKey(
ctx context.Context,
request *v1.DeleteApiKeyRequest,
) (*v1.DeleteApiKeyResponse, error) {
apiKey, err := api.getAPIKey(request)
if err != nil {
return nil, err
}
if err := api.h.state.DestroyAPIKey(*apiKey); err != nil {
return nil, err
}
return &v1.DeleteApiKeyResponse{}, nil
}
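// GetPolicy returns the current policy, either from the database or read
// verbatim from the configured policy file, depending on policy.mode.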
func (api headscaleV1APIServer) GetPolicy(
_ context.Context,
_ *v1.GetPolicyRequest,
) (*v1.GetPolicyResponse, error) {
switch api.h.cfg.Policy.Mode {
case types.PolicyModeDB:
p, err := api.h.state.GetPolicy()
if err != nil {
return nil, fmt.Errorf("loading ACL from database: %w", err)
}
return &v1.GetPolicyResponse{
Policy: p.Data,
UpdatedAt: timestamppb.New(p.UpdatedAt),
}, nil
case types.PolicyModeFile:
// Read the file and return the contents as-is.
absPath := util.AbsolutePathFromConfigPath(api.h.cfg.Policy.Path)
f, err := os.Open(absPath)
if err != nil {
return nil, fmt.Errorf("reading policy from path %q: %w", absPath, err)
}
defer f.Close()
b, err := io.ReadAll(f)
if err != nil {
return nil, fmt.Errorf("reading policy from file: %w", err)
}
return &v1.GetPolicyResponse{Policy: string(b)}, nil
}
return nil, fmt.Errorf("no supported policy mode found in configuration, policy.mode: %q", api.h.cfg.Policy.Mode)
}
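// SetPolicy validates and stores a new policy in the database (database mode
// only), then reloads it so routes are re-evaluated and any resulting changes
// are distributed to clients.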
func (api headscaleV1APIServer) SetPolicy(
_ context.Context,
request *v1.SetPolicyRequest,
) (*v1.SetPolicyResponse, error) {
if api.h.cfg.Policy.Mode != types.PolicyModeDB {
return nil, types.ErrPolicyUpdateIsDisabled
}
p := request.GetPolicy()
// Validate and reject a configuration that would error when applied
// while creating a map response. This validation requires nodes, so an
// invalid policy could still be accepted if the server has no nodes
// yet, but it should help for the general case and for hot-reloading
// configurations.
nodes := api.h.state.ListNodes()
_, err := api.h.state.SetPolicy([]byte(p))
if err != nil {
return nil, fmt.Errorf("setting policy: %w", err)
}
if nodes.Len() > 0 {
_, err = api.h.state.SSHPolicy(nodes.At(0))
if err != nil {
return nil, fmt.Errorf("verifying SSH rules: %w", err)
}
}
updated, err := api.h.state.SetPolicyInDB(p)
if err != nil {
return nil, err
}
// Always reload policy to ensure route re-evaluation, even if policy content hasn't changed.
// This ensures that routes are re-evaluated for auto-approval in cases where routes
// were manually disabled but could now be auto-approved with the current policy.
cs, err := api.h.state.ReloadPolicy()
if err != nil {
return nil, fmt.Errorf("reloading policy: %w", err)
}
if len(cs) > 0 {
api.h.Change(cs...)
} else {
log.Debug().
Caller().
Msg("No policy changes to distribute because ReloadPolicy returned empty changeset")
}
response := &v1.SetPolicyResponse{
Policy: updated.Data,
UpdatedAt: timestamppb.New(updated.UpdatedAt),
}
log.Debug().
Caller().
Msg("gRPC SetPolicy completed successfully because response prepared")
return response, nil
}
// The following service calls are for testing and debugging
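// DebugCreateNode seeds the registration cache with a synthetic node so the
// CLI registration flow can be exercised in tests.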
func (api headscaleV1APIServer) DebugCreateNode(
ctx context.Context,
request *v1.DebugCreateNodeRequest,
) (*v1.DebugCreateNodeResponse, error) {
user, err := api.h.state.GetUserByName(request.GetUser())
if err != nil {
return nil, err
}
routes, err := util.StringToIPPrefix(request.GetRoutes())
if err != nil {
return nil, err
}
log.Trace().
Caller().
Interface("route-prefix", routes).
Interface("route-str", request.GetRoutes()).
Msg("Creating routes for node")
hostinfo := tailcfg.Hostinfo{
RoutableIPs: routes,
OS: "TestOS",
Hostname: request.GetName(),
}
registrationId, err := types.AuthIDFromString(request.GetKey())
if err != nil {
return nil, err
}
newNode := types.Node{
NodeKey: key.NewNode().Public(),
MachineKey: key.NewMachine().Public(),
Hostname: request.GetName(),
User: user,
Expiry: &time.Time{},
LastSeen: &time.Time{},
Hostinfo: &hostinfo,
}
log.Debug().
Caller().
Str("registration_id", registrationId.String()).
Msg("adding debug machine via CLI, appending to registration cache")
authRegReq := types.NewRegisterAuthRequest(newNode)
api.h.state.SetAuthCacheEntry(registrationId, authRegReq)
return &v1.DebugCreateNodeResponse{Node: newNode.Proto()}, nil
}
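// Health reports server health; currently it only checks database
// connectivity.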
func (api headscaleV1APIServer) Health(
ctx context.Context,
request *v1.HealthRequest,
) (*v1.HealthResponse, error) {
var healthErr error
response := &v1.HealthResponse{}
if err := api.h.state.PingDB(ctx); err != nil {
healthErr = fmt.Errorf("pinging database: %w", err)
} else {
response.DatabaseConnectivity = true
}
if healthErr != nil {
log.Error().Err(healthErr).Msg("health check failed")
}
return response, healthErr
}
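// AuthRegister is a thin wrapper around RegisterNode for the auth-approval
// flow.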
func (api headscaleV1APIServer) AuthRegister(
ctx context.Context,
request *v1.AuthRegisterRequest,
) (*v1.AuthRegisterResponse, error) {
resp, err := api.RegisterNode(ctx, &v1.RegisterNodeRequest{
Key: request.GetAuthId(),
User: request.GetUser(),
})
if err != nil {
return nil, err
}
return &v1.AuthRegisterResponse{Node: resp.GetNode()}, nil
}
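// AuthApprove approves the pending auth session identified by auth_id.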
func (api headscaleV1APIServer) AuthApprove(
ctx context.Context,
request *v1.AuthApproveRequest,
) (*v1.AuthApproveResponse, error) {
authID, err := types.AuthIDFromString(request.GetAuthId())
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "invalid auth_id: %v", err)
}
authReq, ok := api.h.state.GetAuthCacheEntry(authID)
if !ok {
return nil, status.Errorf(codes.NotFound, "no pending auth session for auth_id %s", authID)
}
authReq.FinishAuth(types.AuthVerdict{})
return &v1.AuthApproveResponse{}, nil
}
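// AuthReject rejects the pending auth session identified by auth_id.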
func (api headscaleV1APIServer) AuthReject(
ctx context.Context,
request *v1.AuthRejectRequest,
) (*v1.AuthRejectResponse, error) {
authID, err := types.AuthIDFromString(request.GetAuthId())
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "invalid auth_id: %v", err)
}
authReq, ok := api.h.state.GetAuthCacheEntry(authID)
if !ok {
return nil, status.Errorf(codes.NotFound, "no pending auth session for auth_id %s", authID)
}
authReq.FinishAuth(types.AuthVerdict{
Err: errors.New("auth request rejected"),
})
return &v1.AuthRejectResponse{}, nil
}
func (api headscaleV1APIServer) mustEmbedUnimplementedHeadscaleServiceServer() {}
================================================
FILE: hscontrol/grpcv1_test.go
================================================
package hscontrol
import (
"context"
"testing"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
func Test_validateTag(t *testing.T) {
type args struct {
tag string
}
tests := []struct {
name string
args args
wantErr bool
}{
{
name: "valid tag",
args: args{tag: "tag:test"},
wantErr: false,
},
{
name: "tag without tag prefix",
args: args{tag: "test"},
wantErr: true,
},
{
name: "uppercase tag",
args: args{tag: "tag:tEST"},
wantErr: true,
},
{
name: "tag that contains space",
args: args{tag: "tag:this is a spaced tag"},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateTag(tt.args.tag)
if (err != nil) != tt.wantErr {
t.Errorf("validateTag() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
// TestSetTags_Conversion tests the conversion of user-owned nodes to tagged nodes.
// The tags-as-identity model allows one-way conversion from user-owned to tagged.
// Tag authorization is checked via the policy manager - unauthorized tags are rejected.
func TestSetTags_Conversion(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create test user and nodes
user := app.state.CreateUserForTest("test-user")
// Create a pre-auth key WITHOUT tags for user-owned node
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
require.NoError(t, err)
machineKey1 := key.NewMachine()
nodeKey1 := key.NewNode()
// Register a user-owned node (via untagged PreAuthKey)
userOwnedReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey1.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "user-owned-node",
},
}
_, err = app.handleRegisterWithAuthKey(userOwnedReq, machineKey1.Public())
require.NoError(t, err)
// Get the created node
userOwnedNode, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
require.True(t, found)
// Create API server instance
apiServer := newHeadscaleV1APIServer(app)
tests := []struct {
name string
nodeID uint64
tags []string
wantErr bool
wantCode codes.Code
wantErrMessage string
}{
{
// Conversion is allowed, but tag authorization fails without tagOwners
name: "reject unauthorized tags on user-owned node",
nodeID: uint64(userOwnedNode.ID()),
tags: []string{"tag:server"},
wantErr: true,
wantCode: codes.InvalidArgument,
wantErrMessage: "requested tags",
},
{
// Conversion is allowed, but tag authorization fails without tagOwners
name: "reject multiple unauthorized tags",
nodeID: uint64(userOwnedNode.ID()),
tags: []string{"tag:server", "tag:database"},
wantErr: true,
wantCode: codes.InvalidArgument,
wantErrMessage: "requested tags",
},
{
name: "reject non-existent node",
nodeID: 99999,
tags: []string{"tag:server"},
wantErr: true,
wantCode: codes.NotFound,
wantErrMessage: "node not found",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
resp, err := apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
NodeId: tt.nodeID,
Tags: tt.tags,
})
if tt.wantErr {
require.Error(t, err)
st, ok := status.FromError(err)
require.True(t, ok, "error should be a gRPC status error")
assert.Equal(t, tt.wantCode, st.Code())
assert.Contains(t, st.Message(), tt.wantErrMessage)
assert.Nil(t, resp.GetNode())
} else {
require.NoError(t, err)
assert.NotNil(t, resp)
assert.NotNil(t, resp.GetNode())
}
})
}
}
// TestSetTags_TaggedNode tests that SetTags correctly identifies tagged nodes
// and doesn't reject them with the "user-owned nodes" error.
// Note: This test doesn't validate ACL tag authorization - that's tested elsewhere.
func TestSetTags_TaggedNode(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create test user and tagged pre-auth key
user := app.state.CreateUserForTest("test-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:initial"})
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Register a tagged node (via tagged PreAuthKey)
taggedReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node",
},
}
_, err = app.handleRegisterWithAuthKey(taggedReq, machineKey.Public())
require.NoError(t, err)
// Get the created node
taggedNode, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, taggedNode.IsTagged(), "Node should be tagged")
assert.False(t, taggedNode.UserID().Valid(), "Tagged node should not have UserID")
// Create API server instance
apiServer := newHeadscaleV1APIServer(app)
// Test: SetTags should work on tagged nodes.
resp, err := apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
NodeId: uint64(taggedNode.ID()),
Tags: []string{"tag:initial"}, // Keep existing tag to avoid ACL validation issues
})
// The call should NOT fail with "cannot set tags on user-owned nodes"
if err != nil {
st, ok := status.FromError(err)
require.True(t, ok)
// If error is about unauthorized tags, that's fine - ACL validation is working
// If error is about user-owned nodes, that's the bug we're testing for
assert.NotContains(t, st.Message(), "user-owned nodes", "Should not reject tagged nodes as user-owned")
} else {
// Success is also fine
assert.NotNil(t, resp)
}
}
// TestSetTags_CannotRemoveAllTags tests that SetTags rejects attempts to remove
// all tags from a tagged node, enforcing Tailscale's requirement that tagged
// nodes must have at least one tag.
func TestSetTags_CannotRemoveAllTags(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create test user and tagged pre-auth key
user := app.state.CreateUserForTest("test-user")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:server"})
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Register a tagged node
taggedReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-node",
},
}
_, err = app.handleRegisterWithAuthKey(taggedReq, machineKey.Public())
require.NoError(t, err)
// Get the created node
taggedNode, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, taggedNode.IsTagged())
// Create API server instance
apiServer := newHeadscaleV1APIServer(app)
// Attempt to remove all tags (empty array)
resp, err := apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
NodeId: uint64(taggedNode.ID()),
Tags: []string{}, // Empty - attempting to remove all tags
})
// Should fail with InvalidArgument error
require.Error(t, err)
st, ok := status.FromError(err)
require.True(t, ok, "error should be a gRPC status error")
assert.Equal(t, codes.InvalidArgument, st.Code())
assert.Contains(t, st.Message(), "cannot remove all tags")
assert.Nil(t, resp.GetNode())
}
// TestDeleteUser_ReturnsProperChangeSignal tests issue #2967 fix:
// When a user is deleted, the state should return a non-empty change signal
// to ensure policy manager is updated and clients are notified immediately.
func TestDeleteUser_ReturnsProperChangeSignal(t *testing.T) {
t.Parallel()
app := createTestApp(t)
// Create a user
user := app.state.CreateUserForTest("test-user-to-delete")
require.NotNil(t, user)
// Delete the user and verify a non-empty change is returned
// Issue #2967: Without the fix, DeleteUser returned an empty change,
// causing stale policy state until another user operation triggered an update.
changeSignal, err := app.state.DeleteUser(*user.TypedID())
require.NoError(t, err, "DeleteUser should succeed")
assert.False(t, changeSignal.IsEmpty(), "DeleteUser should return a non-empty change signal (issue #2967)")
}
// TestDeleteUser_TaggedNodeSurvives tests that deleting a user succeeds when
// the user's only nodes are tagged, and that those nodes remain in the
// NodeStore with nil UserID.
// https://github.com/juanfont/headscale/issues/3077
func TestDeleteUser_TaggedNodeSurvives(t *testing.T) {
t.Parallel()
app := createTestApp(t)
user := app.state.CreateUserForTest("legacy-user")
// Register a tagged node via the full auth flow.
tags := []string{"tag:server"}
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-server",
},
Expiry: time.Now().Add(24 * time.Hour),
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
// Verify the registered node has nil UserID (enforced invariant).
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
require.True(t, node.IsTagged())
assert.False(t, node.UserID().Valid(),
"tagged node should have nil UserID after registration")
nodeID := node.ID()
// NodeStore should not list the tagged node under any user.
nodesForUser := app.state.ListNodesByUser(types.UserID(user.ID))
assert.Equal(t, 0, nodesForUser.Len(),
"tagged nodes should not appear in nodesByUser index")
// Delete the user.
changeSignal, err := app.state.DeleteUser(*user.TypedID())
require.NoError(t, err)
assert.False(t, changeSignal.IsEmpty())
// Tagged node survives in the NodeStore.
nodeAfter, found := app.state.GetNodeByID(nodeID)
require.True(t, found, "tagged node should survive user deletion")
assert.True(t, nodeAfter.IsTagged())
assert.False(t, nodeAfter.UserID().Valid())
// Tagged node appears in the global list.
allNodes := app.state.ListNodes()
foundInAll := false
for _, n := range allNodes.All() {
if n.ID() == nodeID {
foundInAll = true
break
}
}
assert.True(t, foundInAll, "tagged node should appear in the global node list")
}
// TestExpireApiKey_ByID tests that API keys can be expired by ID.
func TestExpireApiKey_ByID(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
// Create an API key
createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{})
require.NoError(t, err)
require.NotEmpty(t, createResp.GetApiKey())
// List keys to get the ID
listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
require.Len(t, listResp.GetApiKeys(), 1)
keyID := listResp.GetApiKeys()[0].GetId()
// Expire by ID
_, err = apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{
Id: keyID,
})
require.NoError(t, err)
// Verify key is expired (expiration is set to now or in the past)
listResp, err = apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
require.Len(t, listResp.GetApiKeys(), 1)
assert.NotNil(t, listResp.GetApiKeys()[0].GetExpiration(), "expiration should be set")
}
// TestExpireApiKey_ByPrefix tests that API keys can still be expired by prefix.
func TestExpireApiKey_ByPrefix(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
// Create an API key
createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{})
require.NoError(t, err)
require.NotEmpty(t, createResp.GetApiKey())
// List keys to get the prefix
listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
require.Len(t, listResp.GetApiKeys(), 1)
keyPrefix := listResp.GetApiKeys()[0].GetPrefix()
// Expire by prefix
_, err = apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{
Prefix: keyPrefix,
})
require.NoError(t, err)
}
// TestDeleteApiKey_ByID tests that API keys can be deleted by ID.
func TestDeleteApiKey_ByID(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
// Create an API key
createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{})
require.NoError(t, err)
require.NotEmpty(t, createResp.GetApiKey())
// List keys to get the ID
listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
require.Len(t, listResp.GetApiKeys(), 1)
keyID := listResp.GetApiKeys()[0].GetId()
// Delete by ID
_, err = apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{
Id: keyID,
})
require.NoError(t, err)
// Verify key is deleted
listResp, err = apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
assert.Empty(t, listResp.GetApiKeys())
}
// TestDeleteApiKey_ByPrefix tests that API keys can still be deleted by prefix.
func TestDeleteApiKey_ByPrefix(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
// Create an API key
createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{})
require.NoError(t, err)
require.NotEmpty(t, createResp.GetApiKey())
// List keys to get the prefix
listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
require.Len(t, listResp.GetApiKeys(), 1)
keyPrefix := listResp.GetApiKeys()[0].GetPrefix()
// Delete by prefix
_, err = apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{
Prefix: keyPrefix,
})
require.NoError(t, err)
// Verify key is deleted
listResp, err = apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{})
require.NoError(t, err)
assert.Empty(t, listResp.GetApiKeys())
}
// TestExpireApiKey_NoIdentifier tests that an error is returned when neither ID nor prefix is provided.
func TestExpireApiKey_NoIdentifier(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
_, err := apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{})
require.Error(t, err)
st, ok := status.FromError(err)
require.True(t, ok, "error should be a gRPC status error")
assert.Equal(t, codes.InvalidArgument, st.Code())
assert.Contains(t, st.Message(), "must provide id or prefix")
}
// TestDeleteApiKey_NoIdentifier tests that an error is returned when neither ID nor prefix is provided.
func TestDeleteApiKey_NoIdentifier(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
_, err := apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{})
require.Error(t, err)
st, ok := status.FromError(err)
require.True(t, ok, "error should be a gRPC status error")
assert.Equal(t, codes.InvalidArgument, st.Code())
assert.Contains(t, st.Message(), "must provide id or prefix")
}
// TestExpireApiKey_BothIdentifiers tests that an error is returned when both ID and prefix are provided.
func TestExpireApiKey_BothIdentifiers(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
_, err := apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{
Id: 1,
Prefix: "test",
})
require.Error(t, err)
st, ok := status.FromError(err)
require.True(t, ok, "error should be a gRPC status error")
assert.Equal(t, codes.InvalidArgument, st.Code())
assert.Contains(t, st.Message(), "provide either id or prefix, not both")
}
// TestDeleteApiKey_BothIdentifiers tests that an error is returned when both ID and prefix are provided.
func TestDeleteApiKey_BothIdentifiers(t *testing.T) {
t.Parallel()
app := createTestApp(t)
apiServer := newHeadscaleV1APIServer(app)
_, err := apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{
Id: 1,
Prefix: "test",
})
require.Error(t, err)
st, ok := status.FromError(err)
require.True(t, ok, "error should be a gRPC status error")
assert.Equal(t, codes.InvalidArgument, st.Code())
assert.Contains(t, st.Message(), "provide either id or prefix, not both")
}
================================================
FILE: hscontrol/handlers.go
================================================
package hscontrol
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/assets"
"github.com/juanfont/headscale/hscontrol/templates"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
)
const (
// NoiseCapabilityVersion is used by Tailscale clients to indicate
// their codebase version. Tailscale clients can communicate over TS2021
// from CapabilityVersion 28, but we only have good support for it
// since https://github.com/tailscale/tailscale/pull/4323 (Noise in any HTTPS port).
//
// Related to this change, there is https://github.com/tailscale/tailscale/pull/5379,
// where CapabilityVersion 39 is introduced to indicate #4323 was merged.
//
// See also https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go
NoiseCapabilityVersion = 39
reservedResponseHeaderSize = 4
)
// httpError logs an error and sends an HTTP error response with the given error.
func httpError(w http.ResponseWriter, err error) {
if herr, ok := errors.AsType[HTTPError](err); ok {
http.Error(w, herr.Msg, herr.Code)
log.Error().Err(herr.Err).Int("code", herr.Code).Msgf("user msg: %s", herr.Msg)
} else {
http.Error(w, "internal server error", http.StatusInternalServerError)
log.Error().Err(err).Int("code", http.StatusInternalServerError).Msg("http internal server error")
}
}
// HTTPError represents an error that is surfaced to the user via web.
type HTTPError struct {
Code int // HTTP response code to send to client; 0 means 500
Msg string // Response body to send to client
Err error // Detailed error to log on the server
}
func (e HTTPError) Error() string { return fmt.Sprintf("http error[%d]: %s, %s", e.Code, e.Msg, e.Err) }
func (e HTTPError) Unwrap() error { return e.Err }
// NewHTTPError returns an HTTPError containing the given information.
func NewHTTPError(code int, msg string, err error) HTTPError {
return HTTPError{Code: code, Msg: msg, Err: err}
}
var errMethodNotAllowed = NewHTTPError(http.StatusMethodNotAllowed, "method not allowed", nil)
var ErrRegisterMethodCLIDoesNotSupportExpire = errors.New(
"machines registered with CLI do not support expiry",
)
func parseCapabilityVersion(req *http.Request) (tailcfg.CapabilityVersion, error) {
clientCapabilityStr := req.URL.Query().Get("v")
if clientCapabilityStr == "" {
return 0, NewHTTPError(http.StatusBadRequest, "capability version must be set", nil)
}
clientCapabilityVersion, err := strconv.Atoi(clientCapabilityStr)
if err != nil {
return 0, NewHTTPError(http.StatusBadRequest, "invalid capability version", fmt.Errorf("parsing capability version: %w", err))
}
return tailcfg.CapabilityVersion(clientCapabilityVersion), nil
}
func (h *Headscale) handleVerifyRequest(
req *http.Request,
writer io.Writer,
) error {
body, err := io.ReadAll(req.Body)
if err != nil {
return fmt.Errorf("reading request body: %w", err)
}
var derpAdmitClientRequest tailcfg.DERPAdmitClientRequest
if err := json.Unmarshal(body, &derpAdmitClientRequest); err != nil { //nolint:noinlineerr
return NewHTTPError(http.StatusBadRequest, "Bad Request: invalid JSON", fmt.Errorf("parsing DERP client request: %w", err))
}
nodes := h.state.ListNodes()
// Check if any node has the requested NodeKey
var nodeKeyFound bool
for _, node := range nodes.All() {
if node.NodeKey() == derpAdmitClientRequest.NodePublic {
nodeKeyFound = true
break
}
}
resp := &tailcfg.DERPAdmitClientResponse{
Allow: nodeKeyFound,
}
return json.NewEncoder(writer).Encode(resp)
}
// VerifyHandler implements the DERP client verification endpoint, see
// https://github.com/tailscale/tailscale/blob/964282d34f06ecc06ce644769c66b0b31d118340/derp/derp_server.go#L1159
// DERP servers use verifyClientsURL to verify whether a client is allowed to connect.
func (h *Headscale) VerifyHandler(
writer http.ResponseWriter,
req *http.Request,
) {
if req.Method != http.MethodPost {
httpError(writer, errMethodNotAllowed)
return
}
err := h.handleVerifyRequest(req, writer)
if err != nil {
httpError(writer, err)
return
}
writer.Header().Set("Content-Type", "application/json")
}
// KeyHandler provides the Headscale public key.
// Listens on /key.
func (h *Headscale) KeyHandler(
writer http.ResponseWriter,
req *http.Request,
) {
// New Tailscale clients send a 'v' parameter to indicate the CurrentCapabilityVersion
capVer, err := parseCapabilityVersion(req)
if err != nil {
httpError(writer, err)
return
}
// TS2021 (the Tailscale v2 protocol) requires a different key.
if capVer >= NoiseCapabilityVersion {
resp := tailcfg.OverTLSPublicKeyResponse{
PublicKey: h.noisePrivateKey.Public(),
}
writer.Header().Set("Content-Type", "application/json")
err := json.NewEncoder(writer).Encode(resp)
if err != nil {
log.Error().Err(err).Msg("failed to encode public key response")
}
return
}
}
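// HealthHandler responds with an application/health+json body whose status is
// "pass" or "fail" depending on database connectivity.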
func (h *Headscale) HealthHandler(
writer http.ResponseWriter,
req *http.Request,
) {
respond := func(err error) {
writer.Header().Set("Content-Type", "application/health+json; charset=utf-8")
res := struct {
Status string `json:"status"`
}{
Status: "pass",
}
if err != nil {
writer.WriteHeader(http.StatusInternalServerError)
res.Status = "fail"
}
encErr := json.NewEncoder(writer).Encode(res)
if encErr != nil {
log.Error().Err(encErr).Msg("failed to encode health response")
}
}
err := h.state.PingDB(req.Context())
if err != nil {
respond(err)
return
}
respond(nil)
}
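// RobotsHandler serves a robots.txt that disallows all crawlers.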
func (h *Headscale) RobotsHandler(
writer http.ResponseWriter,
req *http.Request,
) {
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusOK)
_, err := writer.Write([]byte("User-agent: *\nDisallow: /"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
}
// VersionHandler returns version information about the Headscale server.
// Listens on /version.
func (h *Headscale) VersionHandler(
writer http.ResponseWriter,
req *http.Request,
) {
writer.Header().Set("Content-Type", "application/json")
writer.WriteHeader(http.StatusOK)
versionInfo := types.GetVersionInfo()
err := json.NewEncoder(writer).Encode(versionInfo)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write version response")
}
}
type AuthProviderWeb struct {
serverURL string
}
func NewAuthProviderWeb(serverURL string) *AuthProviderWeb {
return &AuthProviderWeb{
serverURL: serverURL,
}
}
func (a *AuthProviderWeb) RegisterURL(authID types.AuthID) string {
return fmt.Sprintf(
"%s/register/%s",
strings.TrimSuffix(a.serverURL, "/"),
authID.String())
}
func (a *AuthProviderWeb) AuthURL(authID types.AuthID) string {
return fmt.Sprintf(
"%s/auth/%s",
strings.TrimSuffix(a.serverURL, "/"),
authID.String())
}
func (a *AuthProviderWeb) AuthHandler(
writer http.ResponseWriter,
req *http.Request,
) {
authID, err := authIDFromRequest(req)
if err != nil {
httpError(writer, err)
return
}
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write([]byte(templates.AuthWeb(
"Authentication check",
"Run the command below in the headscale server to approve this authentication request:",
"headscale auth approve --auth-id "+authID.String(),
).Render()))
if err != nil {
log.Error().Err(err).Msg("failed to write auth response")
}
}
func authIDFromRequest(req *http.Request) (types.AuthID, error) {
raw, err := urlParam[string](req, "auth_id")
if err != nil {
return "", NewHTTPError(http.StatusBadRequest, "invalid auth id", fmt.Errorf("parsing auth_id from URL: %w", err))
}
// We need to make sure we don't open ourselves up to XSS-style injections: if
// the parameter passed as a key cannot be parsed and validated as an AuthID,
// fail to render the template and log an error.
authId, err := types.AuthIDFromString(raw)
if err != nil {
return "", NewHTTPError(http.StatusBadRequest, "invalid auth id", fmt.Errorf("parsing auth_id from URL: %w", err))
}
return authId, nil
}
// RegisterHandler shows a simple message in the browser pointing to the CLI.
// Listens on /register/:registration_id.
//
// This is not part of the Tailscale control API, as we could send whatever URL
// in the RegisterResponse.AuthURL field.
func (a *AuthProviderWeb) RegisterHandler(
writer http.ResponseWriter,
req *http.Request,
) {
authId, err := authIDFromRequest(req)
if err != nil {
httpError(writer, err)
return
}
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write([]byte(templates.AuthWeb(
"Node registration",
"Run the command below in the headscale server to add this node to your network:",
fmt.Sprintf("headscale auth register --auth-id %s --user USERNAME", authId.String()),
).Render()))
if err != nil {
log.Error().Err(err).Msg("failed to write register response")
}
}
func FaviconHandler(writer http.ResponseWriter, req *http.Request) {
writer.Header().Set("Content-Type", "image/png")
http.ServeContent(writer, req, "favicon.ico", time.Unix(0, 0), bytes.NewReader(assets.Favicon))
}
// BlankHandler returns a blank page with favicon linked.
func BlankHandler(writer http.ResponseWriter, req *http.Request) {
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err := writer.Write([]byte(templates.BlankPage().Render()))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
}
================================================
FILE: hscontrol/mapper/batcher.go
================================================
package mapper
import (
"errors"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/juanfont/headscale/hscontrol/state"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
)
// Mapper errors.
var (
ErrInvalidNodeID = errors.New("invalid nodeID")
ErrMapperNil = errors.New("mapper is nil")
ErrNodeConnectionNil = errors.New("nodeConnection is nil")
ErrNodeNotFoundMapper = errors.New("node not found")
)
// offlineNodeCleanupThreshold is how long a node must be disconnected
// before cleanupOfflineNodes removes its in-memory state.
const offlineNodeCleanupThreshold = 15 * time.Minute
var mapResponseGenerated = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "headscale",
Name: "mapresponse_generated_total",
Help: "total count of mapresponses generated by response type",
}, []string{"response_type"})
func NewBatcher(batchTime time.Duration, workers int, mapper *mapper) *Batcher {
return &Batcher{
mapper: mapper,
workers: workers,
tick: time.NewTicker(batchTime),
// The size of this channel is arbitrarily chosen; the sizing should be revisited.
workCh: make(chan work, workers*200),
done: make(chan struct{}),
nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
}
}
// NewBatcherAndMapper creates a new Batcher with its mapper.
func NewBatcherAndMapper(cfg *types.Config, state *state.State) *Batcher {
m := newMapper(cfg, state)
b := NewBatcher(cfg.Tuning.BatchChangeDelay, cfg.Tuning.BatcherWorkers, m)
m.batcher = b
return b
}
// nodeConnection interface for different connection implementations.
type nodeConnection interface {
nodeID() types.NodeID
version() tailcfg.CapabilityVersion
send(data *tailcfg.MapResponse) error
// computePeerDiff returns peers that were previously sent but are no longer in the current list.
computePeerDiff(currentPeers []tailcfg.NodeID) (removed []tailcfg.NodeID)
// updateSentPeers updates the tracking of which peers have been sent to this node.
updateSentPeers(resp *tailcfg.MapResponse)
}
// generateMapResponse generates a [tailcfg.MapResponse] for the given NodeID based on the provided [change.Change].
func generateMapResponse(nc nodeConnection, mapper *mapper, r change.Change) (*tailcfg.MapResponse, error) {
nodeID := nc.nodeID()
version := nc.version()
if r.IsEmpty() {
return nil, nil //nolint:nilnil // Empty response means nothing to send
}
if nodeID == 0 {
return nil, fmt.Errorf("%w: %d", ErrInvalidNodeID, nodeID)
}
if mapper == nil {
return nil, fmt.Errorf("%w for nodeID %d", ErrMapperNil, nodeID)
}
// Handle self-only responses
if r.IsSelfOnly() && r.TargetNode != nodeID {
return nil, nil //nolint:nilnil // No response needed for other nodes when self-only
}
// Check if this is a self-update (the changed node is the receiving node).
// When true, ensure the response includes the node's self info so it sees
// its own attribute changes (e.g., tags changed via admin API).
isSelfUpdate := r.OriginNode != 0 && r.OriginNode == nodeID
var (
mapResp *tailcfg.MapResponse
err error
)
// Track metric using categorized type, not free-form reason
mapResponseGenerated.WithLabelValues(r.Type()).Inc()
// Check if this requires runtime peer visibility computation (e.g., policy changes)
if r.RequiresRuntimePeerComputation {
currentPeers := mapper.state.ListPeers(nodeID)
currentPeerIDs := make([]tailcfg.NodeID, 0, currentPeers.Len())
for _, peer := range currentPeers.All() {
currentPeerIDs = append(currentPeerIDs, peer.ID().NodeID())
}
removedPeers := nc.computePeerDiff(currentPeerIDs)
// Include self node when this is a self-update (e.g., node's own tags changed)
// so the node sees its updated self info along with new packet filters.
mapResp, err = mapper.policyChangeResponse(nodeID, version, removedPeers, currentPeers, isSelfUpdate)
} else if isSelfUpdate {
// Non-policy self-update: just send the self node info
mapResp, err = mapper.selfMapResponse(nodeID, version)
} else {
mapResp, err = mapper.buildFromChange(nodeID, version, &r)
}
if err != nil {
return nil, fmt.Errorf("generating map response for nodeID %d: %w", nodeID, err)
}
return mapResp, nil
}
// handleNodeChange generates and sends a [tailcfg.MapResponse] for a given node and [change.Change].
func handleNodeChange(nc nodeConnection, mapper *mapper, r change.Change) error {
if nc == nil {
return ErrNodeConnectionNil
}
nodeID := nc.nodeID()
log.Debug().Caller().Uint64(zf.NodeID, nodeID.Uint64()).Str(zf.Reason, r.Reason).Msg("node change processing started")
data, err := generateMapResponse(nc, mapper, r)
if err != nil {
return fmt.Errorf("generating map response for node %d: %w", nodeID, err)
}
if data == nil {
// No data to send is valid for some response types
return nil
}
// Send the map response
err = nc.send(data)
if err != nil {
return fmt.Errorf("sending map response to node %d: %w", nodeID, err)
}
// Update peer tracking after successful send
nc.updateSentPeers(data)
return nil
}
// workResult represents the result of processing a change.
type workResult struct {
mapResponse *tailcfg.MapResponse
err error
}
// work represents a unit of work to be processed by workers.
// All pending changes for a node are bundled into a single work item
// so that one worker processes them sequentially. This prevents
// out-of-order MapResponse delivery and races on lastSentPeers
// that occur when multiple workers process changes for the same node.
type work struct {
changes []change.Change
nodeID types.NodeID
resultCh chan<- workResult // optional channel for synchronous operations
}
// Batcher errors.
var (
errConnectionClosed = errors.New("connection channel already closed")
ErrInitialMapSendTimeout = errors.New("sending initial map: timeout")
ErrBatcherShuttingDown = errors.New("batcher shutting down")
ErrConnectionSendTimeout = errors.New("timeout sending to channel (likely stale connection)")
)
// Batcher batches and distributes map responses to connected nodes.
// It uses concurrent maps, per-node mutexes, and a worker pool.
//
// Lifecycle: Call Start() to spawn workers, then Close() to shut down.
// Close() blocks until all workers have exited. A Batcher must not
// be reused after Close().
type Batcher struct {
tick *time.Ticker
mapper *mapper
workers int
nodes *xsync.Map[types.NodeID, *multiChannelNodeConn]
// Work queue channel
workCh chan work
done chan struct{}
doneOnce sync.Once // Ensures done is only closed once
// wg tracks the doWork and all worker goroutines so that Close()
// can block until they have fully exited.
wg sync.WaitGroup
started atomic.Bool // Ensures Start() is only called once
// Metrics
totalNodes atomic.Int64
workQueuedCount atomic.Int64
workProcessed atomic.Int64
workErrors atomic.Int64
}
// AddNode registers a new node connection with the batcher and sends an initial map response.
// It creates or updates the node's connection data, validates the initial map generation,
// and notifies other nodes that this node has come online.
// The stop function tears down the owning session if this connection is later declared stale.
func (b *Batcher) AddNode(
id types.NodeID,
c chan<- *tailcfg.MapResponse,
version tailcfg.CapabilityVersion,
stop func(),
) error {
addNodeStart := time.Now()
nlog := log.With().Uint64(zf.NodeID, id.Uint64()).Logger()
// Generate connection ID
connID := generateConnectionID()
// Create new connection entry
now := time.Now()
newEntry := &connectionEntry{
id: connID,
c: c,
version: version,
created: now,
stop: stop,
}
// Initialize last used timestamp
newEntry.lastUsed.Store(now.Unix())
// Get or create multiChannelNodeConn - this reuses existing offline nodes for rapid reconnection
nodeConn, loaded := b.nodes.LoadOrStore(id, newMultiChannelNodeConn(id, b.mapper))
if !loaded {
b.totalNodes.Add(1)
}
// Add connection to the list (lock-free)
nodeConn.addConnection(newEntry)
// Use the worker pool for controlled concurrency instead of direct generation
initialMap, err := b.MapResponseFromChange(id, change.FullSelf(id))
if err != nil {
nlog.Error().Err(err).Msg("initial map generation failed")
nodeConn.removeConnectionByChannel(c)
if !nodeConn.hasActiveConnections() {
nodeConn.markDisconnected()
}
return fmt.Errorf("generating initial map for node %d: %w", id, err)
}
// Use a blocking send with a timeout for the initial map: the channel should be
// ready, but this avoids hanging forever if the receiver isn't ready yet.
select {
case c <- initialMap:
// Success
case <-time.After(5 * time.Second): //nolint:mnd
nlog.Error().Err(ErrInitialMapSendTimeout).Msg("initial map send timeout")
nlog.Debug().Caller().Dur("timeout.duration", 5*time.Second). //nolint:mnd
Msg("initial map send timed out because channel was blocked or receiver not ready")
nodeConn.removeConnectionByChannel(c)
if !nodeConn.hasActiveConnections() {
nodeConn.markDisconnected()
}
return fmt.Errorf("%w for node %d", ErrInitialMapSendTimeout, id)
}
// Mark the node as connected now that the initial map was sent.
nodeConn.markConnected()
// Node will automatically receive updates through the normal flow
// The initial full map already contains all current state
nlog.Debug().Caller().Dur(zf.TotalDuration, time.Since(addNodeStart)).
Int("active.connections", nodeConn.getActiveConnectionCount()).
Msg("node connection established in batcher")
return nil
}
// RemoveNode disconnects a node from the batcher, marking it as offline and cleaning up its state.
// It validates the connection channel matches one of the current connections, closes that specific connection,
// and keeps the node entry alive for rapid reconnections instead of aggressive deletion.
// Reports if the node still has active connections after removal.
func (b *Batcher) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool {
nlog := log.With().Uint64(zf.NodeID, id.Uint64()).Logger()
nodeConn, exists := b.nodes.Load(id)
if !exists || nodeConn == nil {
nlog.Debug().Caller().Msg("removeNode called for non-existent node")
return false
}
// Remove specific connection
removed := nodeConn.removeConnectionByChannel(c)
if !removed {
nlog.Debug().Caller().Msg("removeNode: channel not found, connection already removed or invalid")
}
// Check if node has any remaining active connections
if nodeConn.hasActiveConnections() {
nlog.Debug().Caller().
Int("active.connections", nodeConn.getActiveConnectionCount()).
Msg("node connection removed but keeping online, other connections remain")
return true // Node still has active connections
}
// No active connections - keep the node entry alive for rapid reconnections
// The node will get a fresh full map when it reconnects
nlog.Debug().Caller().Msg("node disconnected from batcher, keeping entry for rapid reconnection")
nodeConn.markDisconnected()
return false
}
// AddWork queues a change to be processed by the batcher.
func (b *Batcher) AddWork(r ...change.Change) {
b.addToBatch(r...)
}
func (b *Batcher) Start() {
if !b.started.CompareAndSwap(false, true) {
return
}
b.wg.Add(1)
go b.doWork()
}
func (b *Batcher) Close() {
// Signal shutdown to all goroutines, only once.
// Workers and queueWork both select on done, so closing it
// is sufficient for graceful shutdown. We intentionally do NOT
// close workCh here because processBatchedChanges or
// MapResponseFromChange may still be sending on it concurrently.
b.doneOnce.Do(func() {
close(b.done)
})
// Wait for all worker goroutines (and doWork) to exit before
// tearing down node connections. This prevents workers from
// sending on connections that are being closed concurrently.
b.wg.Wait()
// Stop the ticker to prevent resource leaks.
b.tick.Stop()
// Close the underlying channels supplying the data to the clients.
b.nodes.Range(func(nodeID types.NodeID, conn *multiChannelNodeConn) bool {
if conn == nil {
return true
}
conn.close()
return true
})
}
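// doWork spawns the worker pool and then drives the batch ticker: it flushes
// pending changes on each tick and periodically cleans up offline nodes until
// the batcher is closed.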
func (b *Batcher) doWork() {
defer b.wg.Done()
for i := range b.workers {
b.wg.Add(1)
go b.worker(i + 1)
}
// Create a cleanup ticker for removing truly disconnected nodes
cleanupTicker := time.NewTicker(5 * time.Minute)
defer cleanupTicker.Stop()
for {
select {
case <-b.tick.C:
// Process batched changes
b.processBatchedChanges()
case <-cleanupTicker.C:
// Clean up nodes that have been offline for too long
b.cleanupOfflineNodes()
case <-b.done:
log.Info().Msg("batcher done channel closed, stopping to feed workers")
return
}
}
}
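// worker consumes work items from workCh. Synchronous items (with a resultCh)
// generate a single map response for a blocking caller; asynchronous items
// apply all bundled changes for a node sequentially under workMu.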
func (b *Batcher) worker(workerID int) {
defer b.wg.Done()
wlog := log.With().Int(zf.WorkerID, workerID).Logger()
for {
select {
case w, ok := <-b.workCh:
if !ok {
wlog.Debug().Msg("worker channel closing, shutting down")
return
}
b.workProcessed.Add(1)
// Synchronous path: a caller is blocking on resultCh
// waiting for a generated MapResponse (used by AddNode
// for the initial map). Always contains a single change.
if w.resultCh != nil {
var result workResult
if nc, exists := b.nodes.Load(w.nodeID); exists && nc != nil {
// Hold workMu so concurrent async work for this
// node waits until the initial map is sent.
nc.workMu.Lock()
var err error
result.mapResponse, err = generateMapResponse(nc, b.mapper, w.changes[0])
result.err = err
if result.err != nil {
b.workErrors.Add(1)
wlog.Error().Err(result.err).
Uint64(zf.NodeID, w.nodeID.Uint64()).
Str(zf.Reason, w.changes[0].Reason).
Msg("failed to generate map response for synchronous work")
} else if result.mapResponse != nil {
nc.updateSentPeers(result.mapResponse)
}
nc.workMu.Unlock()
} else {
result.err = fmt.Errorf("%w: %d", ErrNodeNotFoundMapper, w.nodeID)
b.workErrors.Add(1)
wlog.Error().Err(result.err).
Uint64(zf.NodeID, w.nodeID.Uint64()).
Msg("node not found for synchronous work")
}
select {
case w.resultCh <- result:
case <-b.done:
return
}
continue
}
// Async path: process all bundled changes sequentially.
// workMu ensures that if another worker picks up the next
// tick's bundle for the same node, it waits until we
// finish — preventing out-of-order delivery and races
// on lastSentPeers (Clear+Store vs Range).
if nc, exists := b.nodes.Load(w.nodeID); exists && nc != nil {
nc.workMu.Lock()
for _, ch := range w.changes {
err := nc.change(ch)
if err != nil {
b.workErrors.Add(1)
wlog.Error().Err(err).
Uint64(zf.NodeID, w.nodeID.Uint64()).
Str(zf.Reason, ch.Reason).
Msg("failed to apply change")
}
}
nc.workMu.Unlock()
}
case <-b.done:
wlog.Debug().Msg("batcher shutting down, exiting worker")
return
}
}
}
// queueWork queues work, returning early if the batcher is shutting down.
func (b *Batcher) queueWork(w work) {
b.workQueuedCount.Add(1)
select {
case b.workCh <- w:
// Successfully queued
case <-b.done:
// Batcher is shutting down
return
}
}
// addToBatch adds changes to the pending batch.
func (b *Batcher) addToBatch(changes ...change.Change) {
// Clean up any nodes being permanently removed from the system.
//
// This handles the case where a node is deleted from state but the batcher
// still has it registered. By cleaning up here, we prevent "node not found"
// errors when workers try to generate map responses for deleted nodes.
//
// Safety: change.Change.PeersRemoved is ONLY populated when nodes are actually
// deleted from the system (via change.NodeRemoved in state.DeleteNode). Policy
// changes that affect peer visibility do NOT use this field - they set
// RequiresRuntimePeerComputation=true and compute removed peers at runtime,
// putting them in tailcfg.MapResponse.PeersRemoved (a different struct).
// Therefore, this cleanup only removes nodes that are truly being deleted,
// not nodes that are still connected but have lost visibility of certain peers.
//
// See: https://github.com/juanfont/headscale/issues/2924
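// For illustration (hypothetical node ID), a delete emitted by
// state.DeleteNode arrives here as:
//
//	change.Change{Reason: "node deleted", PeersRemoved: []types.NodeID{3}}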
for _, ch := range changes {
for _, removedID := range ch.PeersRemoved {
if _, existed := b.nodes.LoadAndDelete(removedID); existed {
b.totalNodes.Add(-1)
log.Debug().
Uint64(zf.NodeID, removedID.Uint64()).
Msg("removed deleted node from batcher")
}
}
}
// Short circuit if any of the changes is a full update, which
// means we can skip sending individual changes.
if change.HasFull(changes) {
b.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
if nc == nil {
return true
}
nc.pendingMu.Lock()
nc.pending = []change.Change{change.FullUpdate()}
nc.pendingMu.Unlock()
return true
})
return
}
broadcast, targeted := change.SplitTargetedAndBroadcast(changes)
// Handle targeted changes - send only to the specific node
for _, ch := range targeted {
if nc, ok := b.nodes.Load(ch.TargetNode); ok && nc != nil {
nc.appendPending(ch)
}
}
// Handle broadcast changes - send to all nodes, filtering as needed
if len(broadcast) > 0 {
b.nodes.Range(func(nodeID types.NodeID, nc *multiChannelNodeConn) bool {
if nc == nil {
return true
}
filtered := change.FilterForNode(nodeID, broadcast)
if len(filtered) > 0 {
nc.appendPending(filtered...)
}
return true
})
}
}
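// For illustration (field values hypothetical): a targeted change lands only
// in TargetNode's pending queue, while an untargeted change is broadcast and
// filtered per node:
//
//	b.AddWork(change.Change{
//		Reason:      "peer patch",
//		TargetNode:  types.NodeID(7),
//		PeerPatches: []*tailcfg.PeerChange{{NodeID: 7}},
//	})
//	b.AddWork(change.DERPMap()) // broadcast, appended to every node's pending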
// processBatchedChanges processes all pending batched changes.
func (b *Batcher) processBatchedChanges() {
b.nodes.Range(func(nodeID types.NodeID, nc *multiChannelNodeConn) bool {
if nc == nil {
return true
}
pending := nc.drainPending()
if len(pending) == 0 {
return true
}
// Queue a single work item containing all pending changes.
// One item per node ensures a single worker processes them
// sequentially, preventing out-of-order delivery.
b.queueWork(work{changes: pending, nodeID: nodeID, resultCh: nil})
return true
})
}
// cleanupOfflineNodes removes nodes that have been offline for too long to prevent memory leaks.
// Uses Compute() for atomic check-and-delete to prevent TOCTOU races where a node
// reconnects between the hasActiveConnections() check and the Delete() call.
func (b *Batcher) cleanupOfflineNodes() {
var nodesToCleanup []types.NodeID
// Find nodes that have been offline for too long by scanning b.nodes
// and checking each node's disconnectedAt timestamp.
b.nodes.Range(func(nodeID types.NodeID, nc *multiChannelNodeConn) bool {
if nc != nil && !nc.hasActiveConnections() && nc.offlineDuration() > offlineNodeCleanupThreshold {
nodesToCleanup = append(nodesToCleanup, nodeID)
}
return true
})
// Clean up the identified nodes using Compute() for atomic check-and-delete.
// This prevents a TOCTOU race where a node reconnects (adding an active
// connection) between the hasActiveConnections() check and the Delete() call.
cleaned := 0
for _, nodeID := range nodesToCleanup {
b.nodes.Compute(
nodeID,
func(conn *multiChannelNodeConn, loaded bool) (*multiChannelNodeConn, xsync.ComputeOp) {
if !loaded || conn == nil || conn.hasActiveConnections() {
return conn, xsync.CancelOp
}
// Perform all bookkeeping inside the Compute callback so
// that a concurrent AddNode (which calls LoadOrStore on
// b.nodes) cannot slip in between the delete and the
// counter update.
b.totalNodes.Add(-1)
cleaned++
log.Info().Uint64(zf.NodeID, nodeID.Uint64()).
Dur("offline_duration", offlineNodeCleanupThreshold).
Msg("cleaning up node that has been offline for too long")
return conn, xsync.DeleteOp
},
)
}
if cleaned > 0 {
log.Info().Int(zf.CleanedNodes, cleaned).
Msg("completed cleanup of long-offline nodes")
}
}
// IsConnected is a lock-free read that checks if a node is connected.
// A node is considered connected if it has active connections or has
// not been marked as disconnected.
func (b *Batcher) IsConnected(id types.NodeID) bool {
nodeConn, exists := b.nodes.Load(id)
if !exists || nodeConn == nil {
return false
}
return nodeConn.isConnected()
}
// ConnectedMap returns a lock-free map of all known nodes and their
// connection status (true = connected, false = disconnected).
func (b *Batcher) ConnectedMap() *xsync.Map[types.NodeID, bool] {
ret := xsync.NewMap[types.NodeID, bool]()
b.nodes.Range(func(id types.NodeID, nc *multiChannelNodeConn) bool {
if nc != nil {
ret.Store(id, nc.isConnected())
}
return true
})
return ret
}
// MapResponseFromChange queues work to generate a map response and waits for the result.
// This allows synchronous map generation using the same worker pool.
func (b *Batcher) MapResponseFromChange(id types.NodeID, ch change.Change) (*tailcfg.MapResponse, error) {
resultCh := make(chan workResult, 1)
// Queue the work with a result channel using the safe queueing method
b.queueWork(work{changes: []change.Change{ch}, nodeID: id, resultCh: resultCh})
// Wait for the result
select {
case result := <-resultCh:
return result.mapResponse, result.err
case <-b.done:
return nil, fmt.Errorf("%w while generating map response for node %d", ErrBatcherShuttingDown, id)
}
}
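// Sketch of a synchronous call (node ID hypothetical); the caller blocks until
// a worker generates the response or the batcher shuts down:
//
//	resp, err := b.MapResponseFromChange(types.NodeID(1), change.DERPMap())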
// DebugNodeInfo contains debug information about a node's connections.
type DebugNodeInfo struct {
Connected bool `json:"connected"`
ActiveConnections int `json:"active_connections"`
}
// Debug returns a pre-baked map of node debug information for the debug interface.
func (b *Batcher) Debug() map[types.NodeID]DebugNodeInfo {
result := make(map[types.NodeID]DebugNodeInfo)
b.nodes.Range(func(id types.NodeID, nc *multiChannelNodeConn) bool {
if nc == nil {
return true
}
result[id] = DebugNodeInfo{
Connected: nc.isConnected(),
ActiveConnections: nc.getActiveConnectionCount(),
}
return true
})
return result
}
func (b *Batcher) DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) {
return b.mapper.debugMapResponses()
}
// WorkErrors returns the count of work errors encountered.
// This is primarily useful for testing and debugging.
func (b *Batcher) WorkErrors() int64 {
return b.workErrors.Load()
}
================================================
FILE: hscontrol/mapper/batcher_bench_test.go
================================================
package mapper
// Benchmarks for batcher components and full pipeline.
//
// Organized into three tiers:
// - Component benchmarks: individual functions (connectionEntry.send, computePeerDiff, etc.)
// - System benchmarks: batching mechanics (addToBatch, processBatchedChanges, broadcast)
// - Full pipeline benchmarks: end-to-end with real DB (gated behind !testing.Short())
//
// All benchmarks use sub-benchmarks with 10/100/1000 node counts for scaling analysis.
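//
// To run only these benchmarks with the standard go test flags (add -short to
// skip the DB-backed full-pipeline benchmarks):
//
//	go test ./hscontrol/mapper -run '^$' -bench . -benchtime 10x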
import (
"fmt"
"sync"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog"
"tailscale.com/tailcfg"
)
// ============================================================================
// Component Benchmarks
// ============================================================================
// BenchmarkConnectionEntry_Send measures the throughput of sending a single
// MapResponse through a connectionEntry with a buffered channel.
func BenchmarkConnectionEntry_Send(b *testing.B) {
ch := make(chan *tailcfg.MapResponse, b.N+1)
entry := makeConnectionEntry("bench-conn", ch)
data := testMapResponse()
b.ResetTimer()
for range b.N {
_ = entry.send(data)
}
}
// BenchmarkMultiChannelSend measures broadcast throughput to multiple connections.
func BenchmarkMultiChannelSend(b *testing.B) {
for _, connCount := range []int{1, 3, 10} {
b.Run(fmt.Sprintf("%dconn", connCount), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
channels := make([]chan *tailcfg.MapResponse, connCount)
for i := range channels {
channels[i] = make(chan *tailcfg.MapResponse, b.N+1)
mc.addConnection(makeConnectionEntry(fmt.Sprintf("conn-%d", i), channels[i]))
}
data := testMapResponse()
b.ResetTimer()
for range b.N {
_ = mc.send(data)
}
})
}
}
// BenchmarkComputePeerDiff measures the cost of computing peer diffs at scale.
func BenchmarkComputePeerDiff(b *testing.B) {
for _, peerCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dpeers", peerCount), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
// Populate tracked peers: 1..peerCount
for i := 1; i <= peerCount; i++ {
mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
}
// Current peers: remove ~10% (every 10th peer is missing)
current := make([]tailcfg.NodeID, 0, peerCount)
for i := 1; i <= peerCount; i++ {
if i%10 != 0 {
current = append(current, tailcfg.NodeID(i))
}
}
b.ResetTimer()
for range b.N {
_ = mc.computePeerDiff(current)
}
})
}
}
// BenchmarkUpdateSentPeers measures the cost of updating peer tracking state.
func BenchmarkUpdateSentPeers(b *testing.B) {
for _, peerCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dpeers_full", peerCount), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
// Pre-build response with full peer list
peerIDs := make([]tailcfg.NodeID, peerCount)
for i := range peerIDs {
peerIDs[i] = tailcfg.NodeID(i + 1)
}
resp := testMapResponseWithPeers(peerIDs...)
b.ResetTimer()
for range b.N {
mc.updateSentPeers(resp)
}
})
b.Run(fmt.Sprintf("%dpeers_incremental", peerCount), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
// Pre-populate with existing peers
for i := 1; i <= peerCount; i++ {
mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
}
// Build incremental response: add 10% new peers
addCount := peerCount / 10
if addCount == 0 {
addCount = 1
}
resp := testMapResponse()
resp.PeersChanged = make([]*tailcfg.Node, addCount)
for i := range addCount {
resp.PeersChanged[i] = &tailcfg.Node{ID: tailcfg.NodeID(peerCount + i + 1)}
}
b.ResetTimer()
for range b.N {
mc.updateSentPeers(resp)
}
})
}
}
// ============================================================================
// System Benchmarks (no DB, batcher mechanics only)
// ============================================================================
// benchBatcher creates a lightweight batcher for benchmarks. Unlike the test
// helper, it registers no cleanup; callers close done, stop the ticker, and
// disable logging themselves.
func benchBatcher(nodeCount, bufferSize int) (*Batcher, map[types.NodeID]chan *tailcfg.MapResponse) {
b := &Batcher{
tick: time.NewTicker(1 * time.Hour), // never fires during bench
workers: 4,
workCh: make(chan work, 4*200),
nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
done: make(chan struct{}),
}
channels := make(map[types.NodeID]chan *tailcfg.MapResponse, nodeCount)
for i := 1; i <= nodeCount; i++ {
id := types.NodeID(i) //nolint:gosec // benchmark with small controlled values
mc := newMultiChannelNodeConn(id, nil)
ch := make(chan *tailcfg.MapResponse, bufferSize)
entry := &connectionEntry{
id: fmt.Sprintf("conn-%d", i),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
b.nodes.Store(id, mc)
channels[id] = ch
}
b.totalNodes.Store(int64(nodeCount))
return b, channels
}
// BenchmarkAddToBatch_Broadcast measures the cost of broadcasting a change
// to all nodes via addToBatch (no worker processing, just queuing).
func BenchmarkAddToBatch_Broadcast(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
ch := change.DERPMap()
b.ResetTimer()
for range b.N {
batcher.addToBatch(ch)
// Clear pending to avoid unbounded growth
batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.drainPending()
return true
})
}
})
}
}
// BenchmarkAddToBatch_Targeted measures the cost of adding a targeted change
// to a single node.
func BenchmarkAddToBatch_Targeted(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for i := range b.N {
targetID := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
ch := change.Change{
Reason: "bench-targeted",
TargetNode: targetID,
PeerPatches: []*tailcfg.PeerChange{
{NodeID: tailcfg.NodeID(targetID)}, //nolint:gosec // benchmark
},
}
batcher.addToBatch(ch)
// Clear pending periodically to avoid growth
if i%100 == 99 {
batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.drainPending()
return true
})
}
}
})
}
}
// BenchmarkAddToBatch_FullUpdate measures the cost of a FullUpdate broadcast.
func BenchmarkAddToBatch_FullUpdate(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for range b.N {
batcher.addToBatch(change.FullUpdate())
}
})
}
}
// BenchmarkProcessBatchedChanges measures the cost of moving pending changes
// to the work queue.
func BenchmarkProcessBatchedChanges(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dpending", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 10)
// Use a very large work channel to avoid blocking
batcher.workCh = make(chan work, nodeCount*b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for range b.N {
b.StopTimer()
// Seed pending changes
for i := 1; i <= nodeCount; i++ {
if nc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec // benchmark
nc.appendPending(change.DERPMap())
}
}
b.StartTimer()
batcher.processBatchedChanges()
}
})
}
}
// BenchmarkBroadcastToN measures end-to-end broadcast: addToBatch + processBatchedChanges
// to N nodes. Does NOT include worker processing (MapResponse generation).
func BenchmarkBroadcastToN(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 10)
batcher.workCh = make(chan work, nodeCount*b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
ch := change.DERPMap()
b.ResetTimer()
for range b.N {
batcher.addToBatch(ch)
batcher.processBatchedChanges()
}
})
}
}
// BenchmarkMultiChannelBroadcast measures the cost of sending a MapResponse
// to N nodes each with varying connection counts.
func BenchmarkMultiChannelBroadcast(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
// Add extra connections to every 3rd node
for i := 1; i <= nodeCount; i++ {
if i%3 == 0 {
if mc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec // benchmark
for j := range 2 {
ch := make(chan *tailcfg.MapResponse, b.N+1)
entry := &connectionEntry{
id: fmt.Sprintf("extra-%d-%d", i, j),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
}
}
}
}
data := testMapResponse()
b.ResetTimer()
for range b.N {
batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
_ = mc.send(data)
return true
})
}
})
}
}
// BenchmarkConcurrentAddToBatch measures addToBatch throughput under
// concurrent access from multiple goroutines.
func BenchmarkConcurrentAddToBatch(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
// Background goroutine to drain pending periodically
drainDone := make(chan struct{})
go func() {
defer close(drainDone)
for {
select {
case <-batcher.done:
return
default:
batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.drainPending()
return true
})
time.Sleep(time.Millisecond) //nolint:forbidigo // benchmark drain loop
}
}
}()
ch := change.DERPMap()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
batcher.addToBatch(ch)
}
})
b.StopTimer()
// Cleanup
close(batcher.done)
<-drainDone
// Re-open done so the defer doesn't double-close
batcher.done = make(chan struct{})
})
}
}
// BenchmarkIsConnected measures the read throughput of IsConnected checks.
func BenchmarkIsConnected(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, _ := benchBatcher(nodeCount, 1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for i := range b.N {
id := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
_ = batcher.IsConnected(id)
}
})
}
}
// BenchmarkConnectedMap measures the cost of building the full connected map.
func BenchmarkConnectedMap(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, channels := benchBatcher(nodeCount, 1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
// Disconnect 10% of nodes for a realistic mix
for i := 1; i <= nodeCount; i++ {
if i%10 == 0 {
id := types.NodeID(i) //nolint:gosec // benchmark
if mc, ok := batcher.nodes.Load(id); ok {
mc.removeConnectionByChannel(channels[id])
mc.markDisconnected()
}
}
}
b.ResetTimer()
for range b.N {
_ = batcher.ConnectedMap()
}
})
}
}
// BenchmarkConnectionChurn measures the cost of add/remove connection cycling
// which simulates client reconnection patterns.
func BenchmarkConnectionChurn(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100, 1000} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, channels := benchBatcher(nodeCount, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for i := range b.N {
id := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
mc, ok := batcher.nodes.Load(id)
if !ok {
continue
}
// Remove old connection
oldCh := channels[id]
mc.removeConnectionByChannel(oldCh)
// Add new connection
newCh := make(chan *tailcfg.MapResponse, 10)
entry := &connectionEntry{
id: fmt.Sprintf("churn-%d", i),
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
channels[id] = newCh
}
})
}
}
// BenchmarkConcurrentSendAndChurn measures the combined cost of sends happening
// concurrently with connection churn - the hot path in production.
func BenchmarkConcurrentSendAndChurn(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
batcher, channels := benchBatcher(nodeCount, 100)
var mu sync.Mutex // protect channels map
stopChurn := make(chan struct{})
defer close(stopChurn)
// Background churn on 10% of nodes
go func() {
i := 0
for {
select {
case <-stopChurn:
return
default:
id := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
if i%10 == 0 { // only churn 10%
mc, ok := batcher.nodes.Load(id)
if ok {
mu.Lock()
oldCh := channels[id]
mu.Unlock()
mc.removeConnectionByChannel(oldCh)
newCh := make(chan *tailcfg.MapResponse, 100)
entry := &connectionEntry{
id: fmt.Sprintf("churn-%d", i),
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
mu.Lock()
channels[id] = newCh
mu.Unlock()
}
}
i++
}
}
}()
data := testMapResponse()
b.ResetTimer()
for range b.N {
batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
_ = mc.send(data)
return true
})
}
})
}
}
// ============================================================================
// Full Pipeline Benchmarks (with DB)
// ============================================================================
// BenchmarkAddNode measures the cost of adding nodes to the batcher,
// including initial MapResponse generation from a real database.
func BenchmarkAddNode(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
// Start consumers
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
b.ResetTimer()
for range b.N {
// Connect all nodes (measuring AddNode cost)
for i := range allNodes {
node := &allNodes[i]
_ = batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
}
b.StopTimer()
// Disconnect for next iteration
for i := range allNodes {
node := &allNodes[i]
batcher.RemoveNode(node.n.ID, node.ch)
}
// Drain channels
for i := range allNodes {
drain:
	for {
		select {
		case <-allNodes[i].ch:
		default:
			break drain
		}
	}
}
b.StartTimer()
}
})
}
}
// BenchmarkFullPipeline measures the full pipeline cost: addToBatch → processBatchedChanges
// → worker → generateMapResponse → send, with real nodes from a database.
func BenchmarkFullPipeline(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
// Start consumers
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
// Connect all nodes first
for i := range allNodes {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
b.Fatalf("failed to add node %d: %v", i, err)
}
}
// Wait for initial maps to settle
time.Sleep(200 * time.Millisecond) //nolint:forbidigo // benchmark coordination
b.ResetTimer()
for range b.N {
batcher.AddWork(change.DERPMap())
// Allow workers to process (the batcher tick is what normally
// triggers processBatchedChanges, but for benchmarks we need
// to give the system time to process)
time.Sleep(20 * time.Millisecond) //nolint:forbidigo // benchmark coordination
}
})
}
}
// BenchmarkMapResponseFromChange measures the cost of synchronous
// MapResponse generation for individual nodes.
func BenchmarkMapResponseFromChange(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 100} {
b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
// Start consumers
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
// Connect all nodes
for i := range allNodes {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
b.Fatalf("failed to add node %d: %v", i, err)
}
}
time.Sleep(200 * time.Millisecond) //nolint:forbidigo // benchmark coordination
ch := change.DERPMap()
b.ResetTimer()
for i := range b.N {
nodeIdx := i % len(allNodes)
_, _ = batcher.MapResponseFromChange(allNodes[nodeIdx].n.ID, ch)
}
})
}
}
================================================
FILE: hscontrol/mapper/batcher_concurrency_test.go
================================================
package mapper
// Concurrency, lifecycle, and scale tests for the batcher.
// Tests in this file exercise:
// - addToBatch and processBatchedChanges under concurrent access
// - cleanupOfflineNodes correctness
// - Batcher lifecycle (Close, shutdown, double-close)
// - 1000-node scale testing of batching and channel mechanics
//
// Most tests use the lightweight batcher helper which creates a batcher with
// pre-populated nodes but NO database, enabling fast 1000-node tests.
import (
"fmt"
"runtime"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
)
// ============================================================================
// Lightweight Batcher Helper (no database needed)
// ============================================================================
// lightweightBatcher provides a batcher with pre-populated nodes for testing
// the batching, channel, and concurrency mechanics without database overhead.
type lightweightBatcher struct {
b *Batcher
channels map[types.NodeID]chan *tailcfg.MapResponse
}
// setupLightweightBatcher creates a batcher with nodeCount pre-populated nodes.
// Each node gets a buffered channel of bufferSize. The batcher's worker loop
// is NOT started (no doWork), so addToBatch/processBatchedChanges can be tested
// in isolation. Call Batcher.Start() if you need the full loop.
func setupLightweightBatcher(t *testing.T, nodeCount, bufferSize int) *lightweightBatcher {
t.Helper()
b := &Batcher{
tick: time.NewTicker(10 * time.Millisecond),
workers: 4,
workCh: make(chan work, 4*200),
nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
done: make(chan struct{}),
}
channels := make(map[types.NodeID]chan *tailcfg.MapResponse, nodeCount)
for i := 1; i <= nodeCount; i++ {
id := types.NodeID(i) //nolint:gosec // test with small controlled values
mc := newMultiChannelNodeConn(id, nil) // nil mapper is fine for channel tests
ch := make(chan *tailcfg.MapResponse, bufferSize)
entry := &connectionEntry{
id: fmt.Sprintf("conn-%d", i),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
b.nodes.Store(id, mc)
channels[id] = ch
}
b.totalNodes.Store(int64(nodeCount))
return &lightweightBatcher{b: b, channels: channels}
}
func (lb *lightweightBatcher) cleanup() {
lb.b.doneOnce.Do(func() {
close(lb.b.done)
})
lb.b.tick.Stop()
}
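// Usage sketch, mirroring the tests below (counts illustrative):
//
//	lb := setupLightweightBatcher(t, 10, 10)
//	defer lb.cleanup()
//	lb.b.addToBatch(change.DERPMap())
//	lb.b.processBatchedChanges()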
// countTotalPending counts total pending change entries across all nodes.
func countTotalPending(b *Batcher) int {
count := 0
b.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.pendingMu.Lock()
count += len(nc.pending)
nc.pendingMu.Unlock()
return true
})
return count
}
// countNodesPending counts how many nodes have pending changes.
func countNodesPending(b *Batcher) int {
count := 0
b.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.pendingMu.Lock()
hasPending := len(nc.pending) > 0
nc.pendingMu.Unlock()
if hasPending {
count++
}
return true
})
return count
}
// getPendingForNode returns pending changes for a specific node.
func getPendingForNode(b *Batcher, id types.NodeID) []change.Change {
nc, ok := b.nodes.Load(id)
if !ok {
return nil
}
nc.pendingMu.Lock()
pending := make([]change.Change, len(nc.pending))
copy(pending, nc.pending)
nc.pendingMu.Unlock()
return pending
}
// runConcurrently runs n goroutines executing fn, waits for all to finish,
// and returns the number of panics caught.
func runConcurrently(t *testing.T, n int, fn func(i int)) int {
t.Helper()
var (
wg sync.WaitGroup
panics atomic.Int64
)
for i := range n {
wg.Add(1)
go func(idx int) {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
panics.Add(1)
t.Logf("panic in goroutine %d: %v", idx, r)
}
}()
fn(idx)
}(i)
}
wg.Wait()
return int(panics.Load())
}
// runConcurrentlyWithTimeout is like runConcurrently but fails if not done
// within timeout (deadlock detection).
func runConcurrentlyWithTimeout(t *testing.T, n int, timeout time.Duration, fn func(i int)) int {
t.Helper()
done := make(chan int, 1)
go func() {
done <- runConcurrently(t, n, fn)
}()
select {
case panics := <-done:
return panics
case <-time.After(timeout):
t.Fatalf("deadlock detected: %d goroutines did not complete within %v", n, timeout)
return -1
}
}
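// Usage sketch (goroutine count and timeout illustrative):
//
//	panics := runConcurrentlyWithTimeout(t, 100, 10*time.Second, func(i int) {
//		lb.b.addToBatch(change.DERPMap())
//	})
//	require.Zero(t, panics)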
// ============================================================================
// addToBatch Concurrency Tests
// ============================================================================
// TestAddToBatch_ConcurrentTargeted_NoDataLoss verifies that concurrent
// targeted addToBatch calls do not lose data.
//
// Previously (Bug #1): addToBatch used LoadOrStore→append→Store on a
// separate pendingChanges map, which was NOT atomic. Two goroutines could
// Load the same slice, both append, and one Store would overwrite the other.
// FIX: pendingChanges moved into multiChannelNodeConn with mutex protection,
// eliminating the race entirely.
func TestAddToBatch_ConcurrentTargeted_NoDataLoss(t *testing.T) {
lb := setupLightweightBatcher(t, 10, 10)
defer lb.cleanup()
targetNode := types.NodeID(1)
const goroutines = 100
// Each goroutine adds one targeted change to the same node
panics := runConcurrentlyWithTimeout(t, goroutines, 10*time.Second, func(i int) {
ch := change.Change{
Reason: fmt.Sprintf("targeted-%d", i),
TargetNode: targetNode,
PeerPatches: []*tailcfg.PeerChange{
{NodeID: tailcfg.NodeID(i + 100)}, //nolint:gosec // test
},
}
lb.b.addToBatch(ch)
})
require.Zero(t, panics, "no panics expected")
// All 100 changes MUST be present. The Load→append→Store race causes
// data loss: typically 30-50% of changes are silently dropped.
pending := getPendingForNode(lb.b, targetNode)
t.Logf("targeted changes: expected=%d, got=%d (lost=%d)",
goroutines, len(pending), goroutines-len(pending))
assert.Len(t, pending, goroutines,
"addToBatch lost %d/%d targeted changes under concurrent access",
goroutines-len(pending), goroutines)
}
// TestAddToBatch_ConcurrentBroadcast verifies that concurrent broadcasts
// distribute changes to all nodes.
func TestAddToBatch_ConcurrentBroadcast(t *testing.T) {
lb := setupLightweightBatcher(t, 50, 10)
defer lb.cleanup()
const goroutines = 50
panics := runConcurrentlyWithTimeout(t, goroutines, 10*time.Second, func(_ int) {
lb.b.addToBatch(change.DERPMap())
})
assert.Zero(t, panics, "no panics expected")
// Each node should have received some DERP changes
nodesWithPending := countNodesPending(lb.b)
t.Logf("nodes with pending changes: %d/%d", nodesWithPending, 50)
assert.Positive(t, nodesWithPending,
"at least some nodes should have pending changes after broadcast")
}
// TestAddToBatch_FullUpdateOverrides verifies that a FullUpdate replaces
// all pending changes for every node.
func TestAddToBatch_FullUpdateOverrides(t *testing.T) {
lb := setupLightweightBatcher(t, 10, 10)
defer lb.cleanup()
// Add some targeted changes first
for i := 1; i <= 10; i++ {
lb.b.addToBatch(change.Change{
Reason: "pre-existing",
TargetNode: types.NodeID(i), //nolint:gosec // test with small values
PeerPatches: []*tailcfg.PeerChange{
{NodeID: tailcfg.NodeID(100 + i)}, //nolint:gosec // test with small values
},
})
}
// Full update should replace all pending changes
lb.b.addToBatch(change.FullUpdate())
// Every node should have exactly one pending change (the FullUpdate)
lb.b.nodes.Range(func(id types.NodeID, _ *multiChannelNodeConn) bool {
pending := getPendingForNode(lb.b, id)
require.Len(t, pending, 1, "node %d should have exactly 1 pending (FullUpdate)", id)
assert.True(t, pending[0].IsFull(), "pending change should be a full update")
return true
})
}
// TestAddToBatch_NodeRemovalCleanup verifies that PeersRemoved in a change
// cleans up the node from the batcher's internal state.
func TestAddToBatch_NodeRemovalCleanup(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
removedNode := types.NodeID(3)
// Verify node exists before removal
_, exists := lb.b.nodes.Load(removedNode)
require.True(t, exists, "node 3 should exist before removal")
// Send a change that includes node 3 in PeersRemoved
lb.b.addToBatch(change.Change{
Reason: "node deleted",
PeersRemoved: []types.NodeID{removedNode},
})
// Node should be removed from the nodes map
_, exists = lb.b.nodes.Load(removedNode)
assert.False(t, exists, "node 3 should be removed from nodes map")
pending := getPendingForNode(lb.b, removedNode)
assert.Empty(t, pending, "node 3 should have no pending changes")
assert.Equal(t, int64(4), lb.b.totalNodes.Load(), "total nodes should be decremented")
}
// ============================================================================
// processBatchedChanges Tests
// ============================================================================
// TestProcessBatchedChanges_QueuesWork verifies that processBatchedChanges
// moves pending changes to the work queue and clears them.
func TestProcessBatchedChanges_QueuesWork(t *testing.T) {
lb := setupLightweightBatcher(t, 3, 10)
defer lb.cleanup()
// Add pending changes for each node
for i := 1; i <= 3; i++ {
if nc, ok := lb.b.nodes.Load(types.NodeID(i)); ok { //nolint:gosec // test
nc.appendPending(change.DERPMap())
}
}
lb.b.processBatchedChanges()
// Pending should be cleared
assert.Equal(t, 0, countNodesPending(lb.b),
"all pending changes should be cleared after processing")
// Work items should be on the work channel
assert.Len(t, lb.b.workCh, 3,
"3 work items should be queued")
}
// TestProcessBatchedChanges_ConcurrentAdd_NoDataLoss verifies that concurrent
// addToBatch and processBatchedChanges calls do not lose data.
//
// Previously (Bug #2): processBatchedChanges used Range→Delete on a separate
// pendingChanges map. A concurrent addToBatch could Store new changes between
// Range reading the key and Delete removing it, losing freshly-stored changes.
// FIX: pendingChanges moved into multiChannelNodeConn with atomic drainPending(),
// eliminating the race entirely.
func TestProcessBatchedChanges_ConcurrentAdd_NoDataLoss(t *testing.T) {
// Use a single node to maximize contention on one key.
lb := setupLightweightBatcher(t, 1, 10)
defer lb.cleanup()
// Use a large work channel so processBatchedChanges never blocks.
lb.b.workCh = make(chan work, 100000)
const iterations = 500
var addedCount atomic.Int64
var wg sync.WaitGroup
// Goroutine 1: continuously add targeted changes to node 1
wg.Go(func() {
for i := range iterations {
lb.b.addToBatch(change.Change{
Reason: fmt.Sprintf("add-%d", i),
TargetNode: types.NodeID(1),
PeerPatches: []*tailcfg.PeerChange{
{NodeID: tailcfg.NodeID(i + 100)}, //nolint:gosec // test
},
})
addedCount.Add(1)
}
})
// Goroutine 2: continuously process batched changes
wg.Go(func() {
for range iterations {
lb.b.processBatchedChanges()
}
})
wg.Wait()
// One final process to flush any remaining
lb.b.processBatchedChanges()
// Count total changes across all bundled work items in the channel.
// Each work item may contain multiple changes since processBatchedChanges
// bundles all pending changes per node into a single work item.
queuedChanges := 0
workItems := len(lb.b.workCh)
for range workItems {
w := <-lb.b.workCh
queuedChanges += len(w.changes)
}
// Also count any still-pending
remaining := len(getPendingForNode(lb.b, types.NodeID(1)))
total := queuedChanges + remaining
added := int(addedCount.Load())
t.Logf("added=%d, queued_changes=%d (in %d work items), still_pending=%d, total_accounted=%d, lost=%d",
added, queuedChanges, workItems, remaining, total, added-total)
// Every added change must either be in the work queue or still pending.
assert.Equal(t, added, total,
	"processBatchedChanges lost %d changes (%d added vs %d accounted) "+
		"under concurrent access",
	added-total, added, total)
}
// TestProcessBatchedChanges_EmptyPending verifies processBatchedChanges
// is a no-op when there are no pending changes.
func TestProcessBatchedChanges_EmptyPending(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
lb.b.processBatchedChanges()
assert.Empty(t, lb.b.workCh,
"no work should be queued when there are no pending changes")
}
// TestProcessBatchedChanges_BundlesChangesPerNode verifies that multiple
// pending changes for the same node are bundled into a single work item.
// This prevents out-of-order delivery when different workers pick up
// separate changes for the same node.
func TestProcessBatchedChanges_BundlesChangesPerNode(t *testing.T) {
lb := setupLightweightBatcher(t, 3, 10)
defer lb.cleanup()
// Add multiple pending changes for node 1
if nc, ok := lb.b.nodes.Load(types.NodeID(1)); ok {
nc.appendPending(change.DERPMap())
nc.appendPending(change.DNSConfig())
nc.appendPending(change.PolicyOnly())
}
// Single change for node 2
if nc, ok := lb.b.nodes.Load(types.NodeID(2)); ok {
nc.appendPending(change.DERPMap())
}
lb.b.processBatchedChanges()
// Should produce exactly 2 work items: one per node with pending changes.
// Node 3 had no pending changes, so no work item for it.
assert.Len(t, lb.b.workCh, 2,
"should produce one work item per node, not per change")
// Drain and verify the bundled changes are intact
totalChanges := 0
for range 2 {
w := <-lb.b.workCh
totalChanges += len(w.changes)
if w.nodeID == types.NodeID(1) {
assert.Len(t, w.changes, 3,
"node 1's work item should contain all 3 changes")
} else {
assert.Len(t, w.changes, 1,
"node 2's work item should contain 1 change")
}
}
assert.Equal(t, 4, totalChanges, "total changes across all work items")
}
// TestWorkMu_PreventsInterTickRace verifies that workMu serializes change
// processing across consecutive batch ticks. Without workMu, two workers
// could process bundles from tick N and tick N+1 concurrently for the same
// node, causing out-of-order delivery and races on lastSentPeers.
func TestWorkMu_PreventsInterTickRace(t *testing.T) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
mc := newMultiChannelNodeConn(1, nil)
ch := make(chan *tailcfg.MapResponse, 100)
entry := &connectionEntry{
id: "test",
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
// Track the order in which work completes
var (
order []int
mu sync.Mutex
)
record := func(id int) {
mu.Lock()
order = append(order, id)
mu.Unlock()
}
var wg sync.WaitGroup
// Simulate two workers grabbing consecutive tick bundles.
// Worker 1 holds workMu and sleeps, worker 2 must wait.
wg.Go(func() {
mc.workMu.Lock()
// Simulate processing time for tick N's bundle
time.Sleep(50 * time.Millisecond) //nolint:forbidigo
record(1)
mc.workMu.Unlock()
})
// Small delay so worker 1 grabs the lock first
time.Sleep(5 * time.Millisecond) //nolint:forbidigo
wg.Go(func() {
mc.workMu.Lock()
record(2)
mc.workMu.Unlock()
})
wg.Wait()
mu.Lock()
defer mu.Unlock()
require.Len(t, order, 2)
assert.Equal(t, 1, order[0], "worker 1 (tick N) should complete first")
assert.Equal(t, 2, order[1], "worker 2 (tick N+1) should complete second")
}
// ============================================================================
// cleanupOfflineNodes Tests
// ============================================================================
// TestCleanupOfflineNodes_RemovesOld verifies that nodes offline longer
// than the 15-minute threshold are removed.
func TestCleanupOfflineNodes_RemovesOld(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
// Remove node 3's active connections and mark it disconnected 20 minutes ago
if mc, ok := lb.b.nodes.Load(types.NodeID(3)); ok {
ch := lb.channels[types.NodeID(3)]
mc.removeConnectionByChannel(ch)
oldTime := time.Now().Add(-20 * time.Minute)
mc.disconnectedAt.Store(&oldTime)
}
lb.b.cleanupOfflineNodes()
_, exists := lb.b.nodes.Load(types.NodeID(3))
assert.False(t, exists, "node 3 should be cleaned up (offline >15min)")
// Other nodes should still be present
_, exists = lb.b.nodes.Load(types.NodeID(1))
assert.True(t, exists, "node 1 should still exist")
}
// TestCleanupOfflineNodes_KeepsRecent verifies that recently disconnected
// nodes are not cleaned up.
func TestCleanupOfflineNodes_KeepsRecent(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
// Remove node 3's connections and mark it disconnected 5 minutes ago (under threshold)
if mc, ok := lb.b.nodes.Load(types.NodeID(3)); ok {
ch := lb.channels[types.NodeID(3)]
mc.removeConnectionByChannel(ch)
recentTime := time.Now().Add(-5 * time.Minute)
mc.disconnectedAt.Store(&recentTime)
}
lb.b.cleanupOfflineNodes()
_, exists := lb.b.nodes.Load(types.NodeID(3))
assert.True(t, exists, "node 3 should NOT be cleaned up (offline <15min)")
}
// TestCleanupOfflineNodes_KeepsActive verifies that nodes with active
// connections are never cleaned up, even if disconnect time is set.
func TestCleanupOfflineNodes_KeepsActive(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
// Set old disconnect time but keep the connection active
if mc, ok := lb.b.nodes.Load(types.NodeID(3)); ok {
oldTime := time.Now().Add(-20 * time.Minute)
mc.disconnectedAt.Store(&oldTime)
}
// Don't remove connection - node still has active connections
lb.b.cleanupOfflineNodes()
_, exists := lb.b.nodes.Load(types.NodeID(3))
assert.True(t, exists,
"node 3 should NOT be cleaned up (still has active connections)")
}
// ============================================================================
// Batcher Lifecycle Tests
// ============================================================================
// TestBatcher_CloseStopsWorkers verifies that Close() signals workers to stop
// and doesn't deadlock.
func TestBatcher_CloseStopsWorkers(t *testing.T) {
lb := setupLightweightBatcher(t, 3, 10)
// Start workers
lb.b.Start()
// Queue some work
if nc, ok := lb.b.nodes.Load(types.NodeID(1)); ok {
nc.appendPending(change.DERPMap())
}
lb.b.processBatchedChanges()
// Close should not deadlock
done := make(chan struct{})
go func() {
lb.b.Close()
close(done)
}()
select {
case <-done:
// Success
case <-time.After(5 * time.Second):
t.Fatal("Close() deadlocked")
}
}
// TestBatcher_CloseMultipleTimes_DoubleClosePanic exercises Bug #4:
// multiChannelNodeConn.close() has no idempotency guard. Calling Close()
// concurrently triggers close() on the same channels multiple times,
// panicking with "close of closed channel".
//
// BUG: batcher_lockfree.go:555-565 - close() calls close(conn.c) with no guard
// FIX: Add sync.Once or atomic.Bool to multiChannelNodeConn.close().
func TestBatcher_CloseMultipleTimes_DoubleClosePanic(t *testing.T) {
lb := setupLightweightBatcher(t, 3, 10)
lb.b.Start()
// Close multiple times concurrently.
// The done channel close is guarded by sync.Once and workCh is never
// closed, so neither should panic. Without an idempotency guard, node
// connection close() would panic with "close of closed channel".
panics := runConcurrently(t, 10, func(_ int) {
lb.b.Close()
})
assert.Zero(t, panics,
"BUG #4: %d panics from concurrent Close() due to "+
"multiChannelNodeConn.close() lacking idempotency guard. "+
"Fix: add sync.Once or atomic.Bool to close()", panics)
}
// TestBatcher_MapResponseDuringShutdown verifies that MapResponseFromChange
// returns ErrBatcherShuttingDown when the batcher is closed.
func TestBatcher_MapResponseDuringShutdown(t *testing.T) {
lb := setupLightweightBatcher(t, 3, 10)
// Close the done channel
close(lb.b.done)
_, err := lb.b.MapResponseFromChange(types.NodeID(1), change.DERPMap())
assert.ErrorIs(t, err, ErrBatcherShuttingDown)
}
// TestBatcher_IsConnectedReflectsState verifies IsConnected accurately
// reflects the connection state of nodes.
func TestBatcher_IsConnectedReflectsState(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
// All nodes should be connected
for i := 1; i <= 5; i++ {
assert.True(t, lb.b.IsConnected(types.NodeID(i)), //nolint:gosec // test
"node %d should be connected", i)
}
// Non-existent node should not be connected
assert.False(t, lb.b.IsConnected(types.NodeID(999)))
// Disconnect node 3 (remove connection + mark disconnected)
if mc, ok := lb.b.nodes.Load(types.NodeID(3)); ok {
mc.removeConnectionByChannel(lb.channels[types.NodeID(3)])
mc.markDisconnected()
}
assert.False(t, lb.b.IsConnected(types.NodeID(3)),
"node 3 should not be connected after disconnection")
// Other nodes should still be connected
assert.True(t, lb.b.IsConnected(types.NodeID(1)))
assert.True(t, lb.b.IsConnected(types.NodeID(5)))
}
// TestBatcher_ConnectedMapConsistency verifies ConnectedMap returns accurate
// state for all nodes.
func TestBatcher_ConnectedMapConsistency(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
// Disconnect node 2
if mc, ok := lb.b.nodes.Load(types.NodeID(2)); ok {
mc.removeConnectionByChannel(lb.channels[types.NodeID(2)])
mc.markDisconnected()
}
cm := lb.b.ConnectedMap()
// Connected nodes
for _, id := range []types.NodeID{1, 3, 4, 5} {
val, ok := cm.Load(id)
assert.True(t, ok, "node %d should be in ConnectedMap", id)
assert.True(t, val, "node %d should be connected", id)
}
// Disconnected node
val, ok := cm.Load(types.NodeID(2))
assert.True(t, ok, "node 2 should be in ConnectedMap")
assert.False(t, val, "node 2 should be disconnected")
}
// ============================================================================
// Bug Reproduction / Regression Tests (they fail if the corresponding bug regresses)
// ============================================================================
// TestBug3_CleanupOfflineNodes_TOCTOU exercises Bug #3, the TOCTOU race in
// cleanupOfflineNodes. Without the Compute() fix, the old code did:
//
// 1. Range connected map → collect candidates
// 2. Load node → check hasActiveConnections() == false
// 3. Delete node
//
// Between steps 2 and 3, AddNode could reconnect the node via
// LoadOrStore, adding a connection to the existing entry. The
// subsequent Delete would then remove the live reconnected node.
//
// FIX: Use Compute() on b.nodes for atomic check-and-delete. Inside
// the Compute closure, hasActiveConnections() is checked and the
// entry is only deleted if still inactive. A concurrent AddNode that
// calls addConnection() on the same entry makes hasActiveConnections()
// return true, causing Compute to cancel the delete.
func TestBug3_CleanupOfflineNodes_TOCTOU(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
targetNode := types.NodeID(3)
// Remove node 3's active connections and mark it disconnected >15 minutes ago
if mc, ok := lb.b.nodes.Load(targetNode); ok {
ch := lb.channels[targetNode]
mc.removeConnectionByChannel(ch)
oldTime := time.Now().Add(-20 * time.Minute)
mc.disconnectedAt.Store(&oldTime)
}
// Verify node 3 has no active connections before we start.
if mc, ok := lb.b.nodes.Load(targetNode); ok {
require.False(t, mc.hasActiveConnections(),
"precondition: node 3 should have no active connections")
}
// Simulate a reconnection that happens BEFORE cleanup's Compute() runs.
// With the Compute() fix, the atomic check inside Compute sees
// hasActiveConnections()==true and cancels the delete.
mc, exists := lb.b.nodes.Load(targetNode)
require.True(t, exists, "node 3 should exist before reconnection")
newCh := make(chan *tailcfg.MapResponse, 10)
entry := &connectionEntry{
id: "reconnected",
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
mc.markConnected()
lb.channels[targetNode] = newCh
// Now run cleanup. Node 3 is in the candidates list (old disconnect
// time) but has been reconnected. The Compute() fix should see the
// active connection and cancel the delete.
lb.b.cleanupOfflineNodes()
// Node 3 MUST still exist because it has an active connection.
_, stillExists := lb.b.nodes.Load(targetNode)
assert.True(t, stillExists,
"BUG #3: cleanupOfflineNodes deleted node %d despite it having an active "+
"connection. The Compute() fix should atomically check "+
"hasActiveConnections() and cancel the delete.",
targetNode)
// Also verify the concurrent case: cleanup and reconnection racing.
// Set up node 3 as offline again.
mc.removeConnectionByChannel(newCh)
oldTime2 := time.Now().Add(-20 * time.Minute)
mc.disconnectedAt.Store(&oldTime2)
var wg sync.WaitGroup
// Run 100 iterations of concurrent cleanup + reconnection.
// With Compute(), either cleanup wins (node deleted, LoadOrStore
// recreates) or reconnection wins (Compute sees active conn, cancels).
// Either way the node must exist after both complete.
for range 100 {
wg.Go(func() {
// Simulate reconnection via addConnection (like AddNode does)
if mc, ok := lb.b.nodes.Load(targetNode); ok {
reconnCh := make(chan *tailcfg.MapResponse, 10)
reconnEntry := &connectionEntry{
id: "race-reconn",
c: reconnCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
reconnEntry.lastUsed.Store(time.Now().Unix())
mc.addConnection(reconnEntry)
mc.markConnected()
}
})
wg.Go(func() {
lb.b.cleanupOfflineNodes()
})
}
wg.Wait()
}
// TestBug5_WorkerPanicKillsWorkerPermanently exercises Bug #5:
// If b.nodes.Load() returns exists=true but a nil *multiChannelNodeConn,
// the worker would panic on a nil pointer dereference. Without nil guards,
// this kills the worker goroutine permanently (no recover), reducing
// throughput and eventually deadlocking when all workers are dead.
//
// BUG: batcher_lockfree.go worker() - no nil check after b.nodes.Load()
// FIX: Add nil guard: `exists && nc != nil` in both sync and async paths.
func TestBug5_WorkerPanicKillsWorkerPermanently(t *testing.T) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 3, 10)
defer lb.cleanup()
lb.b.workers = 2
lb.b.Start()
// Give workers time to start
time.Sleep(50 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
// Store a nil value in b.nodes for a specific node ID.
// This simulates a race where a node entry exists but the value is nil
// (e.g., concurrent cleanup setting nil before deletion).
nilNodeID := types.NodeID(55555)
lb.b.nodes.Store(nilNodeID, nil)
// Queue async work (resultCh=nil) targeting the nil node.
// Without the nil guard, this would panic: nc.change(...) on nil nc.
for range 10 {
lb.b.queueWork(work{
changes: []change.Change{change.DERPMap()},
nodeID: nilNodeID,
})
}
// Queue sync work (with resultCh) targeting the nil node.
// Without the nil guard, this would panic: generateMapResponse(nc, ...)
// on nil nc.
for range 5 {
resultCh := make(chan workResult, 1)
lb.b.queueWork(work{
changes: []change.Change{change.DERPMap()},
nodeID: nilNodeID,
resultCh: resultCh,
})
// Read the result so workers don't block.
select {
case res := <-resultCh:
// With nil guard, result should have nil mapResponse (no work done).
assert.Nil(t, res.mapResponse,
"sync work for nil node should return nil mapResponse")
case <-time.After(2 * time.Second):
t.Fatal("timed out waiting for sync work result — worker may have panicked")
}
}
// Wait for async work to drain
time.Sleep(100 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
// Now queue valid work for a real node to prove workers are still alive.
beforeValid := lb.b.workProcessed.Load()
for range 5 {
lb.b.queueWork(work{
changes: []change.Change{change.DERPMap()},
nodeID: types.NodeID(1),
})
}
time.Sleep(200 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
afterValid := lb.b.workProcessed.Load()
validProcessed := afterValid - beforeValid
t.Logf("valid work processed after nil-node work: %d/5", validProcessed)
assert.Equal(t, int64(5), validProcessed,
"workers must remain functional after encountering nil node entries")
}
// TestBug6_StartCalledMultipleTimes_GoroutineLeak exercises Bug #6:
// Start() used to create a new done channel and launch doWork() every time,
// with no guard against multiple calls. Each call spawned (workers+1)
// goroutines that never got cleaned up.
//
// BUG: batcher_lockfree.go:163-166 - Start() had no "already started" check
// FIX: b.started.CompareAndSwap(false, true) makes repeat Start() calls no-ops.
func TestBug6_StartCalledMultipleTimes_GoroutineLeak(t *testing.T) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 3, 10)
lb.b.workers = 2
goroutinesBefore := runtime.NumGoroutine()
// Call Start() once - this should launch (workers + 1) goroutines
// (1 for doWork + workers for worker())
lb.b.Start()
time.Sleep(50 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
goroutinesAfterFirst := runtime.NumGoroutine()
firstStartDelta := goroutinesAfterFirst - goroutinesBefore
t.Logf("goroutines: before=%d, after_first_Start=%d, delta=%d",
goroutinesBefore, goroutinesAfterFirst, firstStartDelta)
// Call Start() again - this SHOULD be a no-op
// BUG (pre-fix): it created a NEW done channel (orphaning goroutines
// listening on the old one) and launched another doWork()+workers set
lb.b.Start()
time.Sleep(50 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
goroutinesAfterSecond := runtime.NumGoroutine()
secondStartDelta := goroutinesAfterSecond - goroutinesAfterFirst
t.Logf("goroutines: after_second_Start=%d, delta=%d (should be 0)",
goroutinesAfterSecond, secondStartDelta)
// Call Start() a third time
lb.b.Start()
time.Sleep(50 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
goroutinesAfterThird := runtime.NumGoroutine()
thirdStartDelta := goroutinesAfterThird - goroutinesAfterSecond
t.Logf("goroutines: after_third_Start=%d, delta=%d (should be 0)",
goroutinesAfterThird, thirdStartDelta)
// Close() only closes the LAST done channel, leaving earlier goroutines leaked
lb.b.Close()
time.Sleep(100 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
goroutinesAfterClose := runtime.NumGoroutine()
t.Logf("goroutines after Close: %d (leaked: %d)",
goroutinesAfterClose, goroutinesAfterClose-goroutinesBefore)
// Second Start() should NOT have created new goroutines
assert.Zero(t, secondStartDelta,
"BUG #6: second Start() call leaked %d goroutines. "+
"Start() has no idempotency guard, each call spawns new goroutines. "+
"Fix: add sync.Once or atomic.Bool to prevent multiple Start() calls",
secondStartDelta)
}
// TestBug7_CleanupOfflineNodes_PendingChangesCleanedStructurally verifies that
// pending changes are automatically cleaned up when a node is removed from the
// nodes map, because pending state lives inside multiChannelNodeConn.
//
// Previously (Bug #7): pendingChanges was a separate map that was NOT cleaned
// when cleanupOfflineNodes removed a node, causing orphaned entries.
// FIX: pendingChanges moved into multiChannelNodeConn — deleting the node
// from b.nodes automatically drops its pending changes.
func TestBug7_CleanupOfflineNodes_PendingChangesCleanedStructurally(t *testing.T) {
lb := setupLightweightBatcher(t, 5, 10)
defer lb.cleanup()
targetNode := types.NodeID(3)
// Remove node 3's connections and mark it disconnected >15 minutes ago
if mc, ok := lb.b.nodes.Load(targetNode); ok {
ch := lb.channels[targetNode]
mc.removeConnectionByChannel(ch)
oldTime := time.Now().Add(-20 * time.Minute)
mc.disconnectedAt.Store(&oldTime)
}
// Add pending changes for node 3 before cleanup
if nc, ok := lb.b.nodes.Load(targetNode); ok {
nc.appendPending(change.DERPMap())
}
// Verify pending exists before cleanup
pending := getPendingForNode(lb.b, targetNode)
require.Len(t, pending, 1, "node 3 should have pending changes before cleanup")
// Run cleanup
lb.b.cleanupOfflineNodes()
// Node 3 should be removed from the nodes map
_, existsInNodes := lb.b.nodes.Load(targetNode)
assert.False(t, existsInNodes, "node 3 should be removed from nodes map")
// Pending changes are structurally gone because the node was deleted.
// getPendingForNode returns nil for non-existent nodes.
pendingAfter := getPendingForNode(lb.b, targetNode)
assert.Empty(t, pendingAfter,
"pending changes should be gone after node deletion (structural fix)")
}
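// The structural fix verified above can be sketched as follows; the type and
// names are hypothetical, not the real multiChannelNodeConn. Pending changes
// live inside the per-node value, guarded by its own mutex, so removing the
// node from the map (nodes.Delete(id)) drops its pending state in the same
// step and no separate pendingChanges map can go stale.
//
//nolint:unused
type pendingOwnerSketch struct {
mu sync.Mutex
pending []change.Change
}
// appendPendingSketch serializes the read-modify-write that Bug #1 raced on.
//
//nolint:unused
func (p *pendingOwnerSketch) appendPendingSketch(c change.Change) {
p.mu.Lock()
defer p.mu.Unlock()
p.pending = append(p.pending, c)
}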
// TestBug8_SerialTimeoutUnderWriteLock exercises Bug #8 (performance):
// multiChannelNodeConn.send() originally held the write lock for the ENTIRE
// duration of sending to all connections. Each send has a 50ms timeout for
// stale connections. With N stale connections, the write lock was held for
// N*50ms, blocking all addConnection/removeConnection calls.
//
// BUG: mutex.Lock() held during all conn.send() calls, each with 50ms timeout.
//
// 5 stale connections = 250ms lock hold, blocking addConnection/removeConnection.
//
// FIX: Snapshot connections under read lock, release, send without any lock
// (timeouts happen here), then write-lock only to remove failed connections.
// The lock is now held only for O(N) pointer copies, not for N*50ms of I/O.
func TestBug8_SerialTimeoutUnderWriteLock(t *testing.T) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
mc := newMultiChannelNodeConn(1, nil)
// Add 5 stale connections (unbuffered, no reader, so each send times out after 50ms)
const staleCount = 5
for i := range staleCount {
ch := make(chan *tailcfg.MapResponse) // unbuffered
mc.addConnection(makeConnectionEntry(fmt.Sprintf("stale-%d", i), ch))
}
// The key test: verify that the mutex is NOT held during the slow sends.
// We do this by trying to acquire the lock from another goroutine during
// the send. With the old code (lock held for 250ms), this would block.
// With the fix, the lock is free during sends.
lockAcquired := make(chan time.Duration, 1)
go func() {
// Give send() a moment to start (it will be in the unlocked send window)
time.Sleep(20 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
// Try to acquire the write lock. It should succeed quickly because
// the lock is only held briefly for the snapshot and cleanup.
start := time.Now()
mc.mutex.Lock()
lockWait := time.Since(start)
mc.mutex.Unlock()
lockAcquired <- lockWait
}()
// Run send() with 5 stale connections. Total wall time will be ~250ms
// (5 * 50ms serial timeouts), but the lock should be free during sends.
_ = mc.send(testMapResponse())
lockWait := <-lockAcquired
t.Logf("lock acquisition during send() with %d stale connections waited %v",
staleCount, lockWait)
// The lock wait should be very short (<50ms) since the lock is released
// before sending. With the old code it would be ~230ms (250ms - 20ms sleep).
assert.Less(t, lockWait, 50*time.Millisecond,
"mutex was held for %v during send() with %d stale connections; "+
"lock should be released before sending to allow "+
"concurrent addConnection/removeConnection calls",
lockWait, staleCount)
}
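// The locking pattern this test verifies can be sketched as below; this is a
// hedged sketch over a plain channel slice, not the real
// multiChannelNodeConn.send implementation. The snapshot is taken under the
// read lock, the slow (50ms-timeout) sends run with no lock held, and the
// write lock is re-acquired only to remove the failed entries.
//
//nolint:unused
func sendSnapshotSketch(mu *sync.RWMutex, conns *[]chan *tailcfg.MapResponse, resp *tailcfg.MapResponse) {
mu.RLock()
snapshot := make([]chan *tailcfg.MapResponse, len(*conns))
copy(snapshot, *conns) // lock held only for O(N) pointer copies
mu.RUnlock()
var failed []chan *tailcfg.MapResponse
for _, ch := range snapshot {
select {
case ch <- resp:
case <-time.After(50 * time.Millisecond):
failed = append(failed, ch) // stale: no reader within the timeout
}
}
if len(failed) == 0 {
return
}
mu.Lock() // write lock only for the brief removal, never during sends
defer mu.Unlock()
for _, bad := range failed {
for i, ch := range *conns {
if ch == bad {
*conns = append((*conns)[:i], (*conns)[i+1:]...)
break
}
}
}
}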
// TestBug1_BroadcastNoDataLoss verifies that concurrent broadcast addToBatch
// calls do not lose data.
//
// Previously (Bug #1, broadcast path): Same Load→append→Store race as targeted
// changes, but on the broadcast code path within the Range callback.
// FIX: pendingChanges moved into multiChannelNodeConn with mutex protection.
func TestBug1_BroadcastNoDataLoss(t *testing.T) {
// Use many nodes so the Range iteration takes longer, widening the race window
lb := setupLightweightBatcher(t, 100, 10)
defer lb.cleanup()
const goroutines = 50
// Each goroutine broadcasts a DERPMap change to all 100 nodes
panics := runConcurrentlyWithTimeout(t, goroutines, 10*time.Second, func(_ int) {
lb.b.addToBatch(change.DERPMap())
})
require.Zero(t, panics, "no panics expected")
// Each of the 100 nodes should have exactly `goroutines` pending changes.
// The old Bug #1 race caused some nodes to end up with fewer.
var (
totalLost int
nodesWithLoss int
)
lb.b.nodes.Range(func(id types.NodeID, _ *multiChannelNodeConn) bool {
pending := getPendingForNode(lb.b, id)
if len(pending) < goroutines {
totalLost += goroutines - len(pending)
nodesWithLoss++
}
return true
})
t.Logf("broadcast data loss: %d total changes lost across %d/%d nodes",
totalLost, nodesWithLoss, 100)
assert.Zero(t, totalLost,
"broadcast lost %d changes across %d nodes under concurrent access",
totalLost, nodesWithLoss)
}
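// The racy pattern this test guards against, sketched with a hypothetical
// pendingMap (illustrative only, not the real field): two goroutines can
// Load the same slice, append to private copies, and the later Store
// silently discards the earlier goroutine's change.
//
//  old, _ := pendingMap.Load(id)     // both goroutines observe len == k
//  updated := append(old, newChange) // each builds its own k+1 slice
//  pendingMap.Store(id, updated)     // last writer wins; one change lost
//
// Serializing the read-modify-write behind the node's mutex (as
// appendPending now does) closes this window.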
// ============================================================================
// 1000-Node Scale Tests (lightweight, no DB)
// ============================================================================
// TestScale1000_AddToBatch_Broadcast verifies that broadcasting to 1000 nodes
// works correctly under concurrent access.
func TestScale1000_AddToBatch_Broadcast(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 10)
defer lb.cleanup()
const concurrentBroadcasts = 100
panics := runConcurrentlyWithTimeout(t, concurrentBroadcasts, 30*time.Second, func(_ int) {
lb.b.addToBatch(change.DERPMap())
})
assert.Zero(t, panics, "no panics expected")
nodesWithPending := countNodesPending(lb.b)
totalPending := countTotalPending(lb.b)
t.Logf("1000-node broadcast: %d/%d nodes have pending, %d total pending items",
nodesWithPending, 1000, totalPending)
// All 1000 nodes should have at least some pending changes
// (a Bug #1-style regression could drop some, but most should arrive)
assert.GreaterOrEqual(t, nodesWithPending, 900,
"at least 90%% of nodes should have pending changes")
}
// TestScale1000_ProcessBatchedWithConcurrentAdd tests processBatchedChanges
// running concurrently with addToBatch at 1000 nodes.
func TestScale1000_ProcessBatchedWithConcurrentAdd(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 10)
defer lb.cleanup()
// Use a large work channel to avoid blocking.
// 50 broadcasts × 1000 nodes = up to 50,000 work items.
lb.b.workCh = make(chan work, 100000)
var wg sync.WaitGroup
// Producer: add broadcasts
wg.Go(func() {
for range 50 {
lb.b.addToBatch(change.DERPMap())
}
})
// Consumer: process batched changes repeatedly
wg.Go(func() {
for range 50 {
lb.b.processBatchedChanges()
time.Sleep(1 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
}
})
done := make(chan struct{})
go func() {
wg.Wait()
close(done)
}()
select {
case <-done:
t.Logf("1000-node concurrent add+process completed without deadlock")
case <-time.After(30 * time.Second):
t.Fatal("deadlock detected in 1000-node concurrent add+process")
}
queuedWork := len(lb.b.workCh)
t.Logf("work items queued: %d", queuedWork)
assert.Positive(t, queuedWork, "should have queued some work items")
}
// TestScale1000_MultiChannelBroadcast tests broadcasting a MapResponse
// to 1000 nodes, each with 1-3 connections.
func TestScale1000_MultiChannelBroadcast(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
const (
nodeCount = 1000
bufferSize = 5
)
// Create nodes with varying connection counts
b := &Batcher{
tick: time.NewTicker(10 * time.Millisecond),
workers: 4,
workCh: make(chan work, 4*200),
nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
done: make(chan struct{}),
}
defer func() {
close(b.done)
b.tick.Stop()
}()
type nodeChannels struct {
channels []chan *tailcfg.MapResponse
}
allNodeChannels := make(map[types.NodeID]*nodeChannels, nodeCount)
for i := 1; i <= nodeCount; i++ {
id := types.NodeID(i) //nolint:gosec // test with small controlled values
mc := newMultiChannelNodeConn(id, nil)
connCount := 1 + (i % 3) // 1, 2, or 3 connections
nc := &nodeChannels{channels: make([]chan *tailcfg.MapResponse, connCount)}
for j := range connCount {
ch := make(chan *tailcfg.MapResponse, bufferSize)
nc.channels[j] = ch
entry := &connectionEntry{
id: fmt.Sprintf("conn-%d-%d", i, j),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
}
b.nodes.Store(id, mc)
allNodeChannels[id] = nc
}
// Broadcast to all nodes
data := testMapResponse()
var successCount, failCount atomic.Int64
start := time.Now()
b.nodes.Range(func(id types.NodeID, mc *multiChannelNodeConn) bool {
err := mc.send(data)
if err != nil {
failCount.Add(1)
} else {
successCount.Add(1)
}
return true
})
elapsed := time.Since(start)
t.Logf("broadcast to %d nodes: %d success, %d failures, took %v",
nodeCount, successCount.Load(), failCount.Load(), elapsed)
assert.Equal(t, int64(nodeCount), successCount.Load(),
"all nodes should receive broadcast successfully")
assert.Zero(t, failCount.Load(), "no broadcast failures expected")
// Verify at least some channels received data
receivedCount := 0
for _, nc := range allNodeChannels {
for _, ch := range nc.channels {
select {
case <-ch:
receivedCount++
default:
}
}
}
t.Logf("channels that received data: %d", receivedCount)
assert.Positive(t, receivedCount, "channels should have received broadcast data")
}
// TestScale1000_ConnectionChurn tests 1000 nodes with 10% churning connections
// while broadcasts are happening. Stable nodes should not lose data.
func TestScale1000_ConnectionChurn(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 20)
defer lb.cleanup()
const churnNodes = 100 // 10% of nodes churn
const churnCycles = 50
var (
panics atomic.Int64
wg sync.WaitGroup
)
// Churn goroutine: rapidly add/remove connections for nodes 901-1000
wg.Go(func() {
for cycle := range churnCycles {
for i := 901; i < 901+churnNodes; i++ {
id := types.NodeID(i) //nolint:gosec // test with small controlled values
mc, exists := lb.b.nodes.Load(id)
if !exists {
continue
}
// Remove old connection
oldCh := lb.channels[id]
mc.removeConnectionByChannel(oldCh)
// Add new connection
newCh := make(chan *tailcfg.MapResponse, 20)
entry := &connectionEntry{
id: fmt.Sprintf("churn-%d-%d", i, cycle),
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
lb.channels[id] = newCh
}
}
})
// Broadcast goroutine: send addToBatch calls during churn
wg.Go(func() {
for range churnCycles {
func() {
defer func() {
if r := recover(); r != nil {
panics.Add(1)
}
}()
lb.b.addToBatch(change.DERPMap())
}()
}
})
done := make(chan struct{})
go func() {
wg.Wait()
close(done)
}()
select {
case <-done:
// Success
case <-time.After(30 * time.Second):
t.Fatal("deadlock in 1000-node connection churn test")
}
assert.Zero(t, panics.Load(), "no panics during connection churn")
// Verify stable nodes (1-900) still have active connections
stableConnected := 0
for i := 1; i <= 900; i++ {
if mc, exists := lb.b.nodes.Load(types.NodeID(i)); exists { //nolint:gosec // test
if mc.hasActiveConnections() {
stableConnected++
}
}
}
t.Logf("stable nodes still connected: %d/900", stableConnected)
assert.Equal(t, 900, stableConnected,
"all stable nodes should retain their connections during churn")
}
// TestScale1000_ConcurrentAddRemove tests concurrent AddNode-like and
// RemoveNode-like operations at 1000-node scale.
func TestScale1000_ConcurrentAddRemove(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 10)
defer lb.cleanup()
const goroutines = 200
panics := runConcurrentlyWithTimeout(t, goroutines, 30*time.Second, func(i int) {
id := types.NodeID(1 + (i % 1000)) //nolint:gosec // test
mc, exists := lb.b.nodes.Load(id)
if !exists {
return
}
if i%2 == 0 {
// Add a new connection
ch := make(chan *tailcfg.MapResponse, 10)
entry := &connectionEntry{
id: fmt.Sprintf("concurrent-%d", i),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
} else {
// Try to remove a connection (may fail if already removed)
ch := lb.channels[id]
mc.removeConnectionByChannel(ch)
}
})
assert.Zero(t, panics, "no panics during concurrent add/remove at 1000 nodes")
}
// TestScale1000_IsConnectedConsistency verifies IsConnected returns consistent
// results during rapid connection state changes at 1000-node scale.
func TestScale1000_IsConnectedConsistency(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 10)
defer lb.cleanup()
var (
panics atomic.Int64
wg sync.WaitGroup
)
// Goroutines reading IsConnected
wg.Go(func() {
for range 1000 {
func() {
defer func() {
if r := recover(); r != nil {
panics.Add(1)
}
}()
for i := 1; i <= 1000; i++ {
_ = lb.b.IsConnected(types.NodeID(i)) //nolint:gosec // test
}
}()
}
})
// Goroutine modifying connection state via disconnectedAt on the node conn
wg.Go(func() {
for i := range 100 {
id := types.NodeID(1 + (i % 1000)) //nolint:gosec // test
if mc, ok := lb.b.nodes.Load(id); ok {
if i%2 == 0 {
mc.markDisconnected() // disconnect
} else {
mc.markConnected() // reconnect
}
}
}
})
done := make(chan struct{})
go func() {
wg.Wait()
close(done)
}()
select {
case <-done:
// Success
case <-time.After(30 * time.Second):
t.Fatal("deadlock in IsConnected consistency test")
}
assert.Zero(t, panics.Load(),
"IsConnected should not panic under concurrent modification")
}
// TestScale1000_BroadcastDuringNodeChurn tests that broadcast addToBatch
// calls work correctly while 20% of nodes are joining and leaving.
func TestScale1000_BroadcastDuringNodeChurn(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 10)
defer lb.cleanup()
var (
panics atomic.Int64
wg sync.WaitGroup
)
// Node churn: 20% of nodes (nodes 801-1000) joining/leaving
wg.Go(func() {
for cycle := range 20 {
for i := 801; i <= 1000; i++ {
func() {
defer func() {
if r := recover(); r != nil {
panics.Add(1)
}
}()
id := types.NodeID(i) //nolint:gosec // test
if cycle%2 == 0 {
// "Remove" node
lb.b.nodes.Delete(id)
} else {
// "Add" node back
mc := newMultiChannelNodeConn(id, nil)
ch := make(chan *tailcfg.MapResponse, 10)
entry := &connectionEntry{
id: fmt.Sprintf("rechurn-%d-%d", i, cycle),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
lb.b.nodes.Store(id, mc)
}
}()
}
}
})
// Concurrent broadcasts
wg.Go(func() {
for range 50 {
func() {
defer func() {
if r := recover(); r != nil {
panics.Add(1)
}
}()
lb.b.addToBatch(change.DERPMap())
}()
}
})
done := make(chan struct{})
go func() {
wg.Wait()
close(done)
}()
select {
case <-done:
t.Logf("broadcast during churn completed, panics: %d", panics.Load())
case <-time.After(30 * time.Second):
t.Fatal("deadlock in broadcast during node churn")
}
assert.Zero(t, panics.Load(),
"broadcast during node churn should not panic")
}
// TestScale1000_WorkChannelSaturation tests that the work channel doesn't
// deadlock when it fills up (queueWork selects on done channel as escape).
func TestScale1000_WorkChannelSaturation(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
// Create batcher with SMALL work channel to force saturation
b := &Batcher{
tick: time.NewTicker(10 * time.Millisecond),
workers: 2,
workCh: make(chan work, 10), // Very small - will saturate
nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
done: make(chan struct{}),
}
defer func() {
close(b.done)
b.tick.Stop()
}()
// Add 1000 nodes
for i := 1; i <= 1000; i++ {
id := types.NodeID(i) //nolint:gosec // test
mc := newMultiChannelNodeConn(id, nil)
ch := make(chan *tailcfg.MapResponse, 1)
entry := &connectionEntry{
id: fmt.Sprintf("conn-%d", i),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
b.nodes.Store(id, mc)
}
// Add pending changes for all 1000 nodes
for i := 1; i <= 1000; i++ {
if nc, ok := b.nodes.Load(types.NodeID(i)); ok { //nolint:gosec // test
nc.appendPending(change.DERPMap())
}
}
// processBatchedChanges should not deadlock even with small work channel.
// queueWork uses select with b.done as escape hatch.
// Start a consumer to slowly drain the work channel.
var consumed atomic.Int64
go func() {
for {
select {
case <-b.workCh:
consumed.Add(1)
case <-b.done:
return
}
}
}()
done := make(chan struct{})
go func() {
b.processBatchedChanges()
close(done)
}()
select {
case <-done:
t.Logf("processBatchedChanges completed, consumed %d work items", consumed.Load())
case <-time.After(30 * time.Second):
t.Fatal("processBatchedChanges deadlocked with saturated work channel")
}
}
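// The escape-hatch pattern this test relies on can be sketched as below;
// queueWorkSketch is a hypothetical helper, assuming the real queueWork
// selects on done in the same way. A send blocked on a full workCh unblocks
// as soon as done closes, so shutdown can never deadlock behind a saturated
// queue.
//
//nolint:unused
func queueWorkSketch(workCh chan<- work, done <-chan struct{}, w work) bool {
select {
case workCh <- w:
return true // queued
case <-done:
return false // shutting down; drop the work instead of blocking forever
}
}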
// TestScale1000_FullUpdate_AllNodesGetPending verifies that a FullUpdate
// creates pending entries for all 1000 nodes.
func TestScale1000_FullUpdate_AllNodesGetPending(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node test in short mode")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
lb := setupLightweightBatcher(t, 1000, 10)
defer lb.cleanup()
lb.b.addToBatch(change.FullUpdate())
nodesWithPending := countNodesPending(lb.b)
assert.Equal(t, 1000, nodesWithPending,
"FullUpdate should create pending entries for all 1000 nodes")
// Verify each node has exactly one full update pending
lb.b.nodes.Range(func(id types.NodeID, _ *multiChannelNodeConn) bool {
pending := getPendingForNode(lb.b, id)
require.Len(t, pending, 1, "node %d should have 1 pending change", id)
assert.True(t, pending[0].IsFull(), "pending change for node %d should be full", id)
return true
})
}
// ============================================================================
// 1000-Node Full Pipeline Tests (with DB)
// ============================================================================
// TestScale1000_AllToAll_FullPipeline tests the complete pipeline:
// create 1000 nodes in DB, add them to batcher, send FullUpdate,
// verify all nodes see 999 peers.
func TestScale1000_AllToAll_FullPipeline(t *testing.T) {
if testing.Short() {
t.Skip("skipping 1000-node full pipeline test in short mode")
}
if util.RaceEnabled {
t.Skip("skipping 1000-node test with race detector (bcrypt setup too slow)")
}
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
t.Logf("setting up 1000-node test environment (this may take a minute)...")
testData, cleanup := setupBatcherWithTestData(t, NewBatcherAndMapper, 1, 1000, 200)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
t.Logf("created %d nodes, connecting to batcher...", len(allNodes))
// Start update consumers
for i := range allNodes {
allNodes[i].start()
}
// Connect all nodes
for i := range allNodes {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Fatalf("failed to add node %d: %v", i, err)
}
// Yield periodically to avoid overwhelming the work queue
if i%50 == 49 {
time.Sleep(10 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
}
}
t.Logf("all nodes connected, sending FullUpdate and waiting for convergence...")
// Send FullUpdate
batcher.AddWork(change.FullUpdate())
expectedPeers := len(allNodes) - 1 // Each sees all others
// Wait for all nodes to see all peers
assert.EventuallyWithT(t, func(c *assert.CollectT) {
convergedCount := 0
for i := range allNodes {
if int(allNodes[i].maxPeersCount.Load()) >= expectedPeers {
convergedCount++
}
}
assert.Equal(c, len(allNodes), convergedCount,
"all nodes should see %d peers (converged: %d/%d)",
expectedPeers, convergedCount, len(allNodes))
}, 5*time.Minute, 5*time.Second, "waiting for 1000-node convergence")
// Final statistics
totalUpdates := int64(0)
minPeers := len(allNodes)
maxPeers := 0
for i := range allNodes {
stats := allNodes[i].cleanup()
totalUpdates += stats.TotalUpdates
if stats.MaxPeersSeen < minPeers {
minPeers = stats.MaxPeersSeen
}
if stats.MaxPeersSeen > maxPeers {
maxPeers = stats.MaxPeersSeen
}
}
t.Logf("1000-node pipeline: total_updates=%d, min_peers=%d, max_peers=%d, expected=%d",
totalUpdates, minPeers, maxPeers, expectedPeers)
assert.GreaterOrEqual(t, minPeers, expectedPeers,
"all nodes should have seen at least %d peers", expectedPeers)
}
================================================
FILE: hscontrol/mapper/batcher_scale_bench_test.go
================================================
package mapper
// Scale benchmarks for the batcher system.
//
// These benchmarks systematically increase node counts to find scaling limits
// and identify bottlenecks. Organized into tiers:
//
// Tier 1 - O(1) operations: should stay flat regardless of node count
// Tier 2 - O(N) lightweight: batch queuing and processing (no MapResponse generation)
// Tier 3 - O(N) heavier: map building, peer diff, peer tracking
// Tier 4 - Concurrent contention: multi-goroutine access under load
//
// Node count progression: 100, 500, 1000, 2000, 5000, 10000, 20000, 50000
import (
"fmt"
"strconv"
"sync"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/rs/zerolog"
"tailscale.com/tailcfg"
)
// scaleCounts defines the node counts used across all scaling benchmarks.
// Tier 1 (O(1)) tests up to 50k; Tiers 2-3 test up to 10k; Tier 4 up to 5k.
var (
scaleCountsO1 = []int{100, 500, 1000, 2000, 5000, 10000, 20000, 50000}
scaleCountsLinear = []int{100, 500, 1000, 2000, 5000, 10000}
scaleCountsHeavy = []int{100, 500, 1000, 2000, 5000, 10000}
scaleCountsConc = []int{100, 500, 1000, 2000, 5000}
)
// ============================================================================
// Tier 1: O(1) Operations — should scale flat
// ============================================================================
// BenchmarkScale_IsConnected tests single-node lookup at increasing map sizes.
func BenchmarkScale_IsConnected(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsO1 {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for i := range b.N {
id := types.NodeID(1 + (i % n)) //nolint:gosec
_ = batcher.IsConnected(id)
}
})
}
}
// BenchmarkScale_AddToBatch_Targeted tests single-node targeted change at
// increasing map sizes. The map size should not affect per-operation cost.
func BenchmarkScale_AddToBatch_Targeted(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsO1 {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for i := range b.N {
targetID := types.NodeID(1 + (i % n)) //nolint:gosec
ch := change.Change{
Reason: "scale-targeted",
TargetNode: targetID,
PeerPatches: []*tailcfg.PeerChange{
{NodeID: tailcfg.NodeID(targetID)}, //nolint:gosec
},
}
batcher.addToBatch(ch)
// Drain every 100 ops to avoid unbounded growth
if i%100 == 99 {
batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.drainPending()
return true
})
}
}
})
}
}
// BenchmarkScale_ConnectionChurn tests add/remove connection cycle.
// The map size should not affect per-operation cost for a single node.
func BenchmarkScale_ConnectionChurn(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsO1 {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, channels := benchBatcher(n, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for i := range b.N {
id := types.NodeID(1 + (i % n)) //nolint:gosec
mc, ok := batcher.nodes.Load(id)
if !ok {
continue
}
oldCh := channels[id]
mc.removeConnectionByChannel(oldCh)
newCh := make(chan *tailcfg.MapResponse, 10)
entry := &connectionEntry{
id: fmt.Sprintf("sc-%d", i),
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
channels[id] = newCh
}
})
}
}
// ============================================================================
// Tier 2: O(N) Lightweight — batch mechanics without MapResponse generation
// ============================================================================
// BenchmarkScale_AddToBatch_Broadcast tests broadcasting a change to ALL nodes.
// Cost should scale linearly with node count.
func BenchmarkScale_AddToBatch_Broadcast(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsLinear {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
ch := change.DERPMap()
b.ResetTimer()
for range b.N {
batcher.addToBatch(ch)
// Drain to avoid unbounded growth
batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.drainPending()
return true
})
}
})
}
}
// BenchmarkScale_AddToBatch_FullUpdate tests FullUpdate broadcast cost.
func BenchmarkScale_AddToBatch_FullUpdate(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsLinear {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 10)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for range b.N {
batcher.addToBatch(change.FullUpdate())
}
})
}
}
// BenchmarkScale_ProcessBatchedChanges tests draining pending changes into work queue.
func BenchmarkScale_ProcessBatchedChanges(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsLinear {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 10)
batcher.workCh = make(chan work, n*b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
b.ResetTimer()
for range b.N {
b.StopTimer()
for i := 1; i <= n; i++ {
if nc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec
nc.appendPending(change.DERPMap())
}
}
b.StartTimer()
batcher.processBatchedChanges()
}
})
}
}
// BenchmarkScale_BroadcastToN tests end-to-end: addToBatch + processBatchedChanges.
func BenchmarkScale_BroadcastToN(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsLinear {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 10)
batcher.workCh = make(chan work, n*b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
ch := change.DERPMap()
b.ResetTimer()
for range b.N {
batcher.addToBatch(ch)
batcher.processBatchedChanges()
}
})
}
}
// BenchmarkScale_SendToAll tests raw channel send cost to N nodes (no batching).
// This isolates the multiChannelNodeConn.send() cost.
// Uses large buffered channels to avoid goroutine drain overhead.
func BenchmarkScale_SendToAll(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsLinear {
b.Run(strconv.Itoa(n), func(b *testing.B) {
// b.N+1 buffer so sends never block
batcher, _ := benchBatcher(n, b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
data := testMapResponse()
b.ResetTimer()
for range b.N {
batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
_ = mc.send(data)
return true
})
}
})
}
}
// ============================================================================
// Tier 3: O(N) Heavier — map building, peer diff, peer tracking
// ============================================================================
// BenchmarkScale_ConnectedMap tests building the full connected/disconnected map.
func BenchmarkScale_ConnectedMap(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsHeavy {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, channels := benchBatcher(n, 1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
// 10% disconnected for realism
for i := 1; i <= n; i++ {
if i%10 == 0 {
id := types.NodeID(i) //nolint:gosec
if mc, ok := batcher.nodes.Load(id); ok {
mc.removeConnectionByChannel(channels[id])
mc.markDisconnected()
}
}
}
b.ResetTimer()
for range b.N {
_ = batcher.ConnectedMap()
}
})
}
}
// BenchmarkScale_ComputePeerDiff tests peer diff computation at scale.
// Each node tracks N-1 peers, with 10% removed.
func BenchmarkScale_ComputePeerDiff(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsHeavy {
b.Run(strconv.Itoa(n), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
// Track N peers
for i := 1; i <= n; i++ {
mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
}
// Current: 90% present (every 10th missing)
current := make([]tailcfg.NodeID, 0, n)
for i := 1; i <= n; i++ {
if i%10 != 0 {
current = append(current, tailcfg.NodeID(i))
}
}
b.ResetTimer()
for range b.N {
_ = mc.computePeerDiff(current)
}
})
}
}
// BenchmarkScale_UpdateSentPeers_Full tests full peer list update.
func BenchmarkScale_UpdateSentPeers_Full(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsHeavy {
b.Run(strconv.Itoa(n), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
peerIDs := make([]tailcfg.NodeID, n)
for i := range peerIDs {
peerIDs[i] = tailcfg.NodeID(i + 1)
}
resp := testMapResponseWithPeers(peerIDs...)
b.ResetTimer()
for range b.N {
mc.updateSentPeers(resp)
}
})
}
}
// BenchmarkScale_UpdateSentPeers_Incremental tests incremental peer updates (10% new).
func BenchmarkScale_UpdateSentPeers_Incremental(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsHeavy {
b.Run(strconv.Itoa(n), func(b *testing.B) {
mc := newMultiChannelNodeConn(1, nil)
// Pre-populate
for i := 1; i <= n; i++ {
mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
}
addCount := n / 10
if addCount == 0 {
addCount = 1
}
resp := testMapResponse()
resp.PeersChanged = make([]*tailcfg.Node, addCount)
for i := range addCount {
resp.PeersChanged[i] = &tailcfg.Node{ID: tailcfg.NodeID(n + i + 1)}
}
b.ResetTimer()
for range b.N {
mc.updateSentPeers(resp)
}
})
}
}
// BenchmarkScale_MultiChannelBroadcast tests sending to N nodes, each with
// ~1.7 connections on average (every 3rd node has 3 connections, the rest 1).
// Uses large buffered channels to avoid goroutine drain overhead.
func BenchmarkScale_MultiChannelBroadcast(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsHeavy {
b.Run(strconv.Itoa(n), func(b *testing.B) {
// Use b.N+1 buffer so sends never block
batcher, _ := benchBatcher(n, b.N+1)
defer func() {
close(batcher.done)
batcher.tick.Stop()
}()
// Add extra connections to every 3rd node (also buffered)
for i := 1; i <= n; i++ {
if i%3 == 0 {
if mc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec
for j := range 2 {
ch := make(chan *tailcfg.MapResponse, b.N+1)
entry := &connectionEntry{
id: fmt.Sprintf("extra-%d-%d", i, j),
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
}
}
}
}
data := testMapResponse()
b.ResetTimer()
for range b.N {
batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
_ = mc.send(data)
return true
})
}
})
}
}
// ============================================================================
// Tier 4: Concurrent Contention — multi-goroutine access
// ============================================================================
// BenchmarkScale_ConcurrentAddToBatch tests parallel addToBatch throughput.
func BenchmarkScale_ConcurrentAddToBatch(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsConc {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, _ := benchBatcher(n, 10)
drainDone := make(chan struct{})
go func() {
defer close(drainDone)
for {
select {
case <-batcher.done:
return
default:
batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
nc.drainPending()
return true
})
time.Sleep(time.Millisecond) //nolint:forbidigo
}
}
}()
ch := change.DERPMap()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
batcher.addToBatch(ch)
}
})
b.StopTimer()
close(batcher.done)
<-drainDone
batcher.done = make(chan struct{})
batcher.tick.Stop()
})
}
}
// BenchmarkScale_ConcurrentSendAndChurn tests the production hot path:
// sending to all nodes while 10% of connections are churning concurrently.
// Uses large buffered channels to avoid goroutine drain overhead.
func BenchmarkScale_ConcurrentSendAndChurn(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsConc {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, channels := benchBatcher(n, b.N+1)
var mu sync.Mutex
stopChurn := make(chan struct{})
go func() {
i := 0
for {
select {
case <-stopChurn:
return
default:
id := types.NodeID(1 + (i % n)) //nolint:gosec
if i%10 == 0 {
mc, ok := batcher.nodes.Load(id)
if ok {
mu.Lock()
oldCh := channels[id]
mu.Unlock()
mc.removeConnectionByChannel(oldCh)
newCh := make(chan *tailcfg.MapResponse, b.N+1)
entry := &connectionEntry{
id: fmt.Sprintf("sc-churn-%d", i),
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
mu.Lock()
channels[id] = newCh
mu.Unlock()
}
}
i++
}
}
}()
data := testMapResponse()
b.ResetTimer()
for range b.N {
batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
_ = mc.send(data)
return true
})
}
b.StopTimer()
close(stopChurn)
close(batcher.done)
batcher.tick.Stop()
})
}
}
// BenchmarkScale_MixedWorkload simulates a realistic production workload:
// - 70% targeted changes (single node updates)
// - 20% DERP map changes (broadcast)
// - 10% full updates (broadcast with full map)
// All while 10% of connections are churning.
func BenchmarkScale_MixedWorkload(b *testing.B) {
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, n := range scaleCountsConc {
b.Run(strconv.Itoa(n), func(b *testing.B) {
batcher, channels := benchBatcher(n, 10)
batcher.workCh = make(chan work, n*100+1)
var mu sync.Mutex
stopChurn := make(chan struct{})
// Background churn on 10% of nodes
go func() {
i := 0
for {
select {
case <-stopChurn:
return
default:
id := types.NodeID(1 + (i % n)) //nolint:gosec
if i%10 == 0 {
mc, ok := batcher.nodes.Load(id)
if ok {
mu.Lock()
oldCh := channels[id]
mu.Unlock()
mc.removeConnectionByChannel(oldCh)
newCh := make(chan *tailcfg.MapResponse, 10)
entry := &connectionEntry{
id: fmt.Sprintf("mix-churn-%d", i),
c: newCh,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
mc.addConnection(entry)
mu.Lock()
channels[id] = newCh
mu.Unlock()
}
}
i++
}
}
}()
// Background batch processor
stopProc := make(chan struct{})
go func() {
for {
select {
case <-stopProc:
return
default:
batcher.processBatchedChanges()
time.Sleep(time.Millisecond) //nolint:forbidigo
}
}
}()
// Background work channel consumer (simulates workers)
stopWorkers := make(chan struct{})
go func() {
for {
select {
case <-batcher.workCh:
case <-stopWorkers:
return
}
}
}()
b.ResetTimer()
for i := range b.N {
switch {
case i%10 < 7: // 70% targeted
targetID := types.NodeID(1 + (i % n)) //nolint:gosec
batcher.addToBatch(change.Change{
Reason: "mixed-targeted",
TargetNode: targetID,
PeerPatches: []*tailcfg.PeerChange{
{NodeID: tailcfg.NodeID(targetID)}, //nolint:gosec
},
})
case i%10 < 9: // 20% DERP map broadcast
batcher.addToBatch(change.DERPMap())
default: // 10% full update
batcher.addToBatch(change.FullUpdate())
}
}
b.StopTimer()
close(stopChurn)
close(stopProc)
close(stopWorkers)
close(batcher.done)
batcher.tick.Stop()
})
}
}
// ============================================================================
// Tier 5: DB-dependent — AddNode with real MapResponse generation
// ============================================================================
// BenchmarkScale_AddAllNodes measures the cost of connecting ALL N nodes
// to a batcher backed by a real database. Each AddNode generates an initial
// MapResponse containing all peer data, so cost is O(N) per node, O(N²) total.
func BenchmarkScale_AddAllNodes(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 50, 100, 200, 500} {
b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
b.ResetTimer()
for range b.N {
for i := range allNodes {
node := &allNodes[i]
_ = batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
}
b.StopTimer()
for i := range allNodes {
node := &allNodes[i]
batcher.RemoveNode(node.n.ID, node.ch)
}
// Drain any buffered updates before restarting the timer.
for i := range allNodes {
drainLoop:
for {
select {
case <-allNodes[i].ch:
default:
break drainLoop
}
}
}
b.StartTimer()
}
})
}
}
// BenchmarkScale_SingleAddNode measures the cost of adding ONE node to an
// already-populated batcher. This is the real production scenario: a new node
// joins an existing network. The cost should scale with the number of existing
// peers since the initial MapResponse includes all peer data.
func BenchmarkScale_SingleAddNode(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 50, 100, 200, 500, 1000} {
b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
// Connect all nodes except the last one
for i := range len(allNodes) - 1 {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
b.Fatalf("failed to add node %d: %v", i, err)
}
}
time.Sleep(200 * time.Millisecond) //nolint:forbidigo
// Benchmark: repeatedly add and remove the last node
lastNode := &allNodes[len(allNodes)-1]
b.ResetTimer()
for range b.N {
_ = batcher.AddNode(lastNode.n.ID, lastNode.ch, tailcfg.CapabilityVersion(100), nil)
b.StopTimer()
batcher.RemoveNode(lastNode.n.ID, lastNode.ch)
for {
select {
case <-lastNode.ch:
default:
goto drainDone
}
}
drainDone:
b.StartTimer()
}
})
}
}
// BenchmarkScale_MapResponse_DERPMap measures MapResponse generation for a
// DERPMap change. This is a lightweight change that doesn't touch peers.
func BenchmarkScale_MapResponse_DERPMap(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 50, 100, 200, 500} {
b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
for i := range allNodes {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
b.Fatalf("failed to add node %d: %v", i, err)
}
}
time.Sleep(200 * time.Millisecond) //nolint:forbidigo
ch := change.DERPMap()
b.ResetTimer()
for i := range b.N {
nodeIdx := i % len(allNodes)
_, _ = batcher.MapResponseFromChange(allNodes[nodeIdx].n.ID, ch)
}
})
}
}
// BenchmarkScale_MapResponse_FullUpdate measures MapResponse generation for a
// FullUpdate change. This forces full peer serialization — the primary bottleneck
// for large networks.
func BenchmarkScale_MapResponse_FullUpdate(b *testing.B) {
if testing.Short() {
b.Skip("skipping full pipeline benchmark in short mode")
}
zerolog.SetGlobalLevel(zerolog.Disabled)
defer zerolog.SetGlobalLevel(zerolog.DebugLevel)
for _, nodeCount := range []int{10, 50, 100, 200, 500} {
b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
for i := range allNodes {
allNodes[i].start()
}
defer func() {
for i := range allNodes {
allNodes[i].cleanup()
}
}()
for i := range allNodes {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
b.Fatalf("failed to add node %d: %v", i, err)
}
}
time.Sleep(200 * time.Millisecond) //nolint:forbidigo
ch := change.FullUpdate()
b.ResetTimer()
for i := range b.N {
nodeIdx := i % len(allNodes)
_, _ = batcher.MapResponseFromChange(allNodes[nodeIdx].n.ID, ch)
}
})
}
}
================================================
FILE: hscontrol/mapper/batcher_test.go
================================================
package mapper
import (
"errors"
"fmt"
"net/netip"
"runtime"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/derp"
"github.com/juanfont/headscale/hscontrol/state"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
"zgo.at/zcache/v2"
)
var errNodeNotFoundAfterAdd = errors.New("node not found after adding to batcher")
type batcherFunc func(cfg *types.Config, state *state.State) *Batcher
// batcherTestCase defines a batcher function with a descriptive name for testing.
type batcherTestCase struct {
name string
fn batcherFunc
}
// testBatcherWrapper wraps a real batcher to add online/offline notifications
// that would normally be sent by poll.go in production.
type testBatcherWrapper struct {
*Batcher
state *state.State
// connectGens tracks per-node connect generations so RemoveNode can pass
// the correct generation to State.Disconnect(), matching production behavior.
connectGens sync.Map // types.NodeID → uint64
}
func (t *testBatcherWrapper) AddNode(id types.NodeID, c chan<- *tailcfg.MapResponse, version tailcfg.CapabilityVersion, stop func()) error {
// Mark node as online in state before AddNode to match production behavior
// This ensures the NodeStore has correct online status for change processing
if t.state != nil {
// Use Connect to properly mark node online in NodeStore and track the
// generation so RemoveNode can pass it to Disconnect().
_, gen := t.state.Connect(id)
t.connectGens.Store(id, gen)
}
// First add the node to the real batcher
err := t.Batcher.AddNode(id, c, version, stop)
if err != nil {
return err
}
// Send the online notification that poll.go would normally send
// This ensures other nodes get notified about this node coming online
node, ok := t.state.GetNodeByID(id)
if !ok {
return fmt.Errorf("%w: %d", errNodeNotFoundAfterAdd, id)
}
t.AddWork(change.NodeOnlineFor(node))
return nil
}
func (t *testBatcherWrapper) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool {
// Mark node as offline in state BEFORE removing from batcher
// This ensures the NodeStore has correct offline status when the change is processed
if t.state != nil {
var gen uint64
if v, ok := t.connectGens.LoadAndDelete(id); ok {
if g, ok := v.(uint64); ok {
gen = g
}
}
_, _ = t.state.Disconnect(id, gen)
}
// Send the offline notification that poll.go would normally send
// Do this BEFORE removing from batcher so the change can be processed
node, ok := t.state.GetNodeByID(id)
if ok {
t.AddWork(change.NodeOfflineFor(node))
}
// Finally remove from the real batcher
return t.Batcher.RemoveNode(id, c)
}
// wrapBatcherForTest wraps a batcher with test-specific behavior.
func wrapBatcherForTest(b *Batcher, state *state.State) *testBatcherWrapper {
return &testBatcherWrapper{Batcher: b, state: state}
}
// allBatcherFunctions contains all batcher implementations to test.
var allBatcherFunctions = []batcherTestCase{
{"Default", NewBatcherAndMapper},
}
// emptyCache creates an empty registration cache for testing.
func emptyCache() *zcache.Cache[types.AuthID, types.AuthRequest] {
return zcache.New[types.AuthID, types.AuthRequest](time.Minute, time.Hour)
}
// Test configuration constants.
const (
// Test data configuration.
testUserCount = 3
testNodesPerUser = 2
// Timing configuration.
testTimeout = 120 * time.Second // Increased for more intensive tests
updateTimeout = 5 * time.Second
deadlockTimeout = 30 * time.Second
// Channel configuration.
normalBufferSize = 50
smallBufferSize = 3
tinyBufferSize = 1 // For maximum contention
largeBufferSize = 200
)
// TestData contains all test entities created for a test scenario.
type TestData struct {
Database *db.HSDatabase
Users []*types.User
Nodes []node
State *state.State
Config *types.Config
Batcher *testBatcherWrapper
}
type node struct {
n *types.Node
ch chan *tailcfg.MapResponse
// Update tracking (all accessed atomically for thread safety)
updateCount int64
patchCount int64
fullCount int64
maxPeersCount atomic.Int64
lastPeerCount atomic.Int64
stop chan struct{}
stopped chan struct{}
}
// setupBatcherWithTestData creates a comprehensive test environment with real
// database test data including users and registered nodes.
//
// This helper creates a database, populates it with test data, then creates
// a state and batcher using the SAME database for testing. This provides real
// node data for testing full map responses and comprehensive update scenarios.
//
// Returns TestData struct containing all created entities and a cleanup function.
func setupBatcherWithTestData(
t testing.TB,
bf batcherFunc,
userCount, nodesPerUser, bufferSize int,
) (*TestData, func()) {
t.Helper()
// Create database and populate with test data first
tmpDir := t.TempDir()
dbPath := tmpDir + "/headscale_test.db"
prefixV4 := netip.MustParsePrefix("100.64.0.0/10")
prefixV6 := netip.MustParsePrefix("fd7a:115c:a1e0::/48")
cfg := &types.Config{
Database: types.DatabaseConfig{
Type: types.DatabaseSqlite,
Sqlite: types.SqliteConfig{
Path: dbPath,
},
},
PrefixV4: &prefixV4,
PrefixV6: &prefixV6,
IPAllocation: types.IPAllocationStrategySequential,
BaseDomain: "headscale.test",
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
DERP: types.DERPConfig{
ServerEnabled: false,
DERPMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
999: {
RegionID: 999,
},
},
},
},
Tuning: types.Tuning{
BatchChangeDelay: 10 * time.Millisecond,
BatcherWorkers: types.DefaultBatcherWorkers(), // Use same logic as config.go
NodeStoreBatchSize: state.TestBatchSize,
NodeStoreBatchTimeout: state.TestBatchTimeout,
},
}
// Create database and populate it with test data
database, err := db.NewHeadscaleDatabase(
cfg,
emptyCache(),
)
if err != nil {
t.Fatalf("setting up database: %s", err)
}
// Create test users and nodes in the database
users := database.CreateUsersForTest(userCount, "testuser")
allNodes := make([]node, 0, userCount*nodesPerUser)
for _, user := range users {
dbNodes := database.CreateRegisteredNodesForTest(user, nodesPerUser, "node")
for i := range dbNodes {
allNodes = append(allNodes, node{
n: dbNodes[i],
ch: make(chan *tailcfg.MapResponse, bufferSize),
})
}
}
// Now create state using the same database
state, err := state.NewState(cfg)
if err != nil {
t.Fatalf("Failed to create state: %v", err)
}
derpMap, err := derp.GetDERPMap(cfg.DERP)
require.NoError(t, err)
require.NotNil(t, derpMap)
state.SetDERPMap(derpMap)
// Set up a permissive policy that allows all communication for testing
allowAllPolicy := `{
"acls": [
{
"action": "accept",
"src": ["*"],
"dst": ["*:*"]
}
]
}`
_, err = state.SetPolicy([]byte(allowAllPolicy))
if err != nil {
t.Fatalf("Failed to set allow-all policy: %v", err)
}
// Create batcher with the state and wrap it for testing
batcher := wrapBatcherForTest(bf(cfg, state), state)
batcher.Start()
testData := &TestData{
Database: database,
Users: users,
Nodes: allNodes,
State: state,
Config: cfg,
Batcher: batcher,
}
cleanup := func() {
batcher.Close()
state.Close()
database.Close()
}
return testData, cleanup
}
type UpdateStats struct {
TotalUpdates int
UpdateSizes []int
LastUpdate time.Time
}
// updateTracker provides thread-safe tracking of updates per node.
type updateTracker struct {
mu sync.RWMutex
stats map[types.NodeID]*UpdateStats
}
// newUpdateTracker creates a new update tracker.
func newUpdateTracker() *updateTracker {
return &updateTracker{
stats: make(map[types.NodeID]*UpdateStats),
}
}
// recordUpdate records an update for a specific node.
func (ut *updateTracker) recordUpdate(nodeID types.NodeID, updateSize int) {
ut.mu.Lock()
defer ut.mu.Unlock()
if ut.stats[nodeID] == nil {
ut.stats[nodeID] = &UpdateStats{}
}
stats := ut.stats[nodeID]
stats.TotalUpdates++
stats.UpdateSizes = append(stats.UpdateSizes, updateSize)
stats.LastUpdate = time.Now()
}
// getStats returns a copy of the statistics for a node.
//
//nolint:unused
func (ut *updateTracker) getStats(nodeID types.NodeID) UpdateStats {
ut.mu.RLock()
defer ut.mu.RUnlock()
if stats, exists := ut.stats[nodeID]; exists {
// Return a copy to avoid race conditions
return UpdateStats{
TotalUpdates: stats.TotalUpdates,
UpdateSizes: append([]int{}, stats.UpdateSizes...),
LastUpdate: stats.LastUpdate,
}
}
return UpdateStats{}
}
// getAllStats returns a copy of all statistics.
func (ut *updateTracker) getAllStats() map[types.NodeID]UpdateStats {
ut.mu.RLock()
defer ut.mu.RUnlock()
result := make(map[types.NodeID]UpdateStats)
for nodeID, stats := range ut.stats {
result[nodeID] = UpdateStats{
TotalUpdates: stats.TotalUpdates,
UpdateSizes: append([]int{}, stats.UpdateSizes...),
LastUpdate: stats.LastUpdate,
}
}
return result
}
func assertDERPMapResponse(t *testing.T, resp *tailcfg.MapResponse) {
t.Helper()
assert.NotNil(t, resp.DERPMap, "DERPMap should not be nil in response")
assert.Len(t, resp.DERPMap.Regions, 1, "Expected exactly one DERP region in response")
assert.Equal(t, 999, resp.DERPMap.Regions[999].RegionID, "Expected DERP region ID to be 999")
}
func assertOnlineMapResponse(t *testing.T, resp *tailcfg.MapResponse, expected bool) {
t.Helper()
// Check for peer changes patch (new online/offline notifications use patches)
if len(resp.PeersChangedPatch) > 0 {
require.Len(t, resp.PeersChangedPatch, 1)
assert.Equal(t, expected, *resp.PeersChangedPatch[0].Online)
return
}
// Fallback to old format for backwards compatibility
require.Len(t, resp.Peers, 1)
assert.Equal(t, expected, resp.Peers[0].Online)
}
// UpdateInfo contains parsed information about an update.
type UpdateInfo struct {
IsFull bool
IsPatch bool
IsDERP bool
PeerCount int
PatchCount int
}
// parseUpdateAndAnalyze parses an update and returns detailed information.
func parseUpdateAndAnalyze(resp *tailcfg.MapResponse) UpdateInfo {
return UpdateInfo{
PeerCount: len(resp.Peers),
PatchCount: len(resp.PeersChangedPatch),
IsFull: len(resp.Peers) > 0,
IsPatch: len(resp.PeersChangedPatch) > 0,
IsDERP: resp.DERPMap != nil,
}
}
// start begins consuming updates from the node's channel and tracking stats.
func (n *node) start() {
// Prevent multiple starts on the same node
if n.stop != nil {
return // Already started
}
n.stop = make(chan struct{})
n.stopped = make(chan struct{})
go func() {
defer close(n.stopped)
for {
select {
case data := <-n.ch:
atomic.AddInt64(&n.updateCount, 1)
// Parse update and track detailed stats
info := parseUpdateAndAnalyze(data)
{
// Track update types
if info.IsFull {
atomic.AddInt64(&n.fullCount, 1)
n.lastPeerCount.Store(int64(info.PeerCount))
// Update max peers seen using compare-and-swap for thread safety
for {
current := n.maxPeersCount.Load()
if int64(info.PeerCount) <= current {
break
}
if n.maxPeersCount.CompareAndSwap(current, int64(info.PeerCount)) {
break
}
}
}
if info.IsPatch {
atomic.AddInt64(&n.patchCount, 1)
// For patches, track the max number of patch items seen, again via compare-and-swap
for {
current := n.maxPeersCount.Load()
if int64(info.PatchCount) <= current {
break
}
if n.maxPeersCount.CompareAndSwap(current, int64(info.PatchCount)) {
break
}
}
}
}
case <-n.stop:
return
}
}
}()
}
// NodeStats contains final statistics for a node.
type NodeStats struct {
TotalUpdates int64
PatchUpdates int64
FullUpdates int64
MaxPeersSeen int
LastPeerCount int
}
// cleanup stops the update consumer and returns final stats.
func (n *node) cleanup() NodeStats {
if n.stop != nil {
close(n.stop)
<-n.stopped // Wait for goroutine to finish
}
return NodeStats{
TotalUpdates: atomic.LoadInt64(&n.updateCount),
PatchUpdates: atomic.LoadInt64(&n.patchCount),
FullUpdates: atomic.LoadInt64(&n.fullCount),
MaxPeersSeen: int(n.maxPeersCount.Load()),
LastPeerCount: int(n.lastPeerCount.Load()),
}
}
// validateUpdateContent validates that the update data contains a proper MapResponse.
func validateUpdateContent(resp *tailcfg.MapResponse) (bool, string) {
if resp == nil {
return false, "nil MapResponse"
}
// Simple validation - just check if it's a valid MapResponse
return true, "valid"
}
// TestEnhancedNodeTracking verifies that the enhanced node tracking works correctly.
func TestEnhancedNodeTracking(t *testing.T) {
// Create a simple test node
testNode := node{
n: &types.Node{ID: 1},
ch: make(chan *tailcfg.MapResponse, 10),
}
// Start the enhanced tracking
testNode.start()
// Create a simple MapResponse that should be parsed correctly
resp := tailcfg.MapResponse{
KeepAlive: false,
Peers: []*tailcfg.Node{
{ID: 2},
{ID: 3},
},
}
// Send the data to the node's channel
testNode.ch <- &resp
// Wait for tracking goroutine to process the update
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.GreaterOrEqual(c, atomic.LoadInt64(&testNode.updateCount), int64(1), "should have processed the update")
}, time.Second, 10*time.Millisecond, "waiting for update to be processed")
// Check stats
stats := testNode.cleanup()
t.Logf("Enhanced tracking stats: Total=%d, Full=%d, Patch=%d, MaxPeers=%d",
stats.TotalUpdates, stats.FullUpdates, stats.PatchUpdates, stats.MaxPeersSeen)
require.Equal(t, int64(1), stats.TotalUpdates, "Expected 1 total update")
require.Equal(t, int64(1), stats.FullUpdates, "Expected 1 full update")
require.Equal(t, 2, stats.MaxPeersSeen, "Expected 2 max peers seen")
}
// TestEnhancedTrackingWithBatcher verifies enhanced tracking works with a real batcher.
func TestEnhancedTrackingWithBatcher(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create test environment with 1 node
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 1, 10)
defer cleanup()
batcher := testData.Batcher
testNode := &testData.Nodes[0]
t.Logf("Testing enhanced tracking with node ID %d", testNode.n.ID)
// Start enhanced tracking for the node
testNode.start()
// Connect the node to the batcher
_ = batcher.AddNode(testNode.n.ID, testNode.ch, tailcfg.CapabilityVersion(100), nil)
// Wait for connection to be established
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.True(c, batcher.IsConnected(testNode.n.ID), "node should be connected")
}, time.Second, 10*time.Millisecond, "waiting for node connection")
// Generate work and wait for updates to be processed
batcher.AddWork(change.FullUpdate())
batcher.AddWork(change.PolicyChange())
batcher.AddWork(change.DERPMap())
// Wait for updates to be processed (at least 1 update received)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.GreaterOrEqual(c, atomic.LoadInt64(&testNode.updateCount), int64(1), "should have received updates")
}, time.Second, 10*time.Millisecond, "waiting for updates to be processed")
// Check stats
stats := testNode.cleanup()
t.Logf("Enhanced tracking with batcher: Total=%d, Full=%d, Patch=%d, MaxPeers=%d",
stats.TotalUpdates, stats.FullUpdates, stats.PatchUpdates, stats.MaxPeersSeen)
if stats.TotalUpdates == 0 {
t.Error(
"Enhanced tracking with batcher received 0 updates - batcher may not be working",
)
}
})
}
}
// TestBatcherScalabilityAllToAll tests the batcher's ability to handle rapid node joins
// and ensure all nodes can see all other nodes. This is a critical test for mesh network
// functionality where every node must be able to communicate with every other node.
func TestBatcherScalabilityAllToAll(t *testing.T) {
// Reduce verbose application logging for cleaner test output
originalLevel := zerolog.GlobalLevel()
defer zerolog.SetGlobalLevel(originalLevel)
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
// Test cases: different node counts to stress test the all-to-all connectivity
testCases := []struct {
name string
nodeCount int
}{
{"10_nodes", 10}, // Quick baseline test
{"100_nodes", 100}, // Full scalability test ~2 minutes
// Large-scale tests commented out - uncomment for scalability testing
// {"1000_nodes", 1000}, // ~12 minutes
// {"2000_nodes", 2000}, // ~60+ minutes
// {"5000_nodes", 5000}, // Not recommended - database bottleneck
}
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Logf(
"ALL-TO-ALL TEST: %d nodes with %s batcher",
tc.nodeCount,
batcherFunc.name,
)
// Create test environment - all nodes from same user so they can be peers
// We need enough users to support the node count (max 1000 nodes per user)
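// usersNeeded is ceil(nodeCount/1000) and nodesPerUser is ceil(nodeCount/usersNeeded),
// so usersNeeded*nodesPerUser >= nodeCount.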
usersNeeded := max(1, (tc.nodeCount+999)/1000)
nodesPerUser := (tc.nodeCount + usersNeeded - 1) / usersNeeded
// Use large buffer to avoid blocking during rapid joins
// Buffer needs to handle nodeCount * average_updates_per_node
// Estimate: each node receives ~2*nodeCount updates during all-to-all
// For very large tests (>1000 nodes), limit buffer to avoid excessive memory
bufferSize := max(1000, min(tc.nodeCount*2, 10000))
testData, cleanup := setupBatcherWithTestData(
t,
batcherFunc.fn,
usersNeeded,
nodesPerUser,
bufferSize,
)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes[:tc.nodeCount] // Limit to requested count
t.Logf(
"Created %d nodes across %d users, buffer size: %d",
len(allNodes),
usersNeeded,
bufferSize,
)
// Start enhanced tracking for all nodes
for i := range allNodes {
allNodes[i].start()
}
// Yield to allow tracking goroutines to start
runtime.Gosched()
startTime := time.Now()
// Join all nodes as fast as possible
t.Logf("Joining %d nodes as fast as possible...", len(allNodes))
for i := range allNodes {
node := &allNodes[i]
_ = batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
// Issue full update after each join to ensure connectivity
batcher.AddWork(change.FullUpdate())
// Yield to scheduler for large node counts to prevent overwhelming the work queue
if tc.nodeCount > 100 && i%50 == 49 {
runtime.Gosched()
}
}
joinTime := time.Since(startTime)
t.Logf("All nodes joined in %v, waiting for full connectivity...", joinTime)
// Wait for all updates to propagate until all nodes achieve connectivity
expectedPeers := tc.nodeCount - 1 // Each node should see all others except itself
assert.EventuallyWithT(t, func(c *assert.CollectT) {
connectedCount := 0
for i := range allNodes {
node := &allNodes[i]
currentMaxPeers := int(node.maxPeersCount.Load())
if currentMaxPeers >= expectedPeers {
connectedCount++
}
}
progress := float64(connectedCount) / float64(len(allNodes)) * 100
t.Logf("Progress: %d/%d nodes (%.1f%%) have seen %d+ peers",
connectedCount, len(allNodes), progress, expectedPeers)
assert.Equal(c, len(allNodes), connectedCount, "all nodes should achieve full connectivity")
}, 5*time.Minute, 5*time.Second, "waiting for full connectivity")
t.Logf("All nodes achieved full connectivity")
totalTime := time.Since(startTime)
// Disconnect all nodes
for i := range allNodes {
node := &allNodes[i]
batcher.RemoveNode(node.n.ID, node.ch)
}
// Wait for all nodes to be disconnected
assert.EventuallyWithT(t, func(c *assert.CollectT) {
for i := range allNodes {
assert.False(c, batcher.IsConnected(allNodes[i].n.ID), "node should be disconnected")
}
}, 5*time.Second, 50*time.Millisecond, "waiting for nodes to disconnect")
// Collect final statistics
totalUpdates := int64(0)
totalFull := int64(0)
maxPeersGlobal := 0
minPeersSeen := tc.nodeCount
successfulNodes := 0
nodeDetails := make([]string, 0, min(10, len(allNodes)))
for i := range allNodes {
node := &allNodes[i]
stats := node.cleanup()
totalUpdates += stats.TotalUpdates
totalFull += stats.FullUpdates
if stats.MaxPeersSeen > maxPeersGlobal {
maxPeersGlobal = stats.MaxPeersSeen
}
if stats.MaxPeersSeen < minPeersSeen {
minPeersSeen = stats.MaxPeersSeen
}
if stats.MaxPeersSeen >= expectedPeers {
successfulNodes++
}
// Collect details for first few nodes or failing nodes
if len(nodeDetails) < 10 || stats.MaxPeersSeen < expectedPeers {
nodeDetails = append(nodeDetails,
fmt.Sprintf(
"Node %d: %d updates (%d full), max %d peers",
node.n.ID,
stats.TotalUpdates,
stats.FullUpdates,
stats.MaxPeersSeen,
))
}
}
// Final results
t.Logf("ALL-TO-ALL RESULTS: %d nodes, %d total updates (%d full)",
len(allNodes), totalUpdates, totalFull)
t.Logf(
" Connectivity: %d/%d nodes successful (%.1f%%)",
successfulNodes,
len(allNodes),
float64(successfulNodes)/float64(len(allNodes))*100,
)
t.Logf(" Peers seen: min=%d, max=%d, expected=%d",
minPeersSeen, maxPeersGlobal, expectedPeers)
t.Logf(" Timing: join=%v, total=%v", joinTime, totalTime)
// Show sample of node details
if len(nodeDetails) > 0 {
t.Logf(" Node sample:")
for _, detail := range nodeDetails[:min(5, len(nodeDetails))] {
t.Logf(" %s", detail)
}
if len(nodeDetails) > 5 {
t.Logf(" ... (%d more nodes)", len(nodeDetails)-5)
}
}
// Final verification: Since we waited until all nodes achieved connectivity,
// this should always pass, but we verify the final state for completeness
if successfulNodes == len(allNodes) {
t.Logf(
"PASS: All-to-all connectivity achieved for %d nodes",
len(allNodes),
)
} else {
// This should not happen since we loop until success, but handle it just in case
failedNodes := len(allNodes) - successfulNodes
t.Errorf("UNEXPECTED: %d/%d nodes still failed after waiting for connectivity (expected %d, some saw %d-%d)",
failedNodes, len(allNodes), expectedPeers, minPeersSeen, maxPeersGlobal)
// Show details of failed nodes for debugging
if len(nodeDetails) > 5 {
t.Logf("Failed nodes details:")
for _, detail := range nodeDetails[5:] {
if !strings.Contains(detail, fmt.Sprintf("max %d peers", expectedPeers)) {
t.Logf(" %s", detail)
}
}
}
}
})
}
})
}
}
// TestBatcherBasicOperations verifies core batcher functionality by testing
// the basic lifecycle of adding nodes, processing updates, and removing nodes.
//
// Enhanced with real database test data, this test creates a registered node
// and tests both DERP updates and full node updates. It validates the fundamental
// add/remove operations and basic work processing pipeline with actual update
// content validation instead of just byte count checks.
func TestBatcherBasicOperations(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create test environment with real database and nodes
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 2, 8)
defer cleanup()
batcher := testData.Batcher
tn := &testData.Nodes[0]
tn2 := &testData.Nodes[1]
// Test AddNode with real node ID
_ = batcher.AddNode(tn.n.ID, tn.ch, 100, nil)
if !batcher.IsConnected(tn.n.ID) {
t.Error("Node should be connected after AddNode")
}
// Test work processing with DERP change
batcher.AddWork(change.DERPMap())
// Wait for update and validate content
select {
case data := <-tn.ch:
assertDERPMapResponse(t, data)
case <-time.After(200 * time.Millisecond):
t.Error("Did not receive expected DERP update")
}
// Drain any initial messages from first node
drainChannelTimeout(tn.ch, 100*time.Millisecond)
// Add the second node and verify update message
_ = batcher.AddNode(tn2.n.ID, tn2.ch, 100, nil)
assert.True(t, batcher.IsConnected(tn2.n.ID))
// First node should get an update that second node has connected.
select {
case data := <-tn.ch:
assertOnlineMapResponse(t, data, true)
case <-time.After(500 * time.Millisecond):
t.Error("Did not receive expected Online response update")
}
// Second node should receive its initial full map
select {
case data := <-tn2.ch:
// Verify it's a full map response
assert.NotNil(t, data)
assert.True(
t,
len(data.Peers) >= 1 || data.Node != nil,
"Should receive initial full map",
)
case <-time.After(500 * time.Millisecond):
t.Error("Second node should receive its initial full map")
}
// Disconnect the second node
batcher.RemoveNode(tn2.n.ID, tn2.ch)
// Note: IsConnected may return true during grace period for DNS resolution
// First node should get update that second has disconnected.
select {
case data := <-tn.ch:
assertOnlineMapResponse(t, data, false)
case <-time.After(500 * time.Millisecond):
t.Error("Did not receive expected Online response update")
}
// // Test node-specific update with real node data
// batcher.AddWork(change.NodeKeyChanged(tn.n.ID))
// // Wait for node update (may be empty for certain node changes)
// select {
// case data := <-tn.ch:
// t.Logf("Received node update: %d bytes", len(data))
// if len(data) == 0 {
// t.Logf("Empty node update (expected for some node changes in test environment)")
// } else {
// if valid, updateType := validateUpdateContent(data); !valid {
// t.Errorf("Invalid node update content: %s", updateType)
// } else {
// t.Logf("Valid node update type: %s", updateType)
// }
// }
// case <-time.After(200 * time.Millisecond):
// // Node changes might not always generate updates in test environment
// t.Logf("No node update received (may be expected in test environment)")
// }
// Test RemoveNode
batcher.RemoveNode(tn.n.ID, tn.ch)
// Note: IsConnected may return true during grace period for DNS resolution
// The node is actually removed from active connections but grace period allows DNS lookups
})
}
}
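// drainChannelTimeout discards all messages received on ch until timeout elapses.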
func drainChannelTimeout(ch <-chan *tailcfg.MapResponse, timeout time.Duration) {
timer := time.NewTimer(timeout)
defer timer.Stop()
for {
select {
case <-ch:
// Drain message
case <-timer.C:
return
}
}
}
// TestBatcherUpdateTypes tests different types of updates and verifies
// that the batcher correctly processes them based on their content.
//
// Enhanced with real database test data, this test creates registered nodes
// and tests various update types including DERP changes, node-specific changes,
// and full updates. This validates the change classification logic and ensures
// different update types are handled appropriately with actual node data.
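// NOTE: this test is commented out below; it predates the current change/AddNode
// API (e.g. change.DERPMapResponse, compression arguments to AddNode) and is
// kept for reference.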
// func TestBatcherUpdateTypes(t *testing.T) {
// for _, batcherFunc := range allBatcherFunctions {
// t.Run(batcherFunc.name, func(t *testing.T) {
// // Create test environment with real database and nodes
// testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 2, 8)
// defer cleanup()
// batcher := testData.Batcher
// testNodes := testData.Nodes
// ch := make(chan *tailcfg.MapResponse, 10)
// // Use real node ID from test data
// batcher.AddNode(testNodes[0].n.ID, ch, false, "zstd", tailcfg.CapabilityVersion(100))
// tests := []struct {
// name string
// changeSet change.ChangeSet
// expectData bool // whether we expect to receive data
// description string
// }{
// {
// name: "DERP change",
// changeSet: change.DERPMapResponse(),
// expectData: true,
// description: "DERP changes should generate map updates",
// },
// {
// name: "Node key expiry",
// changeSet: change.KeyExpiryFor(testNodes[1].n.ID),
// expectData: true,
// description: "Node key expiry with real node data",
// },
// {
// name: "Node new registration",
// changeSet: change.NodeAddedResponse(testNodes[1].n.ID),
// expectData: true,
// description: "New node registration with real data",
// },
// {
// name: "Full update",
// changeSet: change.FullUpdateResponse(),
// expectData: true,
// description: "Full updates with real node data",
// },
// {
// name: "Policy change",
// changeSet: change.PolicyChangeResponse(),
// expectData: true,
// description: "Policy updates with real node data",
// },
// }
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// t.Logf("Testing: %s", tt.description)
// // Clear any existing updates
// select {
// case <-ch:
// default:
// }
// batcher.AddWork(tt.changeSet)
// select {
// case data := <-ch:
// if !tt.expectData {
// t.Errorf("Unexpected update for %s: %d bytes", tt.name, len(data))
// } else {
// t.Logf("%s: received %d bytes", tt.name, len(data))
// // Validate update content when we have data
// if len(data) > 0 {
// if valid, updateType := validateUpdateContent(data); !valid {
// t.Errorf("Invalid update content for %s: %s", tt.name, updateType)
// } else {
// t.Logf("%s: valid update type: %s", tt.name, updateType)
// }
// } else {
// t.Logf("%s: empty update (may be expected for some node changes)", tt.name)
// }
// }
// case <-time.After(100 * time.Millisecond):
// if tt.expectData {
// t.Errorf("Expected update for %s (%s) but none received", tt.name, tt.description)
// } else {
// t.Logf("%s: no update (expected)", tt.name)
// }
// }
// })
// }
// })
// }
// }
// TestBatcherWorkQueueBatching tests that multiple changes get batched
// together and sent as a single update to reduce network overhead.
//
// Enhanced with real database test data, this test creates registered nodes
// and rapidly submits multiple types of changes including DERP updates and
// node changes. Due to the batching mechanism with BatchChangeDelay, these
// should be combined into fewer updates. This validates that the batching
// system works correctly with real node data and mixed change types.
func TestBatcherWorkQueueBatching(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create test environment with real database and nodes
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 2, 8)
defer cleanup()
batcher := testData.Batcher
testNodes := testData.Nodes
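// Register a dedicated buffered channel for the first node so the test
// observes exactly what the batcher emits for it.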
ch := make(chan *tailcfg.MapResponse, 10)
_ = batcher.AddNode(testNodes[0].n.ID, ch, tailcfg.CapabilityVersion(100), nil)
// Track update content for validation
var receivedUpdates []*tailcfg.MapResponse
// Add multiple changes rapidly to test batching
batcher.AddWork(change.DERPMap())
// Use a valid expiry time for testing since test nodes don't have expiry set
testExpiry := time.Now().Add(24 * time.Hour)
batcher.AddWork(change.KeyExpiryFor(testNodes[1].n.ID, testExpiry))
batcher.AddWork(change.DERPMap())
batcher.AddWork(change.NodeAdded(testNodes[1].n.ID))
batcher.AddWork(change.DERPMap())
// Collect updates with timeout
updateCount := 0
timeout := time.After(200 * time.Millisecond)
for {
select {
case data := <-ch:
updateCount++
receivedUpdates = append(receivedUpdates, data)
// Validate update content
if data != nil {
if valid, reason := validateUpdateContent(data); valid {
t.Logf("Update %d: valid", updateCount)
} else {
t.Logf("Update %d: invalid: %s", updateCount, reason)
}
} else {
t.Logf("Update %d: nil update", updateCount)
}
case <-timeout:
// Expected: 5 explicit changes + 1 initial from AddNode + 1 NodeOnline from wrapper = 7 updates
expectedUpdates := 7
t.Logf("Received %d updates from %d changes (expected %d)",
updateCount, 5, expectedUpdates)
if updateCount != expectedUpdates {
t.Errorf(
"Expected %d updates but received %d",
expectedUpdates,
updateCount,
)
}
// Validate that all updates have valid content
validUpdates := 0
for _, data := range receivedUpdates {
if data != nil {
if valid, _ := validateUpdateContent(data); valid {
validUpdates++
}
}
}
if validUpdates != updateCount {
t.Errorf("Expected all %d updates to be valid, but only %d were valid",
updateCount, validUpdates)
}
return
}
}
})
}
}
// TestBatcherWorkerChannelSafety tests that worker goroutines handle closed
// channels safely without panicking when processing work items.
//
// Enhanced with real database test data, this test creates rapid connect/disconnect
// cycles using registered nodes while simultaneously queuing real work items.
// This creates a race where workers might try to send to channels that have been
// closed by node removal. The test validates that the safeSend() method properly
// handles closed channels with real update workloads.
func TestBatcherWorkerChannelSafety(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create test environment with real database and nodes
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 1, 8)
defer cleanup()
batcher := testData.Batcher
testNode := &testData.Nodes[0]
var (
panics int
channelErrors int
invalidData int
mutex sync.Mutex
)
// Test rapid connect/disconnect with work generation
for i := range 50 {
func() {
defer func() {
if r := recover(); r != nil {
mutex.Lock()
panics++
mutex.Unlock()
t.Logf("Panic caught: %v", r)
}
}()
ch := make(chan *tailcfg.MapResponse, 5)
// Add node and immediately queue real work
_ = batcher.AddNode(testNode.n.ID, ch, tailcfg.CapabilityVersion(100), nil)
batcher.AddWork(change.DERPMap())
// Consumer goroutine to validate data and detect channel issues
go func() {
defer func() {
if r := recover(); r != nil {
mutex.Lock()
channelErrors++
mutex.Unlock()
t.Logf("Channel consumer panic: %v", r)
}
}()
for {
select {
case data, ok := <-ch:
if !ok {
// Channel was closed, which is expected
return
}
// Validate the data we received
if valid, reason := validateUpdateContent(data); !valid {
mutex.Lock()
invalidData++
mutex.Unlock()
t.Logf("Invalid data received: %s", reason)
}
case <-time.After(10 * time.Millisecond):
// Timeout waiting for data
return
}
}
}()
// Add node-specific work occasionally
if i%10 == 0 {
// Use a valid expiry time for testing since test nodes don't have expiry set
testExpiry := time.Now().Add(24 * time.Hour)
batcher.AddWork(change.KeyExpiryFor(testNode.n.ID, testExpiry))
}
// Rapid removal creates race between worker and removal
for range i % 3 {
runtime.Gosched() // Introduce timing variability
}
batcher.RemoveNode(testNode.n.ID, ch)
// Yield to allow workers to process and close channels
runtime.Gosched()
}()
}
mutex.Lock()
defer mutex.Unlock()
t.Logf(
"Worker safety test results: %d panics, %d channel errors, %d invalid data packets",
panics,
channelErrors,
invalidData,
)
// Test failure conditions
if panics > 0 {
t.Errorf("Worker channel safety failed with %d panics", panics)
}
if channelErrors > 0 {
t.Errorf("Channel handling failed with %d channel errors", channelErrors)
}
if invalidData > 0 {
t.Errorf("Data validation failed with %d invalid data packets", invalidData)
}
})
}
}
// TestBatcherConcurrentClients tests that concurrent connection lifecycle changes
// don't affect other stable clients' ability to receive updates.
//
// The test sets up real test data with multiple users and registered nodes,
// then creates stable clients and churning clients that rapidly connect and
// disconnect. Work is generated continuously during these connection churn cycles using
// real node data. The test validates that stable clients continue to function
// normally and receive proper updates despite the connection churn from other clients,
// ensuring system stability under concurrent load.
//
//nolint:gocyclo // complex concurrent test scenario
func TestBatcherConcurrentClients(t *testing.T) {
if testing.Short() {
t.Skip("Skipping concurrent client test in short mode")
}
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create comprehensive test environment with real data
testData, cleanup := setupBatcherWithTestData(
t,
batcherFunc.fn,
testUserCount,
testNodesPerUser,
8,
)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
// Create update tracker for monitoring all updates
tracker := newUpdateTracker()
// Set up stable clients using real node IDs
stableNodes := allNodes[:len(allNodes)/2] // Use first half as stable
stableChannels := make(map[types.NodeID]chan *tailcfg.MapResponse)
for i := range stableNodes {
node := &stableNodes[i]
ch := make(chan *tailcfg.MapResponse, normalBufferSize)
stableChannels[node.n.ID] = ch
_ = batcher.AddNode(node.n.ID, ch, tailcfg.CapabilityVersion(100), nil)
// Monitor updates for each stable client
go func(nodeID types.NodeID, channel chan *tailcfg.MapResponse) {
for {
select {
case data, ok := <-channel:
if !ok {
// Channel was closed, exit gracefully
return
}
if valid, reason := validateUpdateContent(data); valid {
tracker.recordUpdate(
nodeID,
1,
) // Use 1 as update size since we have MapResponse
} else {
t.Errorf("Invalid update received for stable node %d: %s", nodeID, reason)
}
case <-time.After(testTimeout):
return
}
}
}(node.n.ID, ch)
}
// Use remaining nodes for connection churn testing
churningNodes := allNodes[len(allNodes)/2:]
churningChannels := make(map[types.NodeID]chan *tailcfg.MapResponse)
var churningChannelsMutex sync.Mutex // Protect concurrent map access
var wg sync.WaitGroup
numCycles := 10 // Reduced for simpler test
panicCount := 0
var panicMutex sync.Mutex
// Track deadlock with timeout
done := make(chan struct{})
go func() {
defer close(done)
// Connection churn cycles - rapidly connect/disconnect to test concurrency safety
for i := range numCycles {
for j := range churningNodes {
node := &churningNodes[j]
wg.Add(2)
// Connect churning node
go func(nodeID types.NodeID) {
defer func() {
if r := recover(); r != nil {
panicMutex.Lock()
panicCount++
panicMutex.Unlock()
t.Logf("Panic in churning connect: %v", r)
}
wg.Done()
}()
ch := make(chan *tailcfg.MapResponse, smallBufferSize)
churningChannelsMutex.Lock()
churningChannels[nodeID] = ch
churningChannelsMutex.Unlock()
_ = batcher.AddNode(nodeID, ch, tailcfg.CapabilityVersion(100), nil)
// Consume updates to prevent blocking
go func() {
for {
select {
case data, ok := <-ch:
if !ok {
// Channel was closed, exit gracefully
return
}
if valid, _ := validateUpdateContent(data); valid {
tracker.recordUpdate(
nodeID,
1,
) // Use 1 as update size since we have MapResponse
}
case <-time.After(500 * time.Millisecond):
// Longer timeout to prevent premature exit during heavy load
return
}
}
}()
}(node.n.ID)
// Disconnect churning node
go func(nodeID types.NodeID) {
defer func() {
if r := recover(); r != nil {
panicMutex.Lock()
panicCount++
panicMutex.Unlock()
t.Logf("Panic in churning disconnect: %v", r)
}
wg.Done()
}()
for range i % 5 {
runtime.Gosched() // Introduce timing variability
}
churningChannelsMutex.Lock()
ch, exists := churningChannels[nodeID]
churningChannelsMutex.Unlock()
if exists {
batcher.RemoveNode(nodeID, ch)
}
}(node.n.ID)
}
// Generate various types of work during the churn cycles
if i%3 == 0 {
// DERP changes
batcher.AddWork(change.DERPMap())
}
if i%5 == 0 {
// Full updates using real node data
batcher.AddWork(change.FullUpdate())
}
if i%7 == 0 && len(allNodes) > 0 {
// Node-specific changes using real nodes
node := &allNodes[i%len(allNodes)]
// Use a valid expiry time for testing since test nodes don't have expiry set
testExpiry := time.Now().Add(24 * time.Hour)
batcher.AddWork(change.KeyExpiryFor(node.n.ID, testExpiry))
}
// Yield to allow some batching
runtime.Gosched()
}
wg.Wait()
}()
// Deadlock detection
select {
case <-done:
t.Logf("Connection churn cycles completed successfully")
case <-time.After(deadlockTimeout):
t.Error("Test timed out - possible deadlock detected")
return
}
// Yield to allow any in-flight updates to complete
runtime.Gosched()
// Validate results
panicMutex.Lock()
finalPanicCount := panicCount
panicMutex.Unlock()
allStats := tracker.getAllStats()
// Calculate expected vs actual updates
stableUpdateCount := 0
churningUpdateCount := 0
// Estimate the work generated during the churn cycles; actual delivery counts
// depend on batching and timing, so we log what we observe rather than assert exact numbers.
expectedDerpUpdates := (numCycles + 2) / 3
expectedFullUpdates := (numCycles + 4) / 5
expectedKeyUpdates := (numCycles + 6) / 7
totalGeneratedWork := expectedDerpUpdates + expectedFullUpdates + expectedKeyUpdates
t.Logf("Work generated: %d DERP + %d Full + %d KeyExpiry = %d total AddWork calls",
expectedDerpUpdates, expectedFullUpdates, expectedKeyUpdates, totalGeneratedWork)
for i := range stableNodes {
node := &stableNodes[i]
if stats, exists := allStats[node.n.ID]; exists {
stableUpdateCount += stats.TotalUpdates
t.Logf("Stable node %d: %d updates",
node.n.ID, stats.TotalUpdates)
}
// Verify stable clients are still connected
if !batcher.IsConnected(node.n.ID) {
t.Errorf("Stable node %d should still be connected", node.n.ID)
}
}
for i := range churningNodes {
node := &churningNodes[i]
if stats, exists := allStats[node.n.ID]; exists {
churningUpdateCount += stats.TotalUpdates
}
}
t.Logf("Total updates - Stable clients: %d, Churning clients: %d",
stableUpdateCount, churningUpdateCount)
t.Logf(
"Average per stable client: %.1f updates",
float64(stableUpdateCount)/float64(len(stableNodes)),
)
t.Logf("Panics during test: %d", finalPanicCount)
// Validate test success criteria
if finalPanicCount > 0 {
t.Errorf("Test failed with %d panics", finalPanicCount)
}
// Basic sanity check - stable clients should receive some updates
if stableUpdateCount == 0 {
t.Error("Stable clients received no updates - batcher may not be working")
}
// Verify all stable clients are still functional
for i := range stableNodes {
node := &stableNodes[i]
if !batcher.IsConnected(node.n.ID) {
t.Errorf("Stable node %d lost connection during racing", node.n.ID)
}
}
})
}
}
// TestBatcherFullPeerUpdates verifies that when multiple nodes are connected
// and we send a FullSet update, nodes receive the complete peer list.
func TestBatcherFullPeerUpdates(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create test environment with 3 nodes from same user (so they can be peers)
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 3, 10)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
t.Logf("Created %d nodes in database", len(allNodes))
// Connect nodes one at a time and wait for each to be connected
for i := range allNodes {
node := &allNodes[i]
_ = batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
t.Logf("Connected node %d (ID: %d)", i, node.n.ID)
// Wait for node to be connected
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.True(c, batcher.IsConnected(node.n.ID), "node should be connected")
}, time.Second, 10*time.Millisecond, "waiting for node connection")
}
// Wait for all NodeCameOnline events to be processed
t.Logf("Waiting for NodeCameOnline events to settle...")
assert.EventuallyWithT(t, func(c *assert.CollectT) {
for i := range allNodes {
assert.True(c, batcher.IsConnected(allNodes[i].n.ID), "all nodes should be connected")
}
}, 5*time.Second, 50*time.Millisecond, "waiting for all nodes to connect")
// Check how many peers each node should see
for i := range allNodes {
node := &allNodes[i]
peers := testData.State.ListPeers(node.n.ID)
t.Logf("Node %d should see %d peers from state", i, peers.Len())
}
// Send a full update - this should generate full peer lists
t.Logf("Sending FullSet update...")
batcher.AddWork(change.FullUpdate())
// Wait for FullSet work items to be processed
t.Logf("Waiting for FullSet to be processed...")
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// Check that some data is available in at least one channel
found := false
for i := range allNodes {
if len(allNodes[i].ch) > 0 {
found = true
break
}
}
assert.True(c, found, "no updates received yet")
}, 5*time.Second, 50*time.Millisecond, "waiting for FullSet updates")
// Check what each node receives - read multiple updates
totalUpdates := 0
foundFullUpdate := false
// Read all available updates for each node
for i := range allNodes {
nodeUpdates := 0
t.Logf("Reading updates for node %d:", i)
// Read up to 10 updates per node or until timeout/no more data
readLoop:
for updateNum := range 10 {
select {
case data := <-allNodes[i].ch:
nodeUpdates++
totalUpdates++
// Parse and examine the update - data is already a MapResponse
if data == nil {
t.Errorf("Node %d update %d: nil MapResponse", i, updateNum)
continue
}
updateType := "unknown"
if len(data.Peers) > 0 {
updateType = "FULL"
foundFullUpdate = true
} else if len(data.PeersChangedPatch) > 0 {
updateType = "PATCH"
} else if data.DERPMap != nil {
updateType = "DERP"
}
t.Logf(
" Update %d: %s - Peers=%d, PeersChangedPatch=%d, DERPMap=%v",
updateNum,
updateType,
len(data.Peers),
len(data.PeersChangedPatch),
data.DERPMap != nil,
)
if len(data.Peers) > 0 {
t.Logf(" Full peer list with %d peers", len(data.Peers))
for j, peer := range data.Peers[:min(3, len(data.Peers))] {
t.Logf(
" Peer %d: NodeID=%d, Online=%v",
j,
peer.ID,
peer.Online,
)
}
}
if len(data.PeersChangedPatch) > 0 {
t.Logf(" Patch update with %d changes", len(data.PeersChangedPatch))
for j, patch := range data.PeersChangedPatch[:min(3, len(data.PeersChangedPatch))] {
t.Logf(
" Patch %d: NodeID=%d, Online=%v",
j,
patch.NodeID,
patch.Online,
)
}
}
case <-time.After(500 * time.Millisecond):
// No more updates for this node; stop reading instead of waiting out the timeout repeatedly.
break readLoop
}
}
t.Logf("Node %d received %d updates", i, nodeUpdates)
}
t.Logf("Total updates received across all nodes: %d", totalUpdates)
if !foundFullUpdate {
t.Errorf("CRITICAL: No FULL updates received despite sending change.FullUpdateResponse()!")
t.Errorf(
"This confirms the bug - FullSet updates are not generating full peer responses",
)
}
})
}
}
// TestBatcherRapidReconnection reproduces the issue where nodes connecting with the same ID
// at the same time cause /debug/batcher to show nodes as disconnected when they should be connected.
// This specifically tests the multi-channel batcher implementation issue.
func TestBatcherRapidReconnection(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 3, 10)
defer cleanup()
batcher := testData.Batcher
allNodes := testData.Nodes
t.Logf("=== RAPID RECONNECTION TEST ===")
t.Logf("Testing rapid connect/disconnect with %d nodes", len(allNodes))
// Connect all nodes initially.
t.Logf("Connecting all nodes...")
for i := range allNodes {
node := &allNodes[i]
err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Fatalf("Failed to add node %d: %v", i, err)
}
}
// Wait for all connections to settle
assert.EventuallyWithT(t, func(c *assert.CollectT) {
for i := range allNodes {
assert.True(c, batcher.IsConnected(allNodes[i].n.ID), "node should be connected")
}
}, 5*time.Second, 50*time.Millisecond, "waiting for connections to settle")
// Rapid disconnect ALL nodes (simulating nodes going down).
t.Logf("Rapid disconnect all nodes...")
for i := range allNodes {
node := &allNodes[i]
removed := batcher.RemoveNode(node.n.ID, node.ch)
t.Logf("Node %d RemoveNode result: %t", i, removed)
}
// Rapid reconnect with NEW channels (simulating nodes coming back up).
t.Logf("Rapid reconnect with new channels...")
newChannels := make([]chan *tailcfg.MapResponse, len(allNodes))
for i := range allNodes {
node := &allNodes[i]
newChannels[i] = make(chan *tailcfg.MapResponse, 10)
err := batcher.AddNode(node.n.ID, newChannels[i], tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Errorf("Failed to reconnect node %d: %v", i, err)
}
}
// Wait for all reconnections to settle
assert.EventuallyWithT(t, func(c *assert.CollectT) {
for i := range allNodes {
assert.True(c, batcher.IsConnected(allNodes[i].n.ID), "node should be reconnected")
}
}, 5*time.Second, 50*time.Millisecond, "waiting for reconnections to settle")
// Check debug status after reconnection.
t.Logf("Checking debug status...")
debugInfo := batcher.Debug()
disconnectedCount := 0
for i := range allNodes {
node := &allNodes[i]
if info, exists := debugInfo[node.n.ID]; exists {
t.Logf("Node %d (ID %d): debug info = %+v", i, node.n.ID, info)
if !info.Connected {
disconnectedCount++
t.Logf("BUG REPRODUCED: Node %d shows as disconnected in debug but should be connected", i)
}
} else {
disconnectedCount++
t.Logf("Node %d missing from debug info entirely", i)
}
// Also check IsConnected method
if !batcher.IsConnected(node.n.ID) {
t.Logf("Node %d IsConnected() returns false", i)
}
}
if disconnectedCount > 0 {
t.Logf("ISSUE REPRODUCED: %d/%d nodes show as disconnected in debug", disconnectedCount, len(allNodes))
} else {
t.Logf("All nodes show as connected - working correctly")
}
// Test if "disconnected" nodes can actually receive updates.
t.Logf("Testing if nodes can receive updates despite debug status...")
// Send a change that should reach all nodes
batcher.AddWork(change.DERPMap())
receivedCount := 0
timeout := time.After(500 * time.Millisecond)
for i := range allNodes {
select {
case update := <-newChannels[i]:
if update != nil {
receivedCount++
t.Logf("Node %d received update successfully", i)
}
case <-timeout:
t.Logf("Node %d timed out waiting for update", i)
goto done
}
}
done:
t.Logf("Update delivery test: %d/%d nodes received updates", receivedCount, len(allNodes))
if receivedCount < len(allNodes) {
t.Logf("Some nodes failed to receive updates - confirming the issue")
}
})
}
}
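// TestBatcherMultiConnection verifies that a node with multiple concurrent
// connections receives each update on every connection, and that removing one
// connection leaves the remaining connections functional.
//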
//nolint:gocyclo // complex multi-connection test scenario
func TestBatcherMultiConnection(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 2, 10)
defer cleanup()
batcher := testData.Batcher
node1 := &testData.Nodes[0]
node2 := &testData.Nodes[1]
t.Logf("=== MULTI-CONNECTION TEST ===")
// Connect first node with initial connection.
t.Logf("Connecting node 1 with first connection...")
err := batcher.AddNode(node1.n.ID, node1.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Fatalf("Failed to add node1: %v", err)
}
// Connect second node for comparison
err = batcher.AddNode(node2.n.ID, node2.ch, tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Fatalf("Failed to add node2: %v", err)
}
// Wait for initial connections
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.True(c, batcher.IsConnected(node1.n.ID), "node1 should be connected")
assert.True(c, batcher.IsConnected(node2.n.ID), "node2 should be connected")
}, time.Second, 10*time.Millisecond, "waiting for initial connections")
// Add second connection for node1 (multi-connection scenario).
t.Logf("Adding second connection for node 1...")
secondChannel := make(chan *tailcfg.MapResponse, 10)
err = batcher.AddNode(node1.n.ID, secondChannel, tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Fatalf("Failed to add second connection for node1: %v", err)
}
// Yield to allow connection to be processed
runtime.Gosched()
// Add third connection for node1.
t.Logf("Adding third connection for node 1...")
thirdChannel := make(chan *tailcfg.MapResponse, 10)
err = batcher.AddNode(node1.n.ID, thirdChannel, tailcfg.CapabilityVersion(100), nil)
if err != nil {
t.Fatalf("Failed to add third connection for node1: %v", err)
}
// Yield to allow connection to be processed
runtime.Gosched()
// Verify debug status shows correct connection count.
t.Logf("Verifying debug status shows multiple connections...")
debugInfo := batcher.Debug()
if info, exists := debugInfo[node1.n.ID]; exists {
t.Logf("Node1 debug info: %+v", info)
if info.ActiveConnections != 3 {
t.Errorf("Node1 should have 3 active connections, got %d", info.ActiveConnections)
} else {
t.Logf("SUCCESS: Node1 correctly shows 3 active connections")
}
if !info.Connected {
t.Errorf("Node1 should show as connected with 3 active connections")
}
}
if info, exists := debugInfo[node2.n.ID]; exists {
if info.ActiveConnections != 1 {
t.Errorf("Node2 should have 1 active connection, got %d", info.ActiveConnections)
}
}
// Send update and verify ALL connections receive it.
t.Logf("Testing update distribution to all connections...")
// Clear any existing updates from all channels
clearChannel := func(ch chan *tailcfg.MapResponse) {
for {
select {
case <-ch:
// drain
default:
return
}
}
}
clearChannel(node1.ch)
clearChannel(secondChannel)
clearChannel(thirdChannel)
clearChannel(node2.ch)
// Send a change notification from node2 (so node1 should receive it on all connections)
testChangeSet := change.NodeAdded(node2.n.ID)
batcher.AddWork(testChangeSet)
// Wait for updates to propagate to at least one channel
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.Positive(c, len(node1.ch)+len(secondChannel)+len(thirdChannel), "should have received updates")
}, 5*time.Second, 50*time.Millisecond, "waiting for updates to propagate")
// Verify all three connections for node1 receive the update
connection1Received := false
connection2Received := false
connection3Received := false
select {
case mapResp := <-node1.ch:
connection1Received = (mapResp != nil)
t.Logf("Node1 connection 1 received update: %t", connection1Received)
case <-time.After(500 * time.Millisecond):
t.Errorf("Node1 connection 1 did not receive update")
}
select {
case mapResp := <-secondChannel:
connection2Received = (mapResp != nil)
t.Logf("Node1 connection 2 received update: %t", connection2Received)
case <-time.After(500 * time.Millisecond):
t.Errorf("Node1 connection 2 did not receive update")
}
select {
case mapResp := <-thirdChannel:
connection3Received = (mapResp != nil)
t.Logf("Node1 connection 3 received update: %t", connection3Received)
case <-time.After(500 * time.Millisecond):
t.Errorf("Node1 connection 3 did not receive update")
}
if connection1Received && connection2Received && connection3Received {
t.Logf("SUCCESS: All three connections for node1 received the update")
} else {
t.Errorf("FAILURE: Multi-connection broadcast failed - conn1: %t, conn2: %t, conn3: %t",
connection1Received, connection2Received, connection3Received)
}
// Test connection removal and verify remaining connections still work.
t.Logf("Testing connection removal...")
// Remove the second connection
removed := batcher.RemoveNode(node1.n.ID, secondChannel)
if !removed {
t.Errorf("Failed to remove second connection for node1")
}
// Yield to allow removal to be processed
runtime.Gosched()
// Verify debug status shows 2 connections now
debugInfo2 := batcher.Debug()
if info, exists := debugInfo2[node1.n.ID]; exists {
if info.ActiveConnections != 2 {
t.Errorf("Node1 should have 2 active connections after removal, got %d", info.ActiveConnections)
} else {
t.Logf("SUCCESS: Node1 correctly shows 2 active connections after removal")
}
}
// Send another update and verify remaining connections still work
clearChannel(node1.ch)
clearChannel(thirdChannel)
testChangeSet2 := change.NodeAdded(node2.n.ID)
batcher.AddWork(testChangeSet2)
// Wait for updates to propagate to remaining channels
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.Positive(c, len(node1.ch)+len(thirdChannel), "should have received updates")
}, 5*time.Second, 50*time.Millisecond, "waiting for updates to propagate")
// Verify remaining connections still receive updates
remaining1Received := false
remaining3Received := false
select {
case mapResp := <-node1.ch:
remaining1Received = (mapResp != nil)
case <-time.After(500 * time.Millisecond):
t.Errorf("Node1 connection 1 did not receive update after removal")
}
select {
case mapResp := <-thirdChannel:
remaining3Received = (mapResp != nil)
case <-time.After(500 * time.Millisecond):
t.Errorf("Node1 connection 3 did not receive update after removal")
}
if remaining1Received && remaining3Received {
t.Logf("SUCCESS: Remaining connections still receive updates after removal")
} else {
t.Errorf("FAILURE: Remaining connections failed to receive updates - conn1: %t, conn3: %t",
remaining1Received, remaining3Received)
}
// Drain secondChannel of any messages received before removal
// (the test wrapper sends NodeOffline before removal, which may have reached this channel)
clearChannel(secondChannel)
// Verify second channel no longer receives new updates after being removed
select {
case <-secondChannel:
t.Errorf("Removed connection still received update - this should not happen")
case <-time.After(100 * time.Millisecond):
t.Logf("SUCCESS: Removed connection correctly no longer receives updates")
}
})
}
}
// TestNodeDeletedWhileChangesPending reproduces issue #2924 where deleting a node
// from state while there are pending changes for that node in the batcher causes
// "node not found" errors. The race condition occurs when:
// 1. Node is connected and changes are queued for it
// 2. Node is deleted from state (NodeStore) but not from batcher
// 3. Batcher worker tries to generate map response for deleted node
// 4. Mapper fails to find node in state, causing repeated "node not found" errors.
func TestNodeDeletedWhileChangesPending(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
// Create test environment with 3 nodes
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 3, normalBufferSize)
defer cleanup()
batcher := testData.Batcher
st := testData.State
node1 := &testData.Nodes[0]
node2 := &testData.Nodes[1]
node3 := &testData.Nodes[2]
t.Logf("Testing issue #2924: Node1=%d, Node2=%d, Node3=%d",
node1.n.ID, node2.n.ID, node3.n.ID)
// Helper to drain channels
drainCh := func(ch chan *tailcfg.MapResponse) {
for {
select {
case <-ch:
// drain
default:
return
}
}
}
// Start update consumers for all nodes
node1.start()
node2.start()
node3.start()
defer node1.cleanup()
defer node2.cleanup()
defer node3.cleanup()
// Connect all nodes to the batcher
require.NoError(t, batcher.AddNode(node1.n.ID, node1.ch, tailcfg.CapabilityVersion(100), nil))
require.NoError(t, batcher.AddNode(node2.n.ID, node2.ch, tailcfg.CapabilityVersion(100), nil))
require.NoError(t, batcher.AddNode(node3.n.ID, node3.ch, tailcfg.CapabilityVersion(100), nil))
// Wait for all nodes to be connected
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.True(c, batcher.IsConnected(node1.n.ID), "node1 should be connected")
assert.True(c, batcher.IsConnected(node2.n.ID), "node2 should be connected")
assert.True(c, batcher.IsConnected(node3.n.ID), "node3 should be connected")
}, 5*time.Second, 50*time.Millisecond, "waiting for nodes to connect")
// Get initial work errors count
lfb := unwrapBatcher(batcher)
initialWorkErrors := lfb.WorkErrors()
t.Logf("Initial work errors: %d", initialWorkErrors)
// Clear channels to prepare for the test
drainCh(node1.ch)
drainCh(node2.ch)
drainCh(node3.ch)
// Get node view for deletion
nodeToDelete, ok := st.GetNodeByID(node3.n.ID)
require.True(t, ok, "node3 should exist in state")
// Delete the node from state - this returns a NodeRemoved change
// In production, this change is sent to batcher via app.Change()
nodeChange, err := st.DeleteNode(nodeToDelete)
require.NoError(t, err, "should be able to delete node from state")
t.Logf("Deleted node %d from state, change: %s", node3.n.ID, nodeChange.Reason)
// Verify node is deleted from state
_, exists := st.GetNodeByID(node3.n.ID)
require.False(t, exists, "node3 should be deleted from state")
// Send the NodeRemoved change to batcher (this is what app.Change() does)
// With the fix, this should clean up node3 from batcher's internal state
batcher.AddWork(nodeChange)
// Wait for the batcher to process the removal and clean up the node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.False(c, batcher.IsConnected(node3.n.ID), "node3 should be disconnected from batcher")
}, 5*time.Second, 50*time.Millisecond, "waiting for node removal to be processed")
t.Logf("Node %d connected in batcher after NodeRemoved: %v", node3.n.ID, batcher.IsConnected(node3.n.ID))
// Now queue changes that would have caused errors before the fix
// With the fix, these should NOT cause "node not found" errors
// because node3 was cleaned up when NodeRemoved was processed
batcher.AddWork(change.FullUpdate())
batcher.AddWork(change.PolicyChange())
// Wait for work to be processed and verify no errors occurred
// With the fix, no new errors should occur because the deleted node
// was cleaned up from batcher state when NodeRemoved was processed
assert.EventuallyWithT(t, func(c *assert.CollectT) {
finalWorkErrors := lfb.WorkErrors()
newErrors := finalWorkErrors - initialWorkErrors
assert.Zero(c, newErrors, "Fix for #2924: should have no work errors after node deletion")
}, 5*time.Second, 100*time.Millisecond, "waiting for work processing to complete without errors")
// Verify remaining nodes still work correctly
drainCh(node1.ch)
drainCh(node2.ch)
batcher.AddWork(change.NodeAdded(node1.n.ID))
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// Node 1 and 2 should receive updates
updates1 := atomic.LoadInt64(&node1.updateCount)
updates2 := atomic.LoadInt64(&node2.updateCount)
assert.Positive(c, updates1, "node1 should have received updates")
assert.Positive(c, updates2, "node2 should have received updates")
}, 5*time.Second, 100*time.Millisecond, "waiting for remaining nodes to receive updates")
})
}
}
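// TestRemoveNodeChannelAlreadyRemoved exercises RemoveNode when the given
// channel has already been detached from the node's connection list, covering
// both the last-connection and remaining-connection cases.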
func TestRemoveNodeChannelAlreadyRemoved(t *testing.T) {
for _, batcherFunc := range allBatcherFunctions {
t.Run(batcherFunc.name, func(t *testing.T) {
t.Run("marks disconnected when removed channel was last active connection", func(t *testing.T) {
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 1, normalBufferSize)
defer cleanup()
lfb := unwrapBatcher(testData.Batcher)
nodeID := testData.Nodes[0].n.ID
ch := make(chan *tailcfg.MapResponse, normalBufferSize)
require.NoError(t, lfb.AddNode(nodeID, ch, tailcfg.CapabilityVersion(100), nil))
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.True(c, lfb.IsConnected(nodeID), "node should be connected after AddNode")
}, 5*time.Second, 50*time.Millisecond, "waiting for node to be connected")
nodeConn, exists := lfb.nodes.Load(nodeID)
require.True(t, exists, "node connection should exist")
require.True(t, nodeConn.removeConnectionByChannel(ch), "manual channel removal should succeed")
removed := lfb.RemoveNode(nodeID, ch)
assert.False(t, removed, "RemoveNode should report no remaining active connections")
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.False(c, lfb.IsConnected(nodeID), "node should be disconnected after last connection is gone")
}, 5*time.Second, 50*time.Millisecond, "waiting for node to be disconnected")
close(ch)
})
t.Run("keeps connected when another connection is still active", func(t *testing.T) {
testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 1, normalBufferSize)
defer cleanup()
lfb := unwrapBatcher(testData.Batcher)
nodeID := testData.Nodes[0].n.ID
ch1 := make(chan *tailcfg.MapResponse, normalBufferSize)
ch2 := make(chan *tailcfg.MapResponse, normalBufferSize)
require.NoError(t, lfb.AddNode(nodeID, ch1, tailcfg.CapabilityVersion(100), nil))
require.NoError(t, lfb.AddNode(nodeID, ch2, tailcfg.CapabilityVersion(100), nil))
assert.EventuallyWithT(t, func(c *assert.CollectT) {
assert.True(c, lfb.IsConnected(nodeID), "node should be connected after AddNode")
}, 5*time.Second, 50*time.Millisecond, "waiting for node to be connected")
nodeConn, exists := lfb.nodes.Load(nodeID)
require.True(t, exists, "node connection should exist")
require.True(t, nodeConn.removeConnectionByChannel(ch1), "manual channel removal should succeed")
removed := lfb.RemoveNode(nodeID, ch1)
assert.True(t, removed, "RemoveNode should report node still has active connections")
assert.True(t, lfb.IsConnected(nodeID), "node should still be connected while another connection exists")
assert.Equal(t, 1, nodeConn.getActiveConnectionCount(), "exactly one active connection should remain")
close(ch1)
})
})
}
}
// unwrapBatcher extracts the underlying *Batcher from the test wrapper.
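// Tests that need internal state (e.g. WorkErrors or the nodes map) use this
// to bypass the wrapper's NodeOnline/NodeOffline bookkeeping.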
func unwrapBatcher(b *testBatcherWrapper) *Batcher {
return b.Batcher
}
================================================
FILE: hscontrol/mapper/batcher_unit_test.go
================================================
package mapper
// Unit tests for batcher components that do NOT require database setup.
// These tests exercise connectionEntry, multiChannelNodeConn, computePeerDiff,
// updateSentPeers, generateMapResponse branching, and handleNodeChange in isolation.
import (
"errors"
"fmt"
"runtime"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/puzpuzpuz/xsync/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
)
// ============================================================================
// Mock Infrastructure
// ============================================================================
// mockNodeConnection implements nodeConnection for isolated unit testing
// of generateMapResponse and handleNodeChange without a real database.
type mockNodeConnection struct {
id types.NodeID
ver tailcfg.CapabilityVersion
// sendFn allows injecting custom send behavior.
// If nil, sends are recorded and succeed.
sendFn func(*tailcfg.MapResponse) error
// sent records all successful sends for assertion.
sent []*tailcfg.MapResponse
mu sync.Mutex
// Peer tracking
peers *xsync.Map[tailcfg.NodeID, struct{}]
}
func newMockNodeConnection(id types.NodeID) *mockNodeConnection {
return &mockNodeConnection{
id: id,
ver: tailcfg.CapabilityVersion(100),
peers: xsync.NewMap[tailcfg.NodeID, struct{}](),
}
}
// withSendError configures the mock to return the given error on send.
func (m *mockNodeConnection) withSendError(err error) *mockNodeConnection {
m.sendFn = func(_ *tailcfg.MapResponse) error { return err }
return m
}
func (m *mockNodeConnection) nodeID() types.NodeID { return m.id }
func (m *mockNodeConnection) version() tailcfg.CapabilityVersion { return m.ver }
func (m *mockNodeConnection) send(data *tailcfg.MapResponse) error {
if m.sendFn != nil {
return m.sendFn(data)
}
m.mu.Lock()
m.sent = append(m.sent, data)
m.mu.Unlock()
return nil
}
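// computePeerDiff returns the peers previously recorded as sent that are
// absent from currentPeers, i.e. the peers the client should now remove.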
func (m *mockNodeConnection) computePeerDiff(currentPeers []tailcfg.NodeID) []tailcfg.NodeID {
currentSet := make(map[tailcfg.NodeID]struct{}, len(currentPeers))
for _, id := range currentPeers {
currentSet[id] = struct{}{}
}
var removed []tailcfg.NodeID
m.peers.Range(func(id tailcfg.NodeID, _ struct{}) bool {
if _, exists := currentSet[id]; !exists {
removed = append(removed, id)
}
return true
})
return removed
}
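// updateSentPeers mirrors the batcher's peer bookkeeping: a non-nil Peers list
// replaces the tracked set, while PeersChanged and PeersRemoved mutate it
// incrementally.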
func (m *mockNodeConnection) updateSentPeers(resp *tailcfg.MapResponse) {
if resp == nil {
return
}
if resp.Peers != nil {
m.peers.Clear()
for _, peer := range resp.Peers {
m.peers.Store(peer.ID, struct{}{})
}
}
for _, peer := range resp.PeersChanged {
m.peers.Store(peer.ID, struct{}{})
}
for _, id := range resp.PeersRemoved {
m.peers.Delete(id)
}
}
// getSent returns a thread-safe copy of all sent responses.
func (m *mockNodeConnection) getSent() []*tailcfg.MapResponse {
m.mu.Lock()
defer m.mu.Unlock()
return append([]*tailcfg.MapResponse{}, m.sent...)
}
// ============================================================================
// Test Helpers
// ============================================================================
// testMapResponse creates a minimal valid MapResponse for testing.
func testMapResponse() *tailcfg.MapResponse {
now := time.Now()
return &tailcfg.MapResponse{
ControlTime: &now,
}
}
// testMapResponseWithPeers creates a MapResponse with the given peer IDs.
func testMapResponseWithPeers(peerIDs ...tailcfg.NodeID) *tailcfg.MapResponse {
resp := testMapResponse()
resp.Peers = make([]*tailcfg.Node, len(peerIDs))
for i, id := range peerIDs {
resp.Peers[i] = &tailcfg.Node{ID: id}
}
return resp
}
// ids is a convenience for creating a slice of tailcfg.NodeID.
func ids(nodeIDs ...tailcfg.NodeID) []tailcfg.NodeID {
return nodeIDs
}
// expectReceive asserts that a message arrives on the channel within 100ms.
func expectReceive(t *testing.T, ch <-chan *tailcfg.MapResponse, msg string) *tailcfg.MapResponse {
t.Helper()
const timeout = 100 * time.Millisecond
select {
case data := <-ch:
return data
case <-time.After(timeout):
t.Fatalf("expected to receive on channel within %v: %s", timeout, msg)
return nil
}
}
// expectNoReceive asserts that no message arrives within timeout.
func expectNoReceive(t *testing.T, ch <-chan *tailcfg.MapResponse, timeout time.Duration, msg string) {
t.Helper()
select {
case data := <-ch:
t.Fatalf("expected no receive but got %+v: %s", data, msg)
case <-time.After(timeout):
// Expected
}
}
// makeConnectionEntry creates a connectionEntry with the given channel.
func makeConnectionEntry(id string, ch chan<- *tailcfg.MapResponse) *connectionEntry {
entry := &connectionEntry{
id: id,
c: ch,
version: tailcfg.CapabilityVersion(100),
created: time.Now(),
}
entry.lastUsed.Store(time.Now().Unix())
return entry
}
// ============================================================================
// connectionEntry.send() Tests
// ============================================================================
func TestConnectionEntry_SendSuccess(t *testing.T) {
ch := make(chan *tailcfg.MapResponse, 1)
entry := makeConnectionEntry("test-conn", ch)
data := testMapResponse()
beforeSend := time.Now().Unix()
err := entry.send(data)
require.NoError(t, err)
assert.GreaterOrEqual(t, entry.lastUsed.Load(), beforeSend,
"lastUsed should be updated after successful send")
// Verify data was actually sent
received := expectReceive(t, ch, "data should be on channel")
assert.Equal(t, data, received)
}
func TestConnectionEntry_SendNilData(t *testing.T) {
ch := make(chan *tailcfg.MapResponse, 1)
entry := makeConnectionEntry("test-conn", ch)
err := entry.send(nil)
require.NoError(t, err, "nil data should return nil error")
expectNoReceive(t, ch, 10*time.Millisecond, "nil data should not be sent to channel")
}
func TestConnectionEntry_SendTimeout(t *testing.T) {
// Unbuffered channel with no reader = always blocks
ch := make(chan *tailcfg.MapResponse)
entry := makeConnectionEntry("test-conn", ch)
data := testMapResponse()
start := time.Now()
err := entry.send(data)
elapsed := time.Since(start)
require.ErrorIs(t, err, ErrConnectionSendTimeout)
assert.GreaterOrEqual(t, elapsed, 40*time.Millisecond,
"should wait approximately 50ms before timeout")
}
func TestConnectionEntry_SendClosed(t *testing.T) {
ch := make(chan *tailcfg.MapResponse, 1)
entry := makeConnectionEntry("test-conn", ch)
// Mark as closed before sending
entry.closed.Store(true)
err := entry.send(testMapResponse())
require.ErrorIs(t, err, errConnectionClosed)
expectNoReceive(t, ch, 10*time.Millisecond,
"closed entry should not send data to channel")
}
func TestConnectionEntry_SendUpdatesLastUsed(t *testing.T) {
ch := make(chan *tailcfg.MapResponse, 1)
entry := makeConnectionEntry("test-conn", ch)
// Set lastUsed to a past time
pastTime := time.Now().Add(-1 * time.Hour).Unix()
entry.lastUsed.Store(pastTime)
err := entry.send(testMapResponse())
require.NoError(t, err)
assert.Greater(t, entry.lastUsed.Load(), pastTime,
"lastUsed should be updated to current time after send")
}
// ============================================================================
// multiChannelNodeConn.send() Tests
// ============================================================================
func TestMultiChannelSend_AllSuccess(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// Create 3 buffered channels (all will succeed)
channels := make([]chan *tailcfg.MapResponse, 3)
for i := range channels {
channels[i] = make(chan *tailcfg.MapResponse, 1)
mc.addConnection(makeConnectionEntry(fmt.Sprintf("conn-%d", i), channels[i]))
}
data := testMapResponse()
err := mc.send(data)
require.NoError(t, err)
assert.Equal(t, 3, mc.getActiveConnectionCount(),
"all connections should remain active after success")
// Verify all channels received the data
for i, ch := range channels {
received := expectReceive(t, ch,
fmt.Sprintf("channel %d should receive data", i))
assert.Equal(t, data, received)
}
}
func TestMultiChannelSend_PartialFailure(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// 2 buffered channels (will succeed) + 1 unbuffered (will timeout)
goodCh1 := make(chan *tailcfg.MapResponse, 1)
goodCh2 := make(chan *tailcfg.MapResponse, 1)
badCh := make(chan *tailcfg.MapResponse) // unbuffered, no reader
mc.addConnection(makeConnectionEntry("good-1", goodCh1))
mc.addConnection(makeConnectionEntry("bad", badCh))
mc.addConnection(makeConnectionEntry("good-2", goodCh2))
err := mc.send(testMapResponse())
require.NoError(t, err, "should succeed if at least one connection works")
assert.Equal(t, 2, mc.getActiveConnectionCount(),
"failed connection should be removed")
// Good channels should have received data
expectReceive(t, goodCh1, "good-1 should receive")
expectReceive(t, goodCh2, "good-2 should receive")
}
func TestMultiChannelSend_AllFail(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// All unbuffered channels with no readers
for i := range 3 {
ch := make(chan *tailcfg.MapResponse) // unbuffered
mc.addConnection(makeConnectionEntry(fmt.Sprintf("bad-%d", i), ch))
}
err := mc.send(testMapResponse())
require.Error(t, err, "should return error when all connections fail")
assert.Equal(t, 0, mc.getActiveConnectionCount(),
"all failed connections should be removed")
}
func TestMultiChannelSend_ZeroConnections(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
err := mc.send(testMapResponse())
require.NoError(t, err,
"sending to node with 0 connections should succeed silently (rapid reconnection scenario)")
}
func TestMultiChannelSend_NilData(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
ch := make(chan *tailcfg.MapResponse, 1)
mc.addConnection(makeConnectionEntry("conn", ch))
err := mc.send(nil)
require.NoError(t, err, "nil data should return nil immediately")
expectNoReceive(t, ch, 10*time.Millisecond, "nil data should not be sent")
}
func TestMultiChannelSend_FailedConnectionRemoved(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
goodCh := make(chan *tailcfg.MapResponse, 10) // large buffer
badCh := make(chan *tailcfg.MapResponse) // unbuffered, will timeout
mc.addConnection(makeConnectionEntry("good", goodCh))
mc.addConnection(makeConnectionEntry("bad", badCh))
assert.Equal(t, 2, mc.getActiveConnectionCount())
// First send: bad connection removed
err := mc.send(testMapResponse())
require.NoError(t, err)
assert.Equal(t, 1, mc.getActiveConnectionCount())
// Second send: only good connection remains, should succeed
err = mc.send(testMapResponse())
require.NoError(t, err)
assert.Equal(t, 1, mc.getActiveConnectionCount())
}
func TestMultiChannelSend_UpdateCount(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
ch := make(chan *tailcfg.MapResponse, 10)
mc.addConnection(makeConnectionEntry("conn", ch))
assert.Equal(t, int64(0), mc.updateCount.Load())
_ = mc.send(testMapResponse())
assert.Equal(t, int64(1), mc.updateCount.Load())
_ = mc.send(testMapResponse())
assert.Equal(t, int64(2), mc.updateCount.Load())
}
// ============================================================================
// multiChannelNodeConn.close() Tests
// ============================================================================
func TestMultiChannelClose_MarksEntriesClosed(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
entries := make([]*connectionEntry, 3)
for i := range entries {
ch := make(chan *tailcfg.MapResponse, 1)
entries[i] = makeConnectionEntry(fmt.Sprintf("conn-%d", i), ch)
mc.addConnection(entries[i])
}
mc.close()
for i, entry := range entries {
assert.True(t, entry.closed.Load(),
"entry %d should be marked as closed", i)
}
}
func TestMultiChannelClose_PreventsSendPanic(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
ch := make(chan *tailcfg.MapResponse, 1)
entry := makeConnectionEntry("conn", ch)
mc.addConnection(entry)
mc.close()
// After close, connectionEntry.send should return errConnectionClosed
// (not panic on send to closed channel)
err := entry.send(testMapResponse())
require.ErrorIs(t, err, errConnectionClosed,
"send after close should return errConnectionClosed, not panic")
}
// ============================================================================
// multiChannelNodeConn connection management Tests
// ============================================================================
func TestMultiChannelNodeConn_AddRemoveConnections(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
ch1 := make(chan *tailcfg.MapResponse, 1)
ch2 := make(chan *tailcfg.MapResponse, 1)
ch3 := make(chan *tailcfg.MapResponse, 1)
// Add connections
mc.addConnection(makeConnectionEntry("c1", ch1))
assert.Equal(t, 1, mc.getActiveConnectionCount())
assert.True(t, mc.hasActiveConnections())
mc.addConnection(makeConnectionEntry("c2", ch2))
mc.addConnection(makeConnectionEntry("c3", ch3))
assert.Equal(t, 3, mc.getActiveConnectionCount())
// Remove by channel pointer
assert.True(t, mc.removeConnectionByChannel(ch2))
assert.Equal(t, 2, mc.getActiveConnectionCount())
// Remove non-existent channel
nonExistentCh := make(chan *tailcfg.MapResponse)
assert.False(t, mc.removeConnectionByChannel(nonExistentCh))
assert.Equal(t, 2, mc.getActiveConnectionCount())
// Remove remaining
assert.True(t, mc.removeConnectionByChannel(ch1))
assert.True(t, mc.removeConnectionByChannel(ch3))
assert.Equal(t, 0, mc.getActiveConnectionCount())
assert.False(t, mc.hasActiveConnections())
}
func TestMultiChannelNodeConn_Version(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// No connections - version should be 0
assert.Equal(t, tailcfg.CapabilityVersion(0), mc.version())
// Add connection with version 100
ch := make(chan *tailcfg.MapResponse, 1)
entry := makeConnectionEntry("conn", ch)
entry.version = tailcfg.CapabilityVersion(100)
mc.addConnection(entry)
assert.Equal(t, tailcfg.CapabilityVersion(100), mc.version())
}
// ============================================================================
// computePeerDiff Tests
// ============================================================================
func TestComputePeerDiff(t *testing.T) {
tests := []struct {
name string
tracked []tailcfg.NodeID // peers previously sent to client
current []tailcfg.NodeID // peers visible now
wantRemoved []tailcfg.NodeID // expected removed peers
}{
{
name: "no_changes",
tracked: ids(1, 2, 3),
current: ids(1, 2, 3),
wantRemoved: nil,
},
{
name: "one_removed",
tracked: ids(1, 2, 3),
current: ids(1, 3),
wantRemoved: ids(2),
},
{
name: "multiple_removed",
tracked: ids(1, 2, 3, 4, 5),
current: ids(2, 4),
wantRemoved: ids(1, 3, 5),
},
{
name: "all_removed",
tracked: ids(1, 2, 3),
current: nil,
wantRemoved: ids(1, 2, 3),
},
{
name: "peers_added_no_removal",
tracked: ids(1),
current: ids(1, 2, 3),
wantRemoved: nil,
},
{
name: "empty_tracked",
tracked: nil,
current: ids(1, 2, 3),
wantRemoved: nil,
},
{
name: "both_empty",
tracked: nil,
current: nil,
wantRemoved: nil,
},
{
name: "disjoint_sets",
tracked: ids(1, 2, 3),
current: ids(4, 5, 6),
wantRemoved: ids(1, 2, 3),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// Populate tracked peers
for _, id := range tt.tracked {
mc.lastSentPeers.Store(id, struct{}{})
}
got := mc.computePeerDiff(tt.current)
assert.ElementsMatch(t, tt.wantRemoved, got,
"removed peers should match expected")
})
}
}
// ============================================================================
// updateSentPeers Tests
// ============================================================================
func TestUpdateSentPeers(t *testing.T) {
t.Run("full_peer_list_replaces_all", func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// Pre-populate with old peers
mc.lastSentPeers.Store(tailcfg.NodeID(100), struct{}{})
mc.lastSentPeers.Store(tailcfg.NodeID(200), struct{}{})
// Send full peer list
mc.updateSentPeers(testMapResponseWithPeers(1, 2, 3))
// Old peers should be gone
_, exists := mc.lastSentPeers.Load(tailcfg.NodeID(100))
assert.False(t, exists, "old peer 100 should be cleared")
// New peers should be tracked
for _, id := range ids(1, 2, 3) {
_, exists := mc.lastSentPeers.Load(id)
assert.True(t, exists, "peer %d should be tracked", id)
}
})
t.Run("incremental_add_via_PeersChanged", func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
mc.lastSentPeers.Store(tailcfg.NodeID(1), struct{}{})
resp := testMapResponse()
resp.PeersChanged = []*tailcfg.Node{{ID: 2}, {ID: 3}}
mc.updateSentPeers(resp)
// All three should be tracked
for _, id := range ids(1, 2, 3) {
_, exists := mc.lastSentPeers.Load(id)
assert.True(t, exists, "peer %d should be tracked", id)
}
})
t.Run("incremental_remove_via_PeersRemoved", func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
mc.lastSentPeers.Store(tailcfg.NodeID(1), struct{}{})
mc.lastSentPeers.Store(tailcfg.NodeID(2), struct{}{})
mc.lastSentPeers.Store(tailcfg.NodeID(3), struct{}{})
resp := testMapResponse()
resp.PeersRemoved = ids(2)
mc.updateSentPeers(resp)
_, exists1 := mc.lastSentPeers.Load(tailcfg.NodeID(1))
_, exists2 := mc.lastSentPeers.Load(tailcfg.NodeID(2))
_, exists3 := mc.lastSentPeers.Load(tailcfg.NodeID(3))
assert.True(t, exists1, "peer 1 should remain")
assert.False(t, exists2, "peer 2 should be removed")
assert.True(t, exists3, "peer 3 should remain")
})
t.Run("nil_response_is_noop", func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
mc.lastSentPeers.Store(tailcfg.NodeID(1), struct{}{})
mc.updateSentPeers(nil)
_, exists := mc.lastSentPeers.Load(tailcfg.NodeID(1))
assert.True(t, exists, "nil response should not change tracked peers")
})
t.Run("full_then_incremental_sequence", func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// Step 1: Full peer list
mc.updateSentPeers(testMapResponseWithPeers(1, 2, 3))
// Step 2: Add peer 4
resp := testMapResponse()
resp.PeersChanged = []*tailcfg.Node{{ID: 4}}
mc.updateSentPeers(resp)
// Step 3: Remove peer 2
resp2 := testMapResponse()
resp2.PeersRemoved = ids(2)
mc.updateSentPeers(resp2)
// Should have 1, 3, 4
for _, id := range ids(1, 3, 4) {
_, exists := mc.lastSentPeers.Load(id)
assert.True(t, exists, "peer %d should be tracked", id)
}
_, exists := mc.lastSentPeers.Load(tailcfg.NodeID(2))
assert.False(t, exists, "peer 2 should have been removed")
})
t.Run("empty_full_peer_list_clears_all", func(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
mc.lastSentPeers.Store(tailcfg.NodeID(1), struct{}{})
mc.lastSentPeers.Store(tailcfg.NodeID(2), struct{}{})
// Empty Peers slice (not nil) means "no peers"
resp := testMapResponse()
resp.Peers = []*tailcfg.Node{} // empty, not nil
mc.updateSentPeers(resp)
count := 0
mc.lastSentPeers.Range(func(_ tailcfg.NodeID, _ struct{}) bool {
count++
return true
})
assert.Equal(t, 0, count, "empty peer list should clear all tracking")
})
}
// ============================================================================
// generateMapResponse Tests (branching logic only, no DB needed)
// ============================================================================
func TestGenerateMapResponse_EmptyChange(t *testing.T) {
mc := newMockNodeConnection(1)
resp, err := generateMapResponse(mc, nil, change.Change{})
require.NoError(t, err)
assert.Nil(t, resp, "empty change should return nil response")
}
func TestGenerateMapResponse_InvalidNodeID(t *testing.T) {
mc := newMockNodeConnection(0) // Invalid ID
resp, err := generateMapResponse(mc, &mapper{}, change.DERPMap())
require.ErrorIs(t, err, ErrInvalidNodeID)
assert.Nil(t, resp)
}
func TestGenerateMapResponse_NilMapper(t *testing.T) {
mc := newMockNodeConnection(1)
resp, err := generateMapResponse(mc, nil, change.DERPMap())
require.ErrorIs(t, err, ErrMapperNil)
assert.Nil(t, resp)
}
func TestGenerateMapResponse_SelfOnlyOtherNode(t *testing.T) {
mc := newMockNodeConnection(1)
// SelfUpdate targeted at node 99 should be skipped for node 1
ch := change.SelfUpdate(99)
resp, err := generateMapResponse(mc, &mapper{}, ch)
require.NoError(t, err)
assert.Nil(t, resp,
"self-only change targeted at different node should return nil")
}
func TestGenerateMapResponse_SelfOnlySameNode(t *testing.T) {
// SelfUpdate targeted at node 1: IsSelfOnly()=true and TargetNode==nodeID
// This should NOT be short-circuited - it should attempt to generate.
// We verify the routing logic by checking that the change is not empty
// and not filtered out (unlike SelfOnlyOtherNode above).
ch := change.SelfUpdate(1)
assert.False(t, ch.IsEmpty(), "SelfUpdate should not be empty")
assert.True(t, ch.IsSelfOnly(), "SelfUpdate should be self-only")
assert.True(t, ch.ShouldSendToNode(1), "should be sent to target node")
assert.False(t, ch.ShouldSendToNode(2), "should NOT be sent to other nodes")
}
// ============================================================================
// handleNodeChange Tests
// ============================================================================
func TestHandleNodeChange_NilConnection(t *testing.T) {
err := handleNodeChange(nil, nil, change.DERPMap())
assert.ErrorIs(t, err, ErrNodeConnectionNil)
}
func TestHandleNodeChange_EmptyChange(t *testing.T) {
mc := newMockNodeConnection(1)
err := handleNodeChange(mc, nil, change.Change{})
require.NoError(t, err, "empty change should not send anything")
assert.Empty(t, mc.getSent(), "no data should be sent for empty change")
}
var errConnectionBroken = errors.New("connection broken")
func TestHandleNodeChange_SendError(t *testing.T) {
mc := newMockNodeConnection(1).withSendError(errConnectionBroken)
// Exercising the send-error path would require a real mapper with state
// to produce a valid MapResponse, which we cannot easily mock here.
// Instead, verify the nil-data path: an empty change produces nil data,
// so no send occurs and the broken connection is never touched.
err := handleNodeChange(mc, nil, change.Change{})
assert.NoError(t, err, "empty change produces nil data, no send needed")
}
func TestHandleNodeChange_NilDataNoSend(t *testing.T) {
mc := newMockNodeConnection(1)
// SelfUpdate targeted at different node produces nil data
ch := change.SelfUpdate(99)
err := handleNodeChange(mc, &mapper{}, ch)
require.NoError(t, err, "nil data should not cause error")
assert.Empty(t, mc.getSent(), "nil data should not trigger send")
}
// ============================================================================
// connectionEntry concurrent safety Tests
// ============================================================================
func TestConnectionEntry_ConcurrentSends(t *testing.T) {
ch := make(chan *tailcfg.MapResponse, 100)
entry := makeConnectionEntry("concurrent", ch)
var (
wg sync.WaitGroup
successCount atomic.Int64
)
// 50 goroutines sending concurrently
for range 50 {
wg.Go(func() {
err := entry.send(testMapResponse())
if err == nil {
successCount.Add(1)
}
})
}
wg.Wait()
assert.Equal(t, int64(50), successCount.Load(),
"all sends to buffered channel should succeed")
// Drain and count
count := 0
for range len(ch) {
<-ch
count++
}
assert.Equal(t, 50, count, "all 50 messages should be on channel")
}
func TestConnectionEntry_ConcurrentSendAndClose(t *testing.T) {
ch := make(chan *tailcfg.MapResponse, 100)
entry := makeConnectionEntry("race", ch)
var (
wg sync.WaitGroup
panicked atomic.Bool
)
// Goroutines sending rapidly
for range 20 {
wg.Go(func() {
defer func() {
if r := recover(); r != nil {
panicked.Store(true)
}
}()
for range 10 {
_ = entry.send(testMapResponse())
}
})
}
// Close midway through
wg.Go(func() {
time.Sleep(1 * time.Millisecond) //nolint:forbidigo // concurrency test coordination
entry.closed.Store(true)
})
wg.Wait()
assert.False(t, panicked.Load(),
"concurrent send and close should not panic")
}
// ============================================================================
// multiChannelNodeConn concurrent Tests
// ============================================================================
func TestMultiChannelSend_ConcurrentAddAndSend(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// Start with one connection
ch1 := make(chan *tailcfg.MapResponse, 100)
mc.addConnection(makeConnectionEntry("initial", ch1))
var (
wg sync.WaitGroup
panicked atomic.Bool
)
// Goroutine adding connections
wg.Go(func() {
defer func() {
if r := recover(); r != nil {
panicked.Store(true)
}
}()
for i := range 10 {
ch := make(chan *tailcfg.MapResponse, 100)
mc.addConnection(makeConnectionEntry(fmt.Sprintf("added-%d", i), ch))
}
})
// Goroutine sending data
wg.Go(func() {
defer func() {
if r := recover(); r != nil {
panicked.Store(true)
}
}()
for range 20 {
_ = mc.send(testMapResponse())
}
})
wg.Wait()
assert.False(t, panicked.Load(),
"concurrent add and send should not panic (mutex protects both)")
}
func TestMultiChannelSend_ConcurrentRemoveAndSend(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
channels := make([]chan *tailcfg.MapResponse, 10)
for i := range channels {
channels[i] = make(chan *tailcfg.MapResponse, 100)
mc.addConnection(makeConnectionEntry(fmt.Sprintf("conn-%d", i), channels[i]))
}
var (
wg sync.WaitGroup
panicked atomic.Bool
)
// Goroutine removing connections
wg.Go(func() {
defer func() {
if r := recover(); r != nil {
panicked.Store(true)
}
}()
for _, ch := range channels {
mc.removeConnectionByChannel(ch)
}
})
// Goroutine sending data concurrently
wg.Go(func() {
defer func() {
if r := recover(); r != nil {
panicked.Store(true)
}
}()
for range 20 {
_ = mc.send(testMapResponse())
}
})
wg.Wait()
assert.False(t, panicked.Load(),
"concurrent remove and send should not panic")
}
// ============================================================================
// Regression tests for H1 (timer leak) and H3 (lifecycle)
// ============================================================================
// TestConnectionEntry_SendFastPath_TimerStopped is a regression guard for H1.
// Before the fix, connectionEntry.send used time.After(50ms) which leaked a
// timer into the runtime heap on every call even when the channel send
// succeeded immediately. The fix switched to time.NewTimer + defer Stop().
//
// This test sends many messages on a buffered (non-blocking) channel and
// checks that the number of live goroutines stays bounded; under the old
// time.After approach it would grow without bound at high call rates.
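//
// For reference, a minimal sketch of the fixed pattern (the 50ms timeout
// and error value are assumptions drawn from the tests above, not a copy
// of the implementation):
//
//	timer := time.NewTimer(50 * time.Millisecond)
//	defer timer.Stop() // releases the timer even when the send succeeds
//	select {
//	case entry.c <- data:
//		return nil
//	case <-timer.C:
//		return ErrConnectionSendTimeout
//	}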
func TestConnectionEntry_SendFastPath_TimerStopped(t *testing.T) {
const sends = 5000
ch := make(chan *tailcfg.MapResponse, sends)
entry := &connectionEntry{
id: "timer-leak-test",
c: ch,
version: 100,
created: time.Now(),
}
resp := testMapResponse()
for range sends {
err := entry.send(resp)
require.NoError(t, err)
}
// Drain the channel so we aren't holding references.
for range sends {
<-ch
}
// Force a GC + timer cleanup pass.
runtime.GC()
// If timers were leaking we'd see a goroutine count much higher
// than baseline. With 5000 leaked timers the count would be
// noticeably elevated. We just check it's reasonable.
numGR := runtime.NumGoroutine()
assert.Less(t, numGR, 200,
"goroutine count after %d fast-path sends should be bounded; got %d (possible timer leak)", sends, numGR)
}
// TestBatcher_CloseWaitsForWorkers is a regression guard for H3.
// Before the fix, Close() would tear down node connections while workers
// were potentially still running, risking sends on closed channels.
// The fix added sync.WaitGroup tracking so Close() blocks until all
// worker goroutines exit.
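//
// A sketch of the shape of that fix; the field and method names here
// (wg, done, worker, numWorkers) are illustrative assumptions, not the
// actual Batcher internals:
//
//	func (b *Batcher) Start() {
//		for range b.numWorkers {
//			b.wg.Add(1)
//			go func() { defer b.wg.Done(); b.worker() }()
//		}
//	}
//
//	func (b *Batcher) Close() {
//		close(b.done) // signal workers to exit
//		b.wg.Wait()   // block until every worker has returned
//	}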
func TestBatcher_CloseWaitsForWorkers(t *testing.T) {
b := NewBatcher(50*time.Millisecond, 4, nil)
goroutinesBefore := runtime.NumGoroutine()
b.Start()
// Give workers time to start.
time.Sleep(20 * time.Millisecond) //nolint:forbidigo // test timing
goroutinesDuring := runtime.NumGoroutine()
// We expect at least 5 new goroutines: 1 doWork + 4 workers.
assert.GreaterOrEqual(t, goroutinesDuring-goroutinesBefore, 5,
"expected doWork + 4 workers to be running")
// Close should block until all workers have exited.
b.Close()
// After Close returns, goroutines should have dropped back.
// Allow a small margin for runtime goroutines.
goroutinesAfter := runtime.NumGoroutine()
assert.InDelta(t, goroutinesBefore, goroutinesAfter, 3,
"goroutines should return to baseline after Close(); before=%d after=%d",
goroutinesBefore, goroutinesAfter)
}
// TestBatcher_CloseThenStartIsNoop verifies the lifecycle contract:
// once a Batcher has been started, calling Start() again is a no-op
// (the started flag prevents double-start).
func TestBatcher_CloseThenStartIsNoop(t *testing.T) {
b := NewBatcher(50*time.Millisecond, 2, nil)
b.Start()
b.Close()
goroutinesBefore := runtime.NumGoroutine()
// Second Start should be a no-op because started is already true.
b.Start()
// Allow a moment for any hypothetical goroutine to appear.
time.Sleep(10 * time.Millisecond) //nolint:forbidigo // test timing
goroutinesAfter := runtime.NumGoroutine()
assert.InDelta(t, goroutinesBefore, goroutinesAfter, 1,
"Start() after Close() should not spawn new goroutines; before=%d after=%d",
goroutinesBefore, goroutinesAfter)
}
// TestBatcher_CloseStopsTicker verifies that Close() stops the internal
// ticker, preventing resource leaks.
func TestBatcher_CloseStopsTicker(t *testing.T) {
b := NewBatcher(10*time.Millisecond, 1, nil)
b.Start()
b.Close()
// After Close, the ticker should be stopped. Reading from a stopped
// ticker's channel should not deliver any values.
select {
case <-b.tick.C:
t.Fatal("ticker fired after Close(); ticker.Stop() was not called")
case <-time.After(50 * time.Millisecond): //nolint:forbidigo // test timing
// Expected: no tick received.
}
}
// ============================================================================
// Regression tests for M1, M3, M7
// ============================================================================
// TestBatcher_CloseBeforeStart_DoesNotHang is a regression guard for M1.
// Before the fix, done was nil until Start() was called. queueWork and
// MapResponseFromChange select on done, so a nil channel would block
// forever when workCh was full. With done initialized in NewBatcher,
// Close() can be called safely before Start().
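//
// Why a nil channel hangs, in miniature (illustrative; workCh and done are
// the batcher fields named in the comment above):
//
//	select {
//	case b.workCh <- w: // blocks while workCh is full
//	case <-b.done:      // nil channel: never selectable -> deadlock
//	}
//
// With done allocated in NewBatcher and closed in Close(), the second case
// becomes selectable even if Start() was never called.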
func TestBatcher_CloseBeforeStart_DoesNotHang(t *testing.T) {
b := NewBatcher(50*time.Millisecond, 2, nil)
// Close without Start must not panic or hang.
done := make(chan struct{})
go func() {
b.Close()
close(done)
}()
select {
case <-done:
// Success: Close returned promptly.
case <-time.After(2 * time.Second): //nolint:forbidigo // test timing
t.Fatal("Close() before Start() hung; done channel was likely nil")
}
}
// TestBatcher_QueueWorkAfterClose_DoesNotHang verifies that queueWork
// returns immediately via the done channel when the batcher is closed,
// even without Start() having been called.
func TestBatcher_QueueWorkAfterClose_DoesNotHang(t *testing.T) {
b := NewBatcher(50*time.Millisecond, 1, nil)
b.Close()
done := make(chan struct{})
go func() {
// queueWork selects on done; with done closed this must return.
b.queueWork(work{})
close(done)
}()
select {
case <-done:
// Success
case <-time.After(2 * time.Second): //nolint:forbidigo // test timing
t.Fatal("queueWork hung after Close(); done channel select not working")
}
}
// TestIsConnected_FalseAfterAddNodeFailure is a regression guard for M3.
// Before the fix, AddNode error paths removed the connection but did not
// mark the node as disconnected. IsConnected would return true for a
// node with zero active connections.
func TestIsConnected_FalseAfterAddNodeFailure(t *testing.T) {
b := NewBatcher(50*time.Millisecond, 2, nil)
b.Start()
defer b.Close()
id := types.NodeID(42)
// Pre-create the node entry so AddNode reuses it, and set up a
// multiChannelNodeConn with no mapper so MapResponseFromChange will fail.
// markConnected() simulates a previous session leaving it connected.
nc := newMultiChannelNodeConn(id, nil)
nc.markConnected()
b.nodes.Store(id, nc)
ch := make(chan *tailcfg.MapResponse, 1)
err := b.AddNode(id, ch, 100, func() {})
require.Error(t, err, "AddNode should fail with nil mapper")
// After failure, the node should NOT be reported as connected.
assert.False(t, b.IsConnected(id),
"IsConnected should return false after AddNode failure with no remaining connections")
}
// TestRemoveConnectionAtIndex_NilsTrailingSlot is a regression guard for M7.
// Before the fix, removeConnectionAtIndexLocked used append(s[:i], s[i+1:]...)
// which left a stale pointer in the backing array's last slot. The fix
// uses copy + explicit nil of the trailing element.
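//
// The two patterns side by side, on a generic slice s of pointers (the
// fixed form mirrors removeConnectionAtIndexLocked in node_conn.go):
//
//	// before: the vacated last slot still references the old element
//	s = append(s[:i], s[i+1:]...)
//
//	// after: shift left, nil the vacated slot so the GC can reclaim it
//	copy(s[i:], s[i+1:])
//	s[len(s)-1] = nil
//	s = s[:len(s)-1]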
func TestRemoveConnectionAtIndex_NilsTrailingSlot(t *testing.T) {
mc := newMultiChannelNodeConn(1, nil)
// Manually add three entries under the lock.
entries := make([]*connectionEntry, 3)
for i := range entries {
entries[i] = &connectionEntry{id: fmt.Sprintf("conn-%d", i), c: make(chan<- *tailcfg.MapResponse)}
}
mc.mutex.Lock()
mc.connections = append(mc.connections, entries...)
// Remove the middle entry (index 1).
removed := mc.removeConnectionAtIndexLocked(1, false)
require.Equal(t, entries[1], removed)
// After removal, len should be 2 and the backing array slot at
// index 2 (the old len-1) should be nil.
require.Len(t, mc.connections, 2)
assert.Equal(t, entries[0], mc.connections[0])
assert.Equal(t, entries[2], mc.connections[1])
// Check the backing array directly: the slot just past the new
// length must be nil to avoid retaining the pointer.
backing := mc.connections[:3]
assert.Nil(t, backing[2],
"trailing slot in backing array should be nil after removal")
mc.mutex.Unlock()
}
================================================
FILE: hscontrol/mapper/builder.go
================================================
package mapper
import (
"net/netip"
"sort"
"time"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"tailscale.com/tailcfg"
"tailscale.com/types/views"
"tailscale.com/util/multierr"
)
// MapResponseBuilder provides a fluent interface for building tailcfg.MapResponse.
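//
// Methods record failures via addError instead of returning them, so calls
// can be chained freely and Build() reports all accumulated errors at once.
// A typical chain (see fullMapResponse in mapper.go for the full version):
//
//	resp, err := m.NewMapResponseBuilder(nodeID).
//		WithCapabilityVersion(capVer).
//		WithSelfNode().
//		WithDERPMap().
//		Build()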
type MapResponseBuilder struct {
resp *tailcfg.MapResponse
mapper *mapper
nodeID types.NodeID
capVer tailcfg.CapabilityVersion
errs []error
debugType debugType
}
type debugType string
const (
fullResponseDebug debugType = "full"
selfResponseDebug debugType = "self"
changeResponseDebug debugType = "change"
policyResponseDebug debugType = "policy"
)
// NewMapResponseBuilder creates a new builder with basic fields set.
func (m *mapper) NewMapResponseBuilder(nodeID types.NodeID) *MapResponseBuilder {
now := time.Now()
return &MapResponseBuilder{
resp: &tailcfg.MapResponse{
KeepAlive: false,
ControlTime: &now,
},
mapper: m,
nodeID: nodeID,
errs: nil,
}
}
// addError adds an error to the builder's error list.
func (b *MapResponseBuilder) addError(err error) {
if err != nil {
b.errs = append(b.errs, err)
}
}
// hasErrors returns true if the builder has accumulated any errors.
func (b *MapResponseBuilder) hasErrors() bool {
return len(b.errs) > 0
}
// WithCapabilityVersion sets the capability version for the response.
func (b *MapResponseBuilder) WithCapabilityVersion(capVer tailcfg.CapabilityVersion) *MapResponseBuilder {
b.capVer = capVer
return b
}
// WithSelfNode adds the requesting node to the response.
func (b *MapResponseBuilder) WithSelfNode() *MapResponseBuilder {
nv, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(ErrNodeNotFoundMapper)
return b
}
_, matchers := b.mapper.state.Filter()
tailnode, err := nv.TailNode(
b.capVer,
func(id types.NodeID) []netip.Prefix {
return policy.ReduceRoutes(nv, b.mapper.state.GetNodePrimaryRoutes(id), matchers)
},
b.mapper.cfg)
if err != nil {
b.addError(err)
return b
}
b.resp.Node = tailnode
return b
}
func (b *MapResponseBuilder) WithDebugType(t debugType) *MapResponseBuilder {
if debugDumpMapResponsePath != "" {
b.debugType = t
}
return b
}
// WithDERPMap adds the DERP map to the response.
func (b *MapResponseBuilder) WithDERPMap() *MapResponseBuilder {
b.resp.DERPMap = b.mapper.state.DERPMap().AsStruct()
return b
}
// WithDomain adds the domain configuration.
func (b *MapResponseBuilder) WithDomain() *MapResponseBuilder {
b.resp.Domain = b.mapper.cfg.Domain()
return b
}
// WithCollectServicesDisabled sets the collect services flag to false.
func (b *MapResponseBuilder) WithCollectServicesDisabled() *MapResponseBuilder {
b.resp.CollectServices.Set(false)
return b
}
// WithDebugConfig adds debug configuration.
// It disables log tailing when LogTail is not enabled in the mapper's config.
func (b *MapResponseBuilder) WithDebugConfig() *MapResponseBuilder {
b.resp.Debug = &tailcfg.Debug{
DisableLogTail: !b.mapper.cfg.LogTail.Enabled,
}
return b
}
// WithSSHPolicy adds SSH policy configuration for the requesting node.
func (b *MapResponseBuilder) WithSSHPolicy() *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(ErrNodeNotFoundMapper)
return b
}
sshPolicy, err := b.mapper.state.SSHPolicy(node)
if err != nil {
b.addError(err)
return b
}
b.resp.SSHPolicy = sshPolicy
return b
}
// WithDNSConfig adds DNS configuration for the requesting node.
func (b *MapResponseBuilder) WithDNSConfig() *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(ErrNodeNotFoundMapper)
return b
}
b.resp.DNSConfig = generateDNSConfig(b.mapper.cfg, node)
return b
}
// WithUserProfiles adds user profiles for the requesting node and given peers.
func (b *MapResponseBuilder) WithUserProfiles(peers views.Slice[types.NodeView]) *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(ErrNodeNotFoundMapper)
return b
}
b.resp.UserProfiles = generateUserProfiles(node, peers)
return b
}
// WithPacketFilters adds packet filter rules based on policy.
func (b *MapResponseBuilder) WithPacketFilters() *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(ErrNodeNotFoundMapper)
return b
}
// FilterForNode returns rules already reduced to only those relevant for this node.
// For autogroup:self policies, it returns per-node compiled rules.
// For global policies, it returns the global filter reduced for this node.
filter, err := b.mapper.state.FilterForNode(node)
if err != nil {
b.addError(err)
return b
}
// CapVer 81 (2023-11-17) introduced MapResponse.PacketFilters (incremental
// packet filter updates). We do not yet send incremental packet filters, but
// using the new PacketFilters field with the "base" key lets us send a full
// update even when the rule list is empty, avoiding the workaround the
// legacy single-filter field required for that case.
b.resp.PacketFilters = map[string][]tailcfg.FilterRule{
"base": filter,
}
return b
}
// WithPeers adds full peer list with policy filtering (for full map response).
func (b *MapResponseBuilder) WithPeers(peers views.Slice[types.NodeView]) *MapResponseBuilder {
tailPeers, err := b.buildTailPeers(peers)
if err != nil {
b.addError(err)
return b
}
b.resp.Peers = tailPeers
return b
}
// WithPeerChanges adds changed peers with policy filtering (for incremental updates).
func (b *MapResponseBuilder) WithPeerChanges(peers views.Slice[types.NodeView]) *MapResponseBuilder {
tailPeers, err := b.buildTailPeers(peers)
if err != nil {
b.addError(err)
return b
}
b.resp.PeersChanged = tailPeers
return b
}
// buildTailPeers converts views.Slice[types.NodeView] to []tailcfg.Node with policy filtering and sorting.
func (b *MapResponseBuilder) buildTailPeers(peers views.Slice[types.NodeView]) ([]*tailcfg.Node, error) {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
return nil, ErrNodeNotFoundMapper
}
// Get unreduced matchers for peer relationship determination.
// MatchersForNode returns unreduced matchers that include all rules where the node
// could be either source or destination. This is different from FilterForNode which
// returns reduced rules for packet filtering (only rules where node is destination).
matchers, err := b.mapper.state.MatchersForNode(node)
if err != nil {
return nil, err
}
// If filter rules are present, drop any peers with which this node has
// no connectivity at all.
var changedViews views.Slice[types.NodeView]
if len(matchers) > 0 {
changedViews = policy.ReduceNodes(node, peers, matchers)
} else {
changedViews = peers
}
tailPeers, err := types.TailNodes(
changedViews, b.capVer,
func(id types.NodeID) []netip.Prefix {
return policy.ReduceRoutes(node, b.mapper.state.GetNodePrimaryRoutes(id), matchers)
},
b.mapper.cfg)
if err != nil {
return nil, err
}
// Peers is always returned sorted by Node.ID.
sort.SliceStable(tailPeers, func(x, y int) bool {
return tailPeers[x].ID < tailPeers[y].ID
})
return tailPeers, nil
}
// WithPeerChangedPatch adds peer change patches.
func (b *MapResponseBuilder) WithPeerChangedPatch(changes []*tailcfg.PeerChange) *MapResponseBuilder {
b.resp.PeersChangedPatch = changes
return b
}
// WithPeersRemoved adds removed peer IDs.
func (b *MapResponseBuilder) WithPeersRemoved(removedIDs ...types.NodeID) *MapResponseBuilder {
tailscaleIDs := make([]tailcfg.NodeID, 0, len(removedIDs))
for _, id := range removedIDs {
tailscaleIDs = append(tailscaleIDs, id.NodeID())
}
b.resp.PeersRemoved = tailscaleIDs
return b
}
// Build finalizes the response and returns marshaled bytes.
func (b *MapResponseBuilder) Build() (*tailcfg.MapResponse, error) {
if len(b.errs) > 0 {
return nil, multierr.New(b.errs...)
}
if debugDumpMapResponsePath != "" {
writeDebugMapResponse(b.resp, b.debugType, b.nodeID)
}
return b.resp, nil
}
================================================
FILE: hscontrol/mapper/builder_test.go
================================================
package mapper
import (
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/state"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
)
func TestMapResponseBuilder_Basic(t *testing.T) {
cfg := &types.Config{
BaseDomain: "example.com",
LogTail: types.LogTailConfig{
Enabled: true,
},
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID)
// Test basic builder creation
assert.NotNil(t, builder)
assert.Equal(t, nodeID, builder.nodeID)
assert.NotNil(t, builder.resp)
assert.False(t, builder.resp.KeepAlive)
assert.NotNil(t, builder.resp.ControlTime)
assert.WithinDuration(t, time.Now(), *builder.resp.ControlTime, time.Second)
}
func TestMapResponseBuilder_WithCapabilityVersion(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
capVer := tailcfg.CapabilityVersion(42)
builder := m.NewMapResponseBuilder(nodeID).
WithCapabilityVersion(capVer)
assert.Equal(t, capVer, builder.capVer)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_WithDomain(t *testing.T) {
domain := "test.example.com"
cfg := &types.Config{
ServerURL: "https://test.example.com",
BaseDomain: domain,
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithDomain()
assert.Equal(t, domain, builder.resp.Domain)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_WithCollectServicesDisabled(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithCollectServicesDisabled()
value, isSet := builder.resp.CollectServices.Get()
assert.True(t, isSet)
assert.False(t, value)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_WithDebugConfig(t *testing.T) {
tests := []struct {
name string
logTailEnabled bool
expected bool
}{
{
name: "LogTail enabled",
logTailEnabled: true,
expected: false, // DisableLogTail should be false when LogTail is enabled
},
{
name: "LogTail disabled",
logTailEnabled: false,
expected: true, // DisableLogTail should be true when LogTail is disabled
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg := &types.Config{
LogTail: types.LogTailConfig{
Enabled: tt.logTailEnabled,
},
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithDebugConfig()
require.NotNil(t, builder.resp.Debug)
assert.Equal(t, tt.expected, builder.resp.Debug.DisableLogTail)
assert.False(t, builder.hasErrors())
})
}
}
func TestMapResponseBuilder_WithPeerChangedPatch(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
changes := []*tailcfg.PeerChange{
{
NodeID: 123,
DERPRegion: 1,
},
{
NodeID: 456,
DERPRegion: 2,
},
}
builder := m.NewMapResponseBuilder(nodeID).
WithPeerChangedPatch(changes)
assert.Equal(t, changes, builder.resp.PeersChangedPatch)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_WithPeersRemoved(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
removedID1 := types.NodeID(123)
removedID2 := types.NodeID(456)
builder := m.NewMapResponseBuilder(nodeID).
WithPeersRemoved(removedID1, removedID2)
expected := []tailcfg.NodeID{
removedID1.NodeID(),
removedID2.NodeID(),
}
assert.Equal(t, expected, builder.resp.PeersRemoved)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_ErrorHandling(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
// Simulate an error in the builder
builder := m.NewMapResponseBuilder(nodeID)
builder.addError(assert.AnError)
// All subsequent calls should continue to work and accumulate errors
result := builder.
WithDomain().
WithCollectServicesDisabled().
WithDebugConfig()
assert.True(t, result.hasErrors())
assert.Len(t, result.errs, 1)
assert.Equal(t, assert.AnError, result.errs[0])
// Build should return the error
data, err := result.Build()
assert.Nil(t, data)
assert.Error(t, err)
}
func TestMapResponseBuilder_ChainedCalls(t *testing.T) {
domain := "chained.example.com"
cfg := &types.Config{
ServerURL: "https://chained.example.com",
BaseDomain: domain,
LogTail: types.LogTailConfig{
Enabled: false,
},
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
capVer := tailcfg.CapabilityVersion(99)
builder := m.NewMapResponseBuilder(nodeID).
WithCapabilityVersion(capVer).
WithDomain().
WithCollectServicesDisabled().
WithDebugConfig()
// Verify all fields are set correctly
assert.Equal(t, capVer, builder.capVer)
assert.Equal(t, domain, builder.resp.Domain)
value, isSet := builder.resp.CollectServices.Get()
assert.True(t, isSet)
assert.False(t, value)
assert.NotNil(t, builder.resp.Debug)
assert.True(t, builder.resp.Debug.DisableLogTail)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_MultipleWithPeersRemoved(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
removedID1 := types.NodeID(100)
removedID2 := types.NodeID(200)
// Test calling WithPeersRemoved multiple times
builder := m.NewMapResponseBuilder(nodeID).
WithPeersRemoved(removedID1).
WithPeersRemoved(removedID2)
// Second call should overwrite the first
expected := []tailcfg.NodeID{removedID2.NodeID()}
assert.Equal(t, expected, builder.resp.PeersRemoved)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_EmptyPeerChangedPatch(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithPeerChangedPatch([]*tailcfg.PeerChange{})
assert.Empty(t, builder.resp.PeersChangedPatch)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_NilPeerChangedPatch(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithPeerChangedPatch(nil)
assert.Nil(t, builder.resp.PeersChangedPatch)
assert.False(t, builder.hasErrors())
}
func TestMapResponseBuilder_MultipleErrors(t *testing.T) {
cfg := &types.Config{}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
// Create a builder and add multiple errors
builder := m.NewMapResponseBuilder(nodeID)
builder.addError(assert.AnError)
builder.addError(assert.AnError)
builder.addError(nil) // This should be ignored
// All subsequent calls should continue to work
result := builder.
WithDomain().
WithCollectServicesDisabled()
assert.True(t, result.hasErrors())
assert.Len(t, result.errs, 2) // nil error should be ignored
// Build should return a multierr
data, err := result.Build()
require.Nil(t, data)
require.Error(t, err)
// The error should contain information about multiple errors
assert.Contains(t, err.Error(), "multiple errors")
}
================================================
FILE: hscontrol/mapper/mapper.go
================================================
package mapper
import (
"encoding/json"
"fmt"
"io/fs"
"net/url"
"os"
"path"
"slices"
"strconv"
"strings"
"time"
"github.com/juanfont/headscale/hscontrol/state"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/rs/zerolog/log"
"tailscale.com/envknob"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
"tailscale.com/types/views"
)
const (
nextDNSDoHPrefix = "https://dns.nextdns.io"
debugMapResponsePerm = 0o755
)
var debugDumpMapResponsePath = envknob.String("HEADSCALE_DEBUG_DUMP_MAPRESPONSE_PATH")
// TODO: Optimise
// As this work continues, the idea is that there will be one Mapper instance
// per node, attached to the open stream between the control and client.
// This means that this can hold a state per node and we can use that to
// improve the mapresponses sent.
// We could:
// - Keep information about the previous mapresponse so we can send a diff
// - Store hashes
// - Create a "minifier" that removes info not needed for the node
// - some sort of batching, wait for 5 or 60 seconds before sending
type mapper struct {
// Configuration
state *state.State
cfg *types.Config
batcher *Batcher
created time.Time
}
//nolint:unused
type patch struct {
timestamp time.Time
change *tailcfg.PeerChange
}
func newMapper(
cfg *types.Config,
state *state.State,
) *mapper {
// uid, _ := util.GenerateRandomStringDNSSafe(mapperIDLength)
return &mapper{
state: state,
cfg: cfg,
created: time.Now(),
}
}
// generateUserProfiles creates user profiles for MapResponse.
func generateUserProfiles(
node types.NodeView,
peers views.Slice[types.NodeView],
) []tailcfg.UserProfile {
userMap := make(map[uint]*types.UserView)
ids := make([]uint, 0, peers.Len()+1)
user := node.Owner()
if !user.Valid() {
log.Error().
EmbedObject(node).
Msg("node has no valid owner, skipping user profile generation")
return nil
}
userID := user.Model().ID
userMap[userID] = &user
ids = append(ids, userID)
for _, peer := range peers.All() {
peerUser := peer.Owner()
if !peerUser.Valid() {
continue
}
peerUserID := peerUser.Model().ID
userMap[peerUserID] = &peerUser
ids = append(ids, peerUserID)
}
slices.Sort(ids)
ids = slices.Compact(ids)
var profiles []tailcfg.UserProfile
for _, id := range ids {
if userMap[id] != nil {
profiles = append(profiles, userMap[id].TailscaleUserProfile())
}
}
return profiles
}
func generateDNSConfig(
cfg *types.Config,
node types.NodeView,
) *tailcfg.DNSConfig {
if cfg.TailcfgDNSConfig == nil {
return nil
}
dnsConfig := cfg.TailcfgDNSConfig.Clone()
addNextDNSMetadata(dnsConfig.Resolvers, node)
return dnsConfig
}
// addNextDNSMetadata adds device metadata as query parameters to any NextDNS
// DoH resolvers in the list, instructing Tailscale to include it in DNS
// requests. This makes it possible to identify in the NextDNS dashboard which
// device a request came from.
//
// This will produce a resolver like:
// `https://dns.nextdns.io/?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
func addNextDNSMetadata(resolvers []*dnstype.Resolver, node types.NodeView) {
for _, resolver := range resolvers {
if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
attrs := url.Values{
"device_name": []string{node.Hostname()},
"device_model": []string{node.Hostinfo().OS()},
}
if len(node.IPs()) > 0 {
attrs.Add("device_ip", node.IPs()[0].String())
}
resolver.Addr = fmt.Sprintf("%s?%s", resolver.Addr, attrs.Encode())
}
}
}
// fullMapResponse returns a MapResponse for the given node.
//
//nolint:unused
func (m *mapper) fullMapResponse(
nodeID types.NodeID,
capVer tailcfg.CapabilityVersion,
) (*tailcfg.MapResponse, error) {
peers := m.state.ListPeers(nodeID)
return m.NewMapResponseBuilder(nodeID).
WithDebugType(fullResponseDebug).
WithCapabilityVersion(capVer).
WithSelfNode().
WithDERPMap().
WithDomain().
WithCollectServicesDisabled().
WithDebugConfig().
WithSSHPolicy().
WithDNSConfig().
WithUserProfiles(peers).
WithPacketFilters().
WithPeers(peers).
Build()
}
func (m *mapper) selfMapResponse(
nodeID types.NodeID,
capVer tailcfg.CapabilityVersion,
) (*tailcfg.MapResponse, error) {
ma, err := m.NewMapResponseBuilder(nodeID).
WithDebugType(selfResponseDebug).
WithCapabilityVersion(capVer).
WithSelfNode().
Build()
if err != nil {
return nil, err
}
// Set the peers to nil to ensure the node does not think
// it's getting a new peer list.
ma.Peers = nil
return ma, err
}
// policyChangeResponse creates a MapResponse for policy changes.
// It sends:
// - PeersRemoved for peers that are no longer visible after the policy change
// - PeersChanged for remaining peers (their AllowedIPs may have changed due to policy)
// - Updated PacketFilters
// - Updated SSHPolicy (SSH rules may reference users/groups that changed)
// - Optionally, the node's own self info (when includeSelf is true)
// This avoids the issue where an empty Peers slice is interpreted by Tailscale
// clients as "no change" rather than "no peers".
// When includeSelf is true, the node's self info is included so that a node
// whose own attributes changed (e.g., tags via admin API) sees its updated
// self info along with the new packet filters.
func (m *mapper) policyChangeResponse(
nodeID types.NodeID,
capVer tailcfg.CapabilityVersion,
removedPeers []tailcfg.NodeID,
currentPeers views.Slice[types.NodeView],
includeSelf bool,
) (*tailcfg.MapResponse, error) {
builder := m.NewMapResponseBuilder(nodeID).
WithDebugType(policyResponseDebug).
WithCapabilityVersion(capVer).
WithPacketFilters().
WithSSHPolicy()
if includeSelf {
builder = builder.WithSelfNode()
}
if len(removedPeers) > 0 {
// Convert tailcfg.NodeID to types.NodeID for WithPeersRemoved
removedIDs := make([]types.NodeID, len(removedPeers))
for i, id := range removedPeers {
removedIDs[i] = types.NodeID(id) //nolint:gosec // NodeID types are equivalent
}
builder.WithPeersRemoved(removedIDs...)
}
// Send remaining peers in PeersChanged - their AllowedIPs may have
// changed due to the policy update (e.g., different routes allowed).
if currentPeers.Len() > 0 {
builder.WithPeerChanges(currentPeers)
}
return builder.Build()
}
// buildFromChange builds a MapResponse from a change.Change specification.
// This provides fine-grained control over what gets included in the response.
func (m *mapper) buildFromChange(
nodeID types.NodeID,
capVer tailcfg.CapabilityVersion,
resp *change.Change,
) (*tailcfg.MapResponse, error) {
if resp.IsEmpty() {
return nil, nil //nolint:nilnil // Empty response means nothing to send, not an error
}
// If this is a self-update (the changed node is the receiving node),
// send a self-update response to ensure the node sees its own changes.
if resp.OriginNode != 0 && resp.OriginNode == nodeID {
return m.selfMapResponse(nodeID, capVer)
}
builder := m.NewMapResponseBuilder(nodeID).
WithCapabilityVersion(capVer).
WithDebugType(changeResponseDebug)
if resp.IncludeSelf {
builder.WithSelfNode()
}
if resp.IncludeDERPMap {
builder.WithDERPMap()
}
if resp.IncludeDNS {
builder.WithDNSConfig()
}
if resp.IncludeDomain {
builder.WithDomain()
}
if resp.IncludePolicy {
builder.WithPacketFilters()
builder.WithSSHPolicy()
}
if resp.SendAllPeers {
peers := m.state.ListPeers(nodeID)
builder.WithUserProfiles(peers)
builder.WithPeers(peers)
} else {
if len(resp.PeersChanged) > 0 {
peers := m.state.ListPeers(nodeID, resp.PeersChanged...)
builder.WithUserProfiles(peers)
builder.WithPeerChanges(peers)
}
if len(resp.PeersRemoved) > 0 {
builder.WithPeersRemoved(resp.PeersRemoved...)
}
}
if len(resp.PeerPatches) > 0 {
builder.WithPeerChangedPatch(resp.PeerPatches)
}
return builder.Build()
}
func writeDebugMapResponse(
resp *tailcfg.MapResponse,
t debugType,
nodeID types.NodeID,
) {
body, err := json.MarshalIndent(resp, "", " ")
if err != nil {
panic(err)
}
perms := fs.FileMode(debugMapResponsePerm)
mPath := path.Join(debugDumpMapResponsePath, fmt.Sprintf("%d", nodeID))
err = os.MkdirAll(mPath, perms)
if err != nil {
panic(err)
}
now := time.Now().Format("2006-01-02T15-04-05.999999999")
mapResponsePath := path.Join(
mPath,
fmt.Sprintf("%s-%s.json", now, t),
)
log.Trace().Msgf("writing MapResponse to %s", mapResponsePath)
err = os.WriteFile(mapResponsePath, body, perms)
if err != nil {
panic(err)
}
}
func (m *mapper) debugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) {
if debugDumpMapResponsePath == "" {
return nil, nil //nolint:nilnil // intentional: no data when debug path not set
}
return ReadMapResponsesFromDirectory(debugDumpMapResponsePath)
}
func ReadMapResponsesFromDirectory(dir string) (map[types.NodeID][]tailcfg.MapResponse, error) {
nodes, err := os.ReadDir(dir)
if err != nil {
return nil, err
}
result := make(map[types.NodeID][]tailcfg.MapResponse)
for _, node := range nodes {
if !node.IsDir() {
continue
}
nodeIDu, err := strconv.ParseUint(node.Name(), 10, 64)
if err != nil {
log.Error().Err(err).Msgf("parsing node ID from dir %s", node.Name())
continue
}
nodeID := types.NodeID(nodeIDu)
files, err := os.ReadDir(path.Join(dir, node.Name()))
if err != nil {
log.Error().Err(err).Msgf("reading dir %s", node.Name())
continue
}
slices.SortStableFunc(files, func(a, b fs.DirEntry) int {
return strings.Compare(a.Name(), b.Name())
})
for _, file := range files {
if file.IsDir() || !strings.HasSuffix(file.Name(), ".json") {
continue
}
body, err := os.ReadFile(path.Join(dir, node.Name(), file.Name()))
if err != nil {
log.Error().Err(err).Msgf("reading file %s", file.Name())
continue
}
var resp tailcfg.MapResponse
err = json.Unmarshal(body, &resp)
if err != nil {
log.Error().Err(err).Msgf("unmarshalling file %s", file.Name())
continue
}
result[nodeID] = append(result[nodeID], resp)
}
}
return result, nil
}
================================================
FILE: hscontrol/mapper/mapper_test.go
================================================
package mapper
import (
"fmt"
"net/netip"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/juanfont/headscale/hscontrol/types"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
var iap = func(ipStr string) *netip.Addr {
ip := netip.MustParseAddr(ipStr)
return &ip
}
func TestDNSConfigMapResponse(t *testing.T) {
tests := []struct {
magicDNS bool
want *tailcfg.DNSConfig
}{
{
magicDNS: true,
want: &tailcfg.DNSConfig{
Routes: map[string][]*dnstype.Resolver{},
Domains: []string{
"foobar.headscale.net",
},
Proxied: true,
},
},
{
magicDNS: false,
want: &tailcfg.DNSConfig{
Domains: []string{"foobar.headscale.net"},
Proxied: false,
},
},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("with-magicdns-%v", tt.magicDNS), func(t *testing.T) {
mach := func(hostname, username string, userid uint) *types.Node {
return &types.Node{
Hostname: hostname,
UserID: new(userid),
User: &types.User{
Name: username,
},
}
}
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]*dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: tt.magicDNS,
}
nodeInShared1 := mach("test_get_shared_nodes_1", "shared1", 1)
got := generateDNSConfig(
&types.Config{
TailcfgDNSConfig: &dnsConfigOrig,
},
nodeInShared1.View(),
)
if diff := cmp.Diff(tt.want, got, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("generateDNSConfig() unexpected result (-want +got):\n%s", diff)
}
})
}
}
================================================
FILE: hscontrol/mapper/node_conn.go
================================================
package mapper
import (
"fmt"
"strconv"
"sync"
"sync/atomic"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
)
// connectionEntry represents a single connection to a node.
type connectionEntry struct {
id string // unique connection ID
c chan<- *tailcfg.MapResponse
version tailcfg.CapabilityVersion
created time.Time
stop func()
lastUsed atomic.Int64 // Unix timestamp of last successful send
closed atomic.Bool // Indicates if this connection has been closed
}
// multiChannelNodeConn manages multiple concurrent connections for a single node.
type multiChannelNodeConn struct {
id types.NodeID
mapper *mapper
log zerolog.Logger
mutex sync.RWMutex
connections []*connectionEntry
// pendingMu protects pending changes independently of the connection mutex.
// This avoids contention between addToBatch (which appends changes) and
// send() (which sends data to connections).
pendingMu sync.Mutex
pending []change.Change
// workMu serializes change processing for this node across batch ticks.
// Without this, two workers could process consecutive ticks' bundles
// concurrently, causing out-of-order MapResponse delivery and races
// on lastSentPeers (Clear+Store in updateSentPeers vs Range in
// computePeerDiff).
workMu sync.Mutex
closeOnce sync.Once
updateCount atomic.Int64
// disconnectedAt records when the last connection was removed.
// nil means the node is considered connected (or newly created);
// non-nil means the node disconnected at the stored timestamp.
// Used by cleanupOfflineNodes to evict stale entries.
disconnectedAt atomic.Pointer[time.Time]
// lastSentPeers tracks which peers were last sent to this node.
// This enables computing diffs for policy changes instead of sending
// full peer lists (which clients interpret as "no change" when empty).
// Using xsync.Map for lock-free concurrent access.
lastSentPeers *xsync.Map[tailcfg.NodeID, struct{}]
}
// connIDCounter is a monotonically increasing counter used to generate
// unique connection identifiers without the overhead of crypto/rand.
// Connection IDs are process-local and need not be cryptographically random.
var connIDCounter atomic.Uint64
// generateConnectionID generates a unique connection identifier.
func generateConnectionID() string {
return strconv.FormatUint(connIDCounter.Add(1), 10)
}
// newMultiChannelNodeConn creates a new multi-channel node connection.
func newMultiChannelNodeConn(id types.NodeID, mapper *mapper) *multiChannelNodeConn {
return &multiChannelNodeConn{
id: id,
mapper: mapper,
lastSentPeers: xsync.NewMap[tailcfg.NodeID, struct{}](),
log: log.With().Uint64(zf.NodeID, id.Uint64()).Logger(),
}
}
func (mc *multiChannelNodeConn) close() {
mc.closeOnce.Do(func() {
mc.mutex.Lock()
defer mc.mutex.Unlock()
for _, conn := range mc.connections {
mc.stopConnection(conn)
}
})
}
// stopConnection marks a connection as closed and tears down the owning session
// at most once, even if multiple cleanup paths race to remove it.
func (mc *multiChannelNodeConn) stopConnection(conn *connectionEntry) {
if conn.closed.CompareAndSwap(false, true) {
if conn.stop != nil {
conn.stop()
}
}
}
// removeConnectionAtIndexLocked removes the active connection at index.
// If stopConnection is true, it also stops that session.
// Caller must hold mc.mutex.
func (mc *multiChannelNodeConn) removeConnectionAtIndexLocked(i int, stopConnection bool) *connectionEntry {
conn := mc.connections[i]
copy(mc.connections[i:], mc.connections[i+1:])
mc.connections[len(mc.connections)-1] = nil // release pointer for GC
mc.connections = mc.connections[:len(mc.connections)-1]
if stopConnection {
mc.stopConnection(conn)
}
return conn
}
// addConnection adds a new connection.
func (mc *multiChannelNodeConn) addConnection(entry *connectionEntry) {
mc.mutex.Lock()
defer mc.mutex.Unlock()
mc.connections = append(mc.connections, entry)
mc.log.Debug().Str(zf.ConnID, entry.id).
Int("total_connections", len(mc.connections)).
Msg("connection added")
}
// removeConnectionByChannel removes a connection by matching channel pointer.
func (mc *multiChannelNodeConn) removeConnectionByChannel(c chan<- *tailcfg.MapResponse) bool {
mc.mutex.Lock()
defer mc.mutex.Unlock()
for i, entry := range mc.connections {
if entry.c == c {
mc.removeConnectionAtIndexLocked(i, false)
mc.log.Debug().Str(zf.ConnID, entry.id).
Int("remaining_connections", len(mc.connections)).
Msg("connection removed")
return true
}
}
return false
}
// hasActiveConnections checks if the node has any active connections.
func (mc *multiChannelNodeConn) hasActiveConnections() bool {
mc.mutex.RLock()
defer mc.mutex.RUnlock()
return len(mc.connections) > 0
}
// getActiveConnectionCount returns the number of active connections.
func (mc *multiChannelNodeConn) getActiveConnectionCount() int {
mc.mutex.RLock()
defer mc.mutex.RUnlock()
return len(mc.connections)
}
// markConnected clears the disconnect timestamp, indicating the node
// has an active connection.
func (mc *multiChannelNodeConn) markConnected() {
mc.disconnectedAt.Store(nil)
}
// markDisconnected records the current time as the moment the node
// lost its last connection. Used by cleanupOfflineNodes to determine
// how long the node has been offline.
func (mc *multiChannelNodeConn) markDisconnected() {
now := time.Now()
mc.disconnectedAt.Store(&now)
}
// isConnected returns true if the node has active connections or has
// not been marked as disconnected.
func (mc *multiChannelNodeConn) isConnected() bool {
if mc.hasActiveConnections() {
return true
}
return mc.disconnectedAt.Load() == nil
}
// offlineDuration returns how long the node has been disconnected.
// Returns 0 if the node is connected or has never been marked as disconnected.
func (mc *multiChannelNodeConn) offlineDuration() time.Duration {
t := mc.disconnectedAt.Load()
if t == nil {
return 0
}
return time.Since(*t)
}
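// exampleEvictIfStale is an illustrative sketch (hypothetical helper, not
// called by production code) of the eviction test described above for
// cleanupOfflineNodes: an entry is stale once it has no connections and
// has been offline for longer than maxOffline.
func exampleEvictIfStale(mc *multiChannelNodeConn, maxOffline time.Duration) bool {
	return !mc.isConnected() && mc.offlineDuration() > maxOffline
}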
// appendPending appends changes to this node's pending change list.
// Thread-safe via pendingMu; does not contend with the connection mutex.
func (mc *multiChannelNodeConn) appendPending(changes ...change.Change) {
mc.pendingMu.Lock()
mc.pending = append(mc.pending, changes...)
mc.pendingMu.Unlock()
}
// drainPending atomically removes and returns all pending changes.
// Returns nil if there are no pending changes.
func (mc *multiChannelNodeConn) drainPending() []change.Change {
mc.pendingMu.Lock()
p := mc.pending
mc.pending = nil
mc.pendingMu.Unlock()
return p
}
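// exampleBatchDrain is an illustrative sketch (hypothetical worker, not
// part of the production flow) of how appendPending and drainPending are
// meant to pair up: producers append at any time without touching the
// connection mutex, while a single worker drains the whole bundle per tick
// under workMu so consecutive ticks cannot be processed out of order.
func exampleBatchDrain(mc *multiChannelNodeConn, incoming []change.Change, process func([]change.Change)) {
	// Producer side: cheap append that only takes pendingMu.
	mc.appendPending(incoming...)

	// Worker side: serialize processing across ticks, then drain atomically.
	mc.workMu.Lock()
	defer mc.workMu.Unlock()
	if pending := mc.drainPending(); pending != nil {
		process(pending)
	}
}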
// send broadcasts data to all active connections for the node.
//
// To avoid holding the write lock during potentially slow sends (each stale
// connection can block for up to 50ms), the method snapshots connections under
// a read lock, sends without any lock held, then write-locks only to remove
// failures. New connections added between the snapshot and cleanup are safe:
// they receive a full initial map via AddNode, so missing this update causes
// no data loss.
func (mc *multiChannelNodeConn) send(data *tailcfg.MapResponse) error {
if data == nil {
return nil
}
// Snapshot connections under read lock.
mc.mutex.RLock()
if len(mc.connections) == 0 {
mc.mutex.RUnlock()
mc.log.Trace().
Msg("send: no active connections, skipping")
return nil
}
// Copy the slice so we can release the read lock before sending.
snapshot := make([]*connectionEntry, len(mc.connections))
copy(snapshot, mc.connections)
mc.mutex.RUnlock()
mc.log.Trace().
Int("total_connections", len(snapshot)).
Msg("send: broadcasting")
// Send to all connections without holding any lock.
// Stale connection timeouts (50ms each) happen here without blocking
// other goroutines that need the mutex.
var (
lastErr error
successCount int
failed []*connectionEntry
)
for _, conn := range snapshot {
err := conn.send(data)
if err != nil {
lastErr = err
failed = append(failed, conn)
mc.log.Warn().Err(err).
Str(zf.ConnID, conn.id).
Msg("send: connection failed")
} else {
successCount++
}
}
// Write-lock only to remove failed connections.
if len(failed) > 0 {
mc.mutex.Lock()
// Remove by pointer identity: only remove entries that still exist
// in the current connections slice and match a failed pointer.
// New connections added since the snapshot are not affected.
failedSet := make(map[*connectionEntry]struct{}, len(failed))
for _, f := range failed {
failedSet[f] = struct{}{}
}
clean := mc.connections[:0]
for _, conn := range mc.connections {
if _, isFailed := failedSet[conn]; !isFailed {
clean = append(clean, conn)
} else {
mc.log.Debug().
Str(zf.ConnID, conn.id).
Msg("send: removing failed connection")
// Tear down the owning session so the old serveLongPoll
// goroutine exits instead of lingering as a stale session.
mc.stopConnection(conn)
}
}
// Nil out trailing slots so removed *connectionEntry values
// are not retained by the backing array.
for i := len(clean); i < len(mc.connections); i++ {
mc.connections[i] = nil
}
mc.connections = clean
mc.mutex.Unlock()
}
mc.updateCount.Add(1)
mc.log.Trace().
Int("successful_sends", successCount).
Int("failed_connections", len(failed)).
Msg("send: broadcast complete")
// Success if at least one send succeeded
if successCount > 0 {
return nil
}
return fmt.Errorf("node %d: all connections failed, last error: %w", mc.id, lastErr)
}
// send sends data to a single connection entry with timeout-based stale connection detection.
func (entry *connectionEntry) send(data *tailcfg.MapResponse) error {
if data == nil {
return nil
}
// Check if the connection has been closed to prevent send on closed channel panic.
// This can happen during shutdown when Close() is called while workers are still processing.
if entry.closed.Load() {
return fmt.Errorf("connection %s: %w", entry.id, errConnectionClosed)
}
// Use a short timeout to detect stale connections where the client isn't reading the channel.
// This is critical for detecting Docker containers that are forcefully terminated
// but still have channels that appear open.
//
// We use time.NewTimer + Stop instead of time.After to avoid leaking timers.
// time.After creates a timer that lives in the runtime's timer heap until it fires,
// even when the send succeeds immediately. On the hot path (1000+ nodes per tick),
// this leaks thousands of timers per second.
timer := time.NewTimer(50 * time.Millisecond) //nolint:mnd
defer timer.Stop()
select {
case entry.c <- data:
// Update last used timestamp on successful send
entry.lastUsed.Store(time.Now().Unix())
return nil
case <-timer.C:
// Connection is likely stale - client isn't reading from channel
// This catches the case where Docker containers are killed but channels remain open
return fmt.Errorf("connection %s: %w", entry.id, ErrConnectionSendTimeout)
}
}
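// sendNonBlocking is a standalone, generic sketch (hypothetical, not used
// above) of the timer pattern entry.send relies on: time.NewTimer with a
// deferred Stop releases the timer as soon as the select returns, whereas
// time.After would leave it in the runtime's timer heap until it fires.
func sendNonBlocking[T any](ch chan<- T, v T, timeout time.Duration) error {
	timer := time.NewTimer(timeout)
	defer timer.Stop() // frees the timer immediately on the fast path
	select {
	case ch <- v:
		return nil
	case <-timer.C:
		return ErrConnectionSendTimeout
	}
}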
// nodeID returns the node ID.
func (mc *multiChannelNodeConn) nodeID() types.NodeID {
return mc.id
}
// version returns the capability version from the first active connection.
// All connections for a node should have the same version in practice.
func (mc *multiChannelNodeConn) version() tailcfg.CapabilityVersion {
mc.mutex.RLock()
defer mc.mutex.RUnlock()
if len(mc.connections) == 0 {
return 0
}
return mc.connections[0].version
}
// updateSentPeers updates the tracked peer state based on a sent MapResponse.
// This must be called after successfully sending a response to keep track of
// what the client knows about, enabling accurate diffs for future updates.
func (mc *multiChannelNodeConn) updateSentPeers(resp *tailcfg.MapResponse) {
if resp == nil {
return
}
// Full peer list replaces tracked state entirely
if resp.Peers != nil {
mc.lastSentPeers.Clear()
for _, peer := range resp.Peers {
mc.lastSentPeers.Store(peer.ID, struct{}{})
}
}
// Incremental additions
for _, peer := range resp.PeersChanged {
mc.lastSentPeers.Store(peer.ID, struct{}{})
}
// Incremental removals
for _, id := range resp.PeersRemoved {
mc.lastSentPeers.Delete(id)
}
}
// computePeerDiff compares the current peer list against what was last sent
// and returns the peers that were removed (in lastSentPeers but not in current).
func (mc *multiChannelNodeConn) computePeerDiff(currentPeers []tailcfg.NodeID) []tailcfg.NodeID {
currentSet := make(map[tailcfg.NodeID]struct{}, len(currentPeers))
for _, id := range currentPeers {
currentSet[id] = struct{}{}
}
var removed []tailcfg.NodeID
// Find removed: in lastSentPeers but not in current
mc.lastSentPeers.Range(func(id tailcfg.NodeID, _ struct{}) bool {
if _, exists := currentSet[id]; !exists {
removed = append(removed, id)
}
return true
})
return removed
}
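// examplePolicyDiff is an illustrative sketch (hypothetical helper, not
// part of the production flow) of how computePeerDiff and updateSentPeers
// combine: peers that disappeared from the current list become an explicit
// PeersRemoved entry, and the tracked state is refreshed only once the
// response has actually been delivered.
func examplePolicyDiff(mc *multiChannelNodeConn, current []tailcfg.NodeID) *tailcfg.MapResponse {
	resp := &tailcfg.MapResponse{
		PeersRemoved: mc.computePeerDiff(current),
	}
	// In the real flow this would run only after mc.send(resp) succeeds.
	mc.updateSentPeers(resp)
	return resp
}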
// change applies a change to all active connections for the node.
func (mc *multiChannelNodeConn) change(r change.Change) error {
return handleNodeChange(mc, mc.mapper, r)
}
================================================
FILE: hscontrol/mapper/tail_test.go
================================================
package mapper
import (
"encoding/json"
"net/netip"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/juanfont/headscale/hscontrol/routes"
"github.com/juanfont/headscale/hscontrol/types"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
func TestTailNode(t *testing.T) {
mustNK := func(str string) key.NodePublic {
var k key.NodePublic
_ = k.UnmarshalText([]byte(str))
return k
}
mustDK := func(str string) key.DiscoPublic {
var k key.DiscoPublic
_ = k.UnmarshalText([]byte(str))
return k
}
mustMK := func(str string) key.MachinePublic {
var k key.MachinePublic
_ = k.UnmarshalText([]byte(str))
return k
}
hiview := func(hoin tailcfg.Hostinfo) tailcfg.HostinfoView {
return hoin.View()
}
created := time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)
lastSeen := time.Date(2009, time.November, 10, 23, 9, 0, 0, time.UTC)
expire := time.Date(2500, time.November, 11, 23, 0, 0, 0, time.UTC)
tests := []struct {
name string
node *types.Node
pol []byte
dnsConfig *tailcfg.DNSConfig
baseDomain string
want *tailcfg.Node
wantErr bool
}{
{
name: "empty-node",
node: &types.Node{
GivenName: "empty",
Hostinfo: &tailcfg.Hostinfo{},
},
dnsConfig: &tailcfg.DNSConfig{},
baseDomain: "",
want: &tailcfg.Node{
Name: "empty",
StableID: "0",
HomeDERP: 0,
LegacyDERPString: "127.3.3.40:0",
Hostinfo: hiview(tailcfg.Hostinfo{}),
MachineAuthorized: true,
CapMap: tailcfg.NodeCapMap{
tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{},
tailcfg.CapabilityAdmin: []tailcfg.RawMessage{},
tailcfg.CapabilitySSH: []tailcfg.RawMessage{},
},
},
wantErr: false,
},
{
name: "minimal-node",
node: &types.Node{
ID: 0,
MachineKey: mustMK(
"mkey:f08305b4ee4250b95a70f3b7504d048d75d899993c624a26d422c67af0422507",
),
NodeKey: mustNK(
"nodekey:9b2ffa7e08cc421a3d2cca9012280f6a236fd0de0b4ce005b30a98ad930306fe",
),
DiscoKey: mustDK(
"discokey:cf7b0fd05da556fdc3bab365787b506fd82d64a70745db70e00e86c1b1c03084",
),
IPv4: iap("100.64.0.1"),
Hostname: "mini",
GivenName: "mini",
				UserID: new(uint),
User: &types.User{
Name: "mini",
},
Tags: []string{},
AuthKey: &types.PreAuthKey{},
LastSeen: &lastSeen,
Expiry: &expire,
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
netip.MustParsePrefix("192.168.0.0/24"),
netip.MustParsePrefix("172.0.0.0/10"),
},
},
ApprovedRoutes: []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6(), netip.MustParsePrefix("192.168.0.0/24")},
CreatedAt: created,
},
dnsConfig: &tailcfg.DNSConfig{},
baseDomain: "",
want: &tailcfg.Node{
ID: 0,
StableID: "0",
Name: "mini",
User: 0,
Key: mustNK(
"nodekey:9b2ffa7e08cc421a3d2cca9012280f6a236fd0de0b4ce005b30a98ad930306fe",
),
KeyExpiry: expire,
Machine: mustMK(
"mkey:f08305b4ee4250b95a70f3b7504d048d75d899993c624a26d422c67af0422507",
),
DiscoKey: mustDK(
"discokey:cf7b0fd05da556fdc3bab365787b506fd82d64a70745db70e00e86c1b1c03084",
),
Addresses: []netip.Prefix{netip.MustParsePrefix("100.64.0.1/32")},
AllowedIPs: []netip.Prefix{
tsaddr.AllIPv4(),
netip.MustParsePrefix("100.64.0.1/32"),
netip.MustParsePrefix("192.168.0.0/24"),
tsaddr.AllIPv6(),
},
PrimaryRoutes: []netip.Prefix{
netip.MustParsePrefix("192.168.0.0/24"),
},
HomeDERP: 0,
LegacyDERPString: "127.3.3.40:0",
Hostinfo: hiview(tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
netip.MustParsePrefix("192.168.0.0/24"),
netip.MustParsePrefix("172.0.0.0/10"),
},
}),
Created: created,
Tags: []string{},
MachineAuthorized: true,
CapMap: tailcfg.NodeCapMap{
tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{},
tailcfg.CapabilityAdmin: []tailcfg.RawMessage{},
tailcfg.CapabilitySSH: []tailcfg.RawMessage{},
},
},
wantErr: false,
},
{
name: "check-dot-suffix-on-node-name",
node: &types.Node{
GivenName: "minimal",
Hostinfo: &tailcfg.Hostinfo{},
},
dnsConfig: &tailcfg.DNSConfig{},
baseDomain: "example.com",
want: &tailcfg.Node{
// a node name should have a dot appended
Name: "minimal.example.com.",
StableID: "0",
HomeDERP: 0,
LegacyDERPString: "127.3.3.40:0",
Hostinfo: hiview(tailcfg.Hostinfo{}),
MachineAuthorized: true,
CapMap: tailcfg.NodeCapMap{
tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{},
tailcfg.CapabilityAdmin: []tailcfg.RawMessage{},
tailcfg.CapabilitySSH: []tailcfg.RawMessage{},
},
},
wantErr: false,
},
// TODO: Add tests to check other aspects of the node conversion:
// - With tags and policy
// - dnsconfig and basedomain
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
primary := routes.New()
cfg := &types.Config{
BaseDomain: tt.baseDomain,
TailcfgDNSConfig: tt.dnsConfig,
RandomizeClientPort: false,
Taildrop: types.TaildropConfig{Enabled: true},
}
_ = primary.SetRoutes(tt.node.ID, tt.node.SubnetRoutes()...)
// This is a hack to avoid having a second node to test the primary route.
// This should be baked into the test case proper if it is extended in the future.
_ = primary.SetRoutes(2, netip.MustParsePrefix("192.168.0.0/24"))
got, err := tt.node.View().TailNode(
0,
func(id types.NodeID) []netip.Prefix {
return primary.PrimaryRoutes(id)
},
cfg,
)
if (err != nil) != tt.wantErr {
t.Errorf("TailNode() error = %v, wantErr %v", err, tt.wantErr)
return
}
if diff := cmp.Diff(tt.want, got, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("TailNode() unexpected result (-want +got):\n%s", diff)
}
})
}
}
func TestNodeExpiry(t *testing.T) {
tp := func(t time.Time) *time.Time {
return &t
}
tests := []struct {
name string
exp *time.Time
wantTime time.Time
wantTimeZero bool
}{
{
name: "no-expiry",
exp: nil,
wantTimeZero: true,
},
{
name: "zero-expiry",
exp: &time.Time{},
wantTimeZero: true,
},
{
name: "localtime",
exp: tp(time.Time{}.Local()), //nolint:gosmopolitan
wantTimeZero: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
node := &types.Node{
ID: 0,
GivenName: "test",
Expiry: tt.exp,
}
tn, err := node.View().TailNode(
0,
func(id types.NodeID) []netip.Prefix {
return []netip.Prefix{}
},
&types.Config{Taildrop: types.TaildropConfig{Enabled: true}},
)
if err != nil {
t.Fatalf("nodeExpiry() error = %v", err)
}
// Round trip the node through JSON to ensure the time is serialized correctly
seri, err := json.Marshal(tn)
if err != nil {
t.Fatalf("nodeExpiry() error = %v", err)
}
var deseri tailcfg.Node
err = json.Unmarshal(seri, &deseri)
if err != nil {
t.Fatalf("nodeExpiry() error = %v", err)
}
if tt.wantTimeZero {
if !deseri.KeyExpiry.IsZero() {
t.Errorf("nodeExpiry() = %v, want zero", deseri.KeyExpiry)
}
} else if deseri.KeyExpiry != tt.wantTime {
t.Errorf("nodeExpiry() = %v, want %v", deseri.KeyExpiry, tt.wantTime)
}
})
}
}
================================================
FILE: hscontrol/metrics.go
================================================
package hscontrol
import (
"net/http"
"strconv"
"github.com/gorilla/mux"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"tailscale.com/envknob"
)
var debugHighCardinalityMetrics = envknob.Bool("HEADSCALE_DEBUG_HIGH_CARDINALITY_METRICS")
var mapResponseLastSentSeconds *prometheus.GaugeVec
func init() {
if debugHighCardinalityMetrics {
mapResponseLastSentSeconds = promauto.NewGaugeVec(prometheus.GaugeOpts{
Namespace: prometheusNamespace,
Name: "mapresponse_last_sent_seconds",
Help: "last sent metric to node.id",
}, []string{"type", "id"})
}
}
const prometheusNamespace = "headscale"
var (
mapResponseSent = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "mapresponse_sent_total",
Help: "total count of mapresponses sent to clients",
}, []string{"status", "type"})
mapResponseEndpointUpdates = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "mapresponse_endpoint_updates_total",
Help: "total count of endpoint updates received",
}, []string{"status"})
mapResponseEnded = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "mapresponse_ended_total",
Help: "total count of new mapsessions ended",
}, []string{"reason"})
httpDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
Namespace: prometheusNamespace,
Name: "http_duration_seconds",
Help: "Duration of HTTP requests.",
}, []string{"path"})
httpCounter = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: prometheusNamespace,
Name: "http_requests_total",
Help: "Total number of http requests processed",
}, []string{"code", "method", "path"},
)
)
// prometheusMiddleware implements mux.MiddlewareFunc.
func prometheusMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
route := mux.CurrentRoute(r)
path, _ := route.GetPathTemplate()
// Ignore streaming and noise sessions
// it has its own router further down.
if path == "/ts2021" || path == "/machine/map" || path == "/derp" || path == "/derp/probe" || path == "/derp/latency-check" || path == "/bootstrap-dns" {
next.ServeHTTP(w, r)
return
}
rw := &respWriterProm{ResponseWriter: w}
timer := prometheus.NewTimer(httpDuration.WithLabelValues(path))
next.ServeHTTP(rw, r)
timer.ObserveDuration()
httpCounter.WithLabelValues(strconv.Itoa(rw.status), r.Method, path).Inc()
})
}
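// exampleRouterSetup is an illustrative sketch (hypothetical, not part of
// this file's API) of how prometheusMiddleware is meant to be attached to
// a mux router; the /health route exists only for demonstration.
func exampleRouterSetup() *mux.Router {
	router := mux.NewRouter()
	router.Use(prometheusMiddleware)
	router.HandleFunc("/health", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return router
}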
type respWriterProm struct {
http.ResponseWriter
status int
written int64
wroteHeader bool
}
func (r *respWriterProm) WriteHeader(code int) {
r.status = code
r.wroteHeader = true
r.ResponseWriter.WriteHeader(code)
}
func (r *respWriterProm) Write(b []byte) (int, error) {
if !r.wroteHeader {
r.WriteHeader(http.StatusOK)
}
n, err := r.ResponseWriter.Write(b)
r.written += int64(n)
return n, err
}
================================================
FILE: hscontrol/noise.go
================================================
package hscontrol
import (
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"time"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/metrics"
"github.com/juanfont/headscale/hscontrol/capver"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"golang.org/x/net/http2"
"tailscale.com/control/controlbase"
"tailscale.com/control/controlhttp/controlhttpserver"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
// ErrUnsupportedClientVersion is returned when a client connects with an unsupported protocol version.
var ErrUnsupportedClientVersion = errors.New("unsupported client version")
// ErrMissingURLParameter is returned when a required URL parameter is not provided.
var ErrMissingURLParameter = errors.New("missing URL parameter")
// ErrUnsupportedURLParameterType is returned when a URL parameter has an unsupported type.
var ErrUnsupportedURLParameterType = errors.New("unsupported URL parameter type")
// ErrNoAuthSession is returned when an auth_id does not match any active auth session.
var ErrNoAuthSession = errors.New("no auth session found")
const (
// ts2021UpgradePath is the path that the server listens on for the WebSockets upgrade.
ts2021UpgradePath = "/ts2021"
// The first 9 bytes from the server to client over Noise are either an HTTP/2
// settings frame (a normal HTTP/2 setup) or, as Tailscale added later, an "early payload"
// header that's also 9 bytes long: 5 bytes (earlyPayloadMagic) followed by 4 bytes
// of length. Then that many bytes of JSON-encoded tailcfg.EarlyNoise.
// The early payload is optional. Some servers may not send it... But we do!
earlyPayloadMagic = "\xff\xff\xffTS"
// noiseBodyLimit is the maximum allowed request body size for Noise protocol
// handlers. This prevents unauthenticated OOM attacks via unbounded io.ReadAll.
// No legitimate Noise request (MapRequest, RegisterRequest, etc.) comes close
// to this limit; typical payloads are a few KB.
noiseBodyLimit int64 = 1048576 // 1 MiB
)
type noiseServer struct {
headscale *Headscale
httpBaseConfig *http.Server
http2Server *http2.Server
conn *controlbase.Conn
machineKey key.MachinePublic
nodeKey key.NodePublic
// EarlyNoise-related stuff
challenge key.ChallengePrivate
protocolVersion int
}
// NoiseUpgradeHandler upgrades the connection and hijacks the net.Conn
// in order to use the Noise-based TS2021 protocol. Listens on /ts2021.
func (h *Headscale) NoiseUpgradeHandler(
writer http.ResponseWriter,
req *http.Request,
) {
log.Trace().Caller().Msgf("noise upgrade handler for client %s", req.RemoteAddr)
upgrade := req.Header.Get("Upgrade")
if upgrade == "" {
// This probably means that the user is running Headscale behind an
// improperly configured reverse proxy. TS2021 requires WebSockets to
// be passed to Headscale. Let's give them a hint.
log.Warn().
Caller().
Msg("no upgrade header in TS2021 request. If headscale is behind a reverse proxy, make sure it is configured to pass WebSockets through.")
http.Error(writer, "Internal error", http.StatusInternalServerError)
return
}
ns := noiseServer{
headscale: h,
challenge: key.NewChallenge(),
}
noiseConn, err := controlhttpserver.AcceptHTTP(
req.Context(),
writer,
req,
*h.noisePrivateKey,
ns.earlyNoise,
)
if err != nil {
httpError(writer, fmt.Errorf("upgrading noise connection: %w", err))
return
}
ns.conn = noiseConn
ns.machineKey = ns.conn.Peer()
ns.protocolVersion = ns.conn.ProtocolVersion()
// This router is served only over the Noise connection, and exposes only the new API.
//
// The HTTP2 server that exposes this router is created for
// a single hijacked connection from /ts2021, using netutil.NewOneConnListener
r := chi.NewRouter()
// Limit request body size to prevent unauthenticated OOM attacks.
// The Noise handshake accepts any machine key without checking
// registration, so all endpoints behind this router are reachable
// without credentials.
r.Use(func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, noiseBodyLimit)
next.ServeHTTP(w, r)
})
})
r.Use(metrics.Collector(metrics.CollectorOpts{
Host: false,
Proto: true,
Skip: func(r *http.Request) bool {
return r.Method != http.MethodOptions
},
}))
r.Use(middleware.RequestID)
r.Use(middleware.RealIP)
r.Use(middleware.RequestLogger(&zerologRequestLogger{}))
r.Use(middleware.Recoverer)
r.Handle("/metrics", metrics.Handler())
r.Route("/machine", func(r chi.Router) {
r.Post("/register", ns.RegistrationHandler)
r.Post("/map", ns.PollNetMapHandler)
// SSH Check mode endpoint, consulted to validate if a given SSH connection should be accepted or rejected.
r.Get("/ssh/action/from/{src_node_id}/to/{dst_node_id}", ns.SSHActionHandler)
// Not implemented yet
//
// /whoami is a debug endpoint to validate that the client can communicate over the connection,
// not clear if there is a specific response, it looks like it is just logged.
// https://github.com/tailscale/tailscale/blob/dfba01ca9bd8c4df02c3c32f400d9aeb897c5fc7/cmd/tailscale/cli/debug.go#L1138
r.Get("/whoami", ns.NotImplementedHandler)
		// The client sends a [tailcfg.SetDNSRequest] to this endpoint and expects
		// the server to create or update the DNS record "somewhere".
// It is typically a TXT record for an ACME challenge.
r.Post("/set-dns", ns.NotImplementedHandler)
// A patch of [tailcfg.SetDeviceAttributesRequest] to update device attributes.
// We currently do not support device attributes.
r.Patch("/set-device-attr", ns.NotImplementedHandler)
// A [tailcfg.AuditLogRequest] to send audit log entries to the server.
// The server is expected to store them "somewhere".
		// We currently do not support audit logs.
r.Post("/audit-log", ns.NotImplementedHandler)
// handles requests to get an OIDC ID token. Receives a [tailcfg.TokenRequest].
r.Post("/id-token", ns.NotImplementedHandler)
// Asks the server if a feature is available and receive information about how to enable it.
// Gets a [tailcfg.QueryFeatureRequest] and returns a [tailcfg.QueryFeatureResponse].
r.Post("/feature/query", ns.NotImplementedHandler)
r.Post("/update-health", ns.NotImplementedHandler)
r.Route("/webclient", func(r chi.Router) {})
r.Post("/c2n", ns.NotImplementedHandler)
})
ns.httpBaseConfig = &http.Server{
Handler: r,
ReadHeaderTimeout: types.HTTPTimeout,
}
ns.http2Server = &http2.Server{}
ns.http2Server.ServeConn(
noiseConn,
&http2.ServeConnOpts{
BaseConfig: ns.httpBaseConfig,
},
)
}
func unsupportedClientError(version tailcfg.CapabilityVersion) error {
return fmt.Errorf("%w: %s (%d)", ErrUnsupportedClientVersion, capver.TailscaleVersion(version), version)
}
func (ns *noiseServer) earlyNoise(protocolVersion int, writer io.Writer) error {
if !isSupportedVersion(tailcfg.CapabilityVersion(protocolVersion)) {
return unsupportedClientError(tailcfg.CapabilityVersion(protocolVersion))
}
earlyJSON, err := json.Marshal(&tailcfg.EarlyNoise{
NodeKeyChallenge: ns.challenge.Public(),
})
if err != nil {
return err
}
// 5 bytes that won't be mistaken for an HTTP/2 frame:
// https://httpwg.org/specs/rfc7540.html#rfc.section.4.1 (Especially not
// an HTTP/2 settings frame, which isn't of type 'T')
var notH2Frame [5]byte
copy(notH2Frame[:], earlyPayloadMagic)
var lenBuf [4]byte
binary.BigEndian.PutUint32(lenBuf[:], uint32(len(earlyJSON))) //nolint:gosec // JSON length is bounded
// These writes are all buffered by caller, so fine to do them
// separately:
if _, err := writer.Write(notH2Frame[:]); err != nil { //nolint:noinlineerr
return err
}
if _, err := writer.Write(lenBuf[:]); err != nil { //nolint:noinlineerr
return err
}
if _, err := writer.Write(earlyJSON); err != nil { //nolint:noinlineerr
return err
}
return nil
}
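// parseEarlyPayload is an illustrative client-side sketch (hypothetical,
// not used by the server) of the framing earlyNoise writes above: 5 magic
// bytes, a 4-byte big-endian length, then that many bytes of JSON-encoded
// tailcfg.EarlyNoise.
func parseEarlyPayload(r io.Reader) (*tailcfg.EarlyNoise, error) {
	var header [9]byte
	if _, err := io.ReadFull(r, header[:]); err != nil {
		return nil, err
	}
	if string(header[:len(earlyPayloadMagic)]) != earlyPayloadMagic {
		return nil, errors.New("not an early payload")
	}
	// A real client would bound this allocation before trusting the length.
	payload := make([]byte, binary.BigEndian.Uint32(header[len(earlyPayloadMagic):]))
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	var early tailcfg.EarlyNoise
	if err := json.Unmarshal(payload, &early); err != nil {
		return nil, err
	}
	return &early, nil
}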
func isSupportedVersion(version tailcfg.CapabilityVersion) bool {
return version >= capver.MinSupportedCapabilityVersion
}
func rejectUnsupported(
writer http.ResponseWriter,
version tailcfg.CapabilityVersion,
mkey key.MachinePublic,
nkey key.NodePublic,
) bool {
// Reject unsupported versions
if !isSupportedVersion(version) {
log.Error().
Caller().
Int("minimum_cap_ver", int(capver.MinSupportedCapabilityVersion)).
Int("client_cap_ver", int(version)).
Str("minimum_version", capver.TailscaleVersion(capver.MinSupportedCapabilityVersion)).
Str("client_version", capver.TailscaleVersion(version)).
Str("node.key", nkey.ShortString()).
Str("machine.key", mkey.ShortString()).
Msg("unsupported client connected")
http.Error(writer, unsupportedClientError(version).Error(), http.StatusBadRequest)
return true
}
return false
}
func (ns *noiseServer) NotImplementedHandler(writer http.ResponseWriter, req *http.Request) {
log.Trace().Caller().Str("path", req.URL.String()).Msg("not implemented handler hit")
http.Error(writer, "Not implemented yet", http.StatusNotImplemented)
}
func urlParam[T any](req *http.Request, key string) (T, error) {
var zero T
param := chi.URLParam(req, key)
if param == "" {
return zero, fmt.Errorf("%w: %s", ErrMissingURLParameter, key)
}
var value T
switch any(value).(type) {
case string:
v, ok := any(param).(T)
if !ok {
return zero, fmt.Errorf("%w: %T", ErrUnsupportedURLParameterType, value)
}
value = v
case types.NodeID:
id, err := types.ParseNodeID(param)
if err != nil {
return zero, fmt.Errorf("parsing %s: %w", key, err)
}
v, ok := any(id).(T)
if !ok {
return zero, fmt.Errorf("%w: %T", ErrUnsupportedURLParameterType, value)
}
value = v
default:
return zero, fmt.Errorf("%w: %T", ErrUnsupportedURLParameterType, value)
}
return value, nil
}
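// exampleURLParamUsage is an illustrative sketch (hypothetical handler
// fragment) of the generic helper above: the same function yields a parsed
// types.NodeID or a raw string depending on the type parameter. The
// parameter names mirror the SSH action route registered earlier.
func exampleURLParamUsage(req *http.Request) error {
	src, err := urlParam[types.NodeID](req, "src_node_id")
	if err != nil {
		return err
	}
	dst, err := urlParam[types.NodeID](req, "dst_node_id")
	if err != nil {
		return err
	}
	log.Trace().Caller().
		Uint64("src_node_id", src.Uint64()).
		Uint64("dst_node_id", dst.Uint64()).
		Msg("parsed SSH action URL parameters")
	return nil
}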
// SSHActionHandler handles the /machine/ssh/action/from/{src_node_id}/to/{dst_node_id}
// endpoint, returning a [tailcfg.SSHAction] to the client with the verdict
// of an SSH access request.
func (ns *noiseServer) SSHActionHandler(
writer http.ResponseWriter,
req *http.Request,
) {
srcNodeID, err := urlParam[types.NodeID](req, "src_node_id")
if err != nil {
httpError(writer, NewHTTPError(
http.StatusBadRequest,
"Invalid src_node_id",
err,
))
return
}
dstNodeID, err := urlParam[types.NodeID](req, "dst_node_id")
if err != nil {
httpError(writer, NewHTTPError(
http.StatusBadRequest,
"Invalid dst_node_id",
err,
))
return
}
reqLog := log.With().
Uint64("src_node_id", srcNodeID.Uint64()).
Uint64("dst_node_id", dstNodeID.Uint64()).
Str("ssh_user", req.URL.Query().Get("ssh_user")).
Str("local_user", req.URL.Query().Get("local_user")).
Logger()
reqLog.Trace().Caller().Msg("SSH action request")
action, err := ns.sshAction(
reqLog,
srcNodeID, dstNodeID,
req.URL.Query().Get("auth_id"),
)
if err != nil {
httpError(writer, err)
return
}
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
err = json.NewEncoder(writer).Encode(action)
if err != nil {
reqLog.Error().Caller().Err(err).
Msg("failed to encode SSH action response")
return
}
if flusher, ok := writer.(http.Flusher); ok {
flusher.Flush()
}
}
// sshAction resolves the SSH action for the given request parameters.
// It returns the action to send to the client, or an HTTPError on failure.
//
// Three cases:
// 1. Initial request, auto-approved — source recently authenticated
// within the check period, accept immediately.
// 2. Initial request, needs auth — build a HoldAndDelegate URL and
// wait for the user to authenticate.
// 3. Follow-up request — an auth_id is present, wait for the auth
// verdict and accept or reject.
func (ns *noiseServer) sshAction(
reqLog zerolog.Logger,
srcNodeID, dstNodeID types.NodeID,
authIDStr string,
) (*tailcfg.SSHAction, error) {
action := tailcfg.SSHAction{
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
AllowRemotePortForwarding: true,
}
// Look up check params from the server's own policy rather than
// trusting URL parameters, which the client could tamper with.
checkPeriod, checkFound := ns.headscale.state.SSHCheckParams(
srcNodeID, dstNodeID,
)
// Follow-up request with auth_id — wait for the auth verdict.
if authIDStr != "" {
return ns.sshActionFollowUp(
reqLog, &action, authIDStr,
srcNodeID, dstNodeID,
checkFound,
)
}
// Initial request — check if auto-approval applies.
if checkFound && checkPeriod > 0 {
if lastAuth, ok := ns.headscale.state.GetLastSSHAuth(
srcNodeID, dstNodeID,
); ok && time.Since(lastAuth) < checkPeriod {
reqLog.Trace().Caller().
Dur("check_period", checkPeriod).
Time("last_auth", lastAuth).
Msg("auto-approved within check period")
action.Accept = true
return &action, nil
}
}
// No auto-approval — create an auth session and hold.
return ns.sshActionHoldAndDelegate(reqLog, &action)
}
// sshActionHoldAndDelegate creates a new auth session and returns a
// HoldAndDelegate action that directs the client to authenticate.
func (ns *noiseServer) sshActionHoldAndDelegate(
reqLog zerolog.Logger,
action *tailcfg.SSHAction,
) (*tailcfg.SSHAction, error) {
holdURL, err := url.Parse(
ns.headscale.cfg.ServerURL +
"/machine/ssh/action/from/$SRC_NODE_ID/to/$DST_NODE_ID" +
"?ssh_user=$SSH_USER&local_user=$LOCAL_USER",
)
if err != nil {
return nil, NewHTTPError(
http.StatusInternalServerError,
"Internal error",
fmt.Errorf("parsing SSH action URL: %w", err),
)
}
authID, err := types.NewAuthID()
if err != nil {
return nil, NewHTTPError(
http.StatusInternalServerError,
"Internal error",
fmt.Errorf("generating auth ID: %w", err),
)
}
ns.headscale.state.SetAuthCacheEntry(authID, types.NewAuthRequest())
authURL := ns.headscale.authProvider.AuthURL(authID)
q := holdURL.Query()
q.Set("auth_id", authID.String())
holdURL.RawQuery = q.Encode()
action.HoldAndDelegate = holdURL.String()
// TODO(kradalby): here we can also send a very tiny mapresponse
// "popping" the url and opening it for the user.
action.Message = fmt.Sprintf(
"# Headscale SSH requires an additional check.\n"+
"# To authenticate, visit: %s\n"+
"# Authentication checked with Headscale SSH.\n",
authURL,
)
reqLog.Info().Caller().
Str("auth_id", authID.String()).
Msg("SSH check pending, waiting for auth")
return action, nil
}
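// exampleHoldAndDelegateURL is an illustrative sketch of the URL shape
// produced above; the host is an assumption. The $SRC_NODE_ID,
// $DST_NODE_ID, $SSH_USER and $LOCAL_USER variables are substituted by
// the Tailscale client, not the server, before the follow-up request.
func exampleHoldAndDelegateURL() string {
	return "https://headscale.example.com" +
		"/machine/ssh/action/from/$SRC_NODE_ID/to/$DST_NODE_ID" +
		"?auth_id=1234&ssh_user=$SSH_USER&local_user=$LOCAL_USER"
}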
// sshActionFollowUp handles follow-up requests where the client
// provides an auth_id. It blocks until the auth session resolves.
func (ns *noiseServer) sshActionFollowUp(
reqLog zerolog.Logger,
action *tailcfg.SSHAction,
authIDStr string,
srcNodeID, dstNodeID types.NodeID,
checkFound bool,
) (*tailcfg.SSHAction, error) {
authID, err := types.AuthIDFromString(authIDStr)
if err != nil {
return nil, NewHTTPError(
http.StatusBadRequest,
"Invalid auth_id",
fmt.Errorf("parsing auth_id: %w", err),
)
}
reqLog = reqLog.With().Str("auth_id", authID.String()).Logger()
auth, ok := ns.headscale.state.GetAuthCacheEntry(authID)
if !ok {
return nil, NewHTTPError(
http.StatusBadRequest,
"Invalid auth_id",
fmt.Errorf("%w: %s", ErrNoAuthSession, authID),
)
}
reqLog.Trace().Caller().Msg("SSH action follow-up")
verdict := <-auth.WaitForAuth()
if !verdict.Accept() {
action.Reject = true
reqLog.Trace().Caller().Err(verdict.Err).
Msg("authentication rejected")
return action, nil
}
action.Accept = true
// Record the successful auth for future auto-approval.
if checkFound {
ns.headscale.state.SetLastSSHAuth(srcNodeID, dstNodeID)
reqLog.Trace().Caller().
Msg("auth recorded for auto-approval")
}
return action, nil
}
// PollNetMapHandler takes care of /machine/map using the Noise protocol
//
// This is the busiest endpoint, as it keeps the HTTP long poll that updates
// the clients when something in the network changes.
//
// The clients POST stuff like HostInfo and their Endpoints here, but
// only after their first request (marked with the ReadOnly field).
//
// At this moment the updates are sent in a quite horrendous way, but they kinda work.
func (ns *noiseServer) PollNetMapHandler(
writer http.ResponseWriter,
req *http.Request,
) {
var mapRequest tailcfg.MapRequest
err := json.NewDecoder(req.Body).Decode(&mapRequest)
if err != nil {
httpError(writer, err)
return
}
// Reject unsupported versions
if rejectUnsupported(writer, mapRequest.Version, ns.machineKey, mapRequest.NodeKey) {
return
}
nv, err := ns.getAndValidateNode(mapRequest)
if err != nil {
httpError(writer, err)
return
}
ns.nodeKey = nv.NodeKey()
sess := ns.headscale.newMapSession(req.Context(), mapRequest, writer, nv.AsStruct())
sess.log.Trace().Caller().Msg("a node sending a MapRequest with Noise protocol")
if !sess.isStreaming() {
sess.serve()
} else {
sess.serveLongPoll()
}
}
func regErr(err error) *tailcfg.RegisterResponse {
return &tailcfg.RegisterResponse{Error: err.Error()}
}
// RegistrationHandler handles the actual registration process of a node.
func (ns *noiseServer) RegistrationHandler(
writer http.ResponseWriter,
req *http.Request,
) {
if req.Method != http.MethodPost {
httpError(writer, errMethodNotAllowed)
return
}
registerRequest, registerResponse := func() (*tailcfg.RegisterRequest, *tailcfg.RegisterResponse) { //nolint:contextcheck
var resp *tailcfg.RegisterResponse
var regReq tailcfg.RegisterRequest
		err := json.NewDecoder(req.Body).Decode(&regReq)
if err != nil {
			return &regReq, regErr(err)
}
ns.nodeKey = regReq.NodeKey
resp, err = ns.headscale.handleRegister(req.Context(), regReq, ns.conn.Peer())
if err != nil {
if httpErr, ok := errors.AsType[HTTPError](err); ok {
resp = &tailcfg.RegisterResponse{
Error: httpErr.Msg,
}
				return &regReq, resp
}
			return &regReq, regErr(err)
}
		return &regReq, resp
}()
// Reject unsupported versions
if rejectUnsupported(writer, registerRequest.Version, ns.machineKey, registerRequest.NodeKey) {
return
}
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
err := json.NewEncoder(writer).Encode(registerResponse)
if err != nil {
log.Error().Caller().Err(err).Msg("noise registration handler: failed to encode RegisterResponse")
return
}
// Ensure response is flushed to client
if flusher, ok := writer.(http.Flusher); ok {
flusher.Flush()
}
}
// getAndValidateNode retrieves the node from the database using the NodeKey
// and validates that it matches the MachineKey from the Noise session.
func (ns *noiseServer) getAndValidateNode(mapRequest tailcfg.MapRequest) (types.NodeView, error) {
nv, ok := ns.headscale.state.GetNodeByNodeKey(mapRequest.NodeKey)
if !ok {
return types.NodeView{}, NewHTTPError(http.StatusNotFound, "node not found", nil)
}
// Validate that the MachineKey in the Noise session matches the one associated with the NodeKey.
if ns.machineKey != nv.MachineKey() {
return types.NodeView{}, NewHTTPError(http.StatusNotFound, "node key in request does not match the one associated with this machine key", nil)
}
return nv, nil
}
================================================
FILE: hscontrol/noise_test.go
================================================
package hscontrol
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/go-chi/chi/v5"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"tailscale.com/tailcfg"
)
// newNoiseRouterWithBodyLimit builds a chi router with the same body-limit
// middleware used in the real Noise router but wired to a test handler that
// captures the io.ReadAll result. This lets us verify the limit without
// needing a full Headscale instance.
func newNoiseRouterWithBodyLimit(readBody *[]byte, readErr *error) http.Handler {
r := chi.NewRouter()
r.Use(func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
r.Body = http.MaxBytesReader(w, r.Body, noiseBodyLimit)
next.ServeHTTP(w, r)
})
})
handler := func(w http.ResponseWriter, r *http.Request) {
*readBody, *readErr = io.ReadAll(r.Body)
if *readErr != nil {
http.Error(w, "body too large", http.StatusRequestEntityTooLarge)
return
}
w.WriteHeader(http.StatusOK)
}
r.Post("/machine/map", handler)
r.Post("/machine/register", handler)
return r
}
func TestNoiseBodyLimit_MapEndpoint(t *testing.T) {
t.Parallel()
t.Run("normal_map_request", func(t *testing.T) {
t.Parallel()
var body []byte
var readErr error
router := newNoiseRouterWithBodyLimit(&body, &readErr)
mapReq := tailcfg.MapRequest{Version: 100, Stream: true}
payload, err := json.Marshal(mapReq)
require.NoError(t, err)
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/map", bytes.NewReader(payload))
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.NoError(t, readErr)
assert.Equal(t, http.StatusOK, rec.Code)
assert.Len(t, body, len(payload))
})
t.Run("oversized_body_rejected", func(t *testing.T) {
t.Parallel()
var body []byte
var readErr error
router := newNoiseRouterWithBodyLimit(&body, &readErr)
oversized := bytes.Repeat([]byte("x"), int(noiseBodyLimit)+1)
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/map", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.Error(t, readErr)
assert.Equal(t, http.StatusRequestEntityTooLarge, rec.Code)
assert.LessOrEqual(t, len(body), int(noiseBodyLimit))
})
}
func TestNoiseBodyLimit_RegisterEndpoint(t *testing.T) {
t.Parallel()
t.Run("normal_register_request", func(t *testing.T) {
t.Parallel()
var body []byte
var readErr error
router := newNoiseRouterWithBodyLimit(&body, &readErr)
regReq := tailcfg.RegisterRequest{Version: 100}
payload, err := json.Marshal(regReq)
require.NoError(t, err)
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/register", bytes.NewReader(payload))
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.NoError(t, readErr)
assert.Equal(t, http.StatusOK, rec.Code)
assert.Len(t, body, len(payload))
})
t.Run("oversized_body_rejected", func(t *testing.T) {
t.Parallel()
var body []byte
var readErr error
router := newNoiseRouterWithBodyLimit(&body, &readErr)
oversized := bytes.Repeat([]byte("x"), int(noiseBodyLimit)+1)
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/register", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.Error(t, readErr)
assert.Equal(t, http.StatusRequestEntityTooLarge, rec.Code)
assert.LessOrEqual(t, len(body), int(noiseBodyLimit))
})
}
func TestNoiseBodyLimit_AtExactLimit(t *testing.T) {
t.Parallel()
var body []byte
var readErr error
router := newNoiseRouterWithBodyLimit(&body, &readErr)
payload := bytes.Repeat([]byte("a"), int(noiseBodyLimit))
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/map", bytes.NewReader(payload))
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.NoError(t, readErr)
assert.Equal(t, http.StatusOK, rec.Code)
assert.Len(t, body, int(noiseBodyLimit))
}
// TestPollNetMapHandler_OversizedBody calls the real handler with a
// MaxBytesReader-wrapped body to verify it fails gracefully (json decode
// error on truncated data) rather than consuming unbounded memory.
func TestPollNetMapHandler_OversizedBody(t *testing.T) {
t.Parallel()
ns := &noiseServer{}
oversized := bytes.Repeat([]byte("x"), int(noiseBodyLimit)+1)
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/map", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
req.Body = http.MaxBytesReader(rec, req.Body, noiseBodyLimit)
ns.PollNetMapHandler(rec, req)
// Body is truncated → json.Decode fails → httpError returns 500.
assert.Equal(t, http.StatusInternalServerError, rec.Code)
}
// TestRegistrationHandler_OversizedBody calls the real handler with a
// MaxBytesReader-wrapped body to verify it returns an error response
// rather than consuming unbounded memory.
func TestRegistrationHandler_OversizedBody(t *testing.T) {
t.Parallel()
ns := &noiseServer{}
oversized := bytes.Repeat([]byte("x"), int(noiseBodyLimit)+1)
req := httptest.NewRequestWithContext(context.Background(), http.MethodPost, "/machine/register", bytes.NewReader(oversized))
rec := httptest.NewRecorder()
req.Body = http.MaxBytesReader(rec, req.Body, noiseBodyLimit)
ns.RegistrationHandler(rec, req)
// json.Decode returns MaxBytesError → regErr wraps it → handler writes
// a RegisterResponse with the error and then rejectUnsupported kicks in
// for version 0 → returns 400.
assert.Equal(t, http.StatusBadRequest, rec.Code)
}
================================================
FILE: hscontrol/oidc.go
================================================
package hscontrol
import (
"bytes"
"cmp"
"context"
"errors"
"fmt"
"net/http"
"slices"
"strings"
"time"
"github.com/coreos/go-oidc/v3/oidc"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/templates"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"golang.org/x/oauth2"
"zgo.at/zcache/v2"
)
const (
randomByteSize = 16
defaultOAuthOptionsCount = 3
authCacheExpiration = time.Minute * 15
authCacheCleanup = time.Minute * 20
)
var (
errEmptyOIDCCallbackParams = errors.New("empty OIDC callback params")
errNoOIDCIDToken = errors.New("extracting ID token")
errNoOIDCRegistrationInfo = errors.New("registration info not in cache")
errOIDCAllowedDomains = errors.New(
"authenticated principal does not match any allowed domain",
)
errOIDCAllowedGroups = errors.New("authenticated principal is not in any allowed group")
errOIDCAllowedUsers = errors.New(
"authenticated principal does not match any allowed user",
)
errOIDCUnverifiedEmail = errors.New("authenticated principal has an unverified email")
)
// AuthInfo contains both auth ID and verifier information for OIDC validation.
type AuthInfo struct {
AuthID types.AuthID
Verifier *string
Registration bool
}
type AuthProviderOIDC struct {
h *Headscale
serverURL string
cfg *types.OIDCConfig
// authCache holds auth information between
// the auth and the callback steps.
authCache *zcache.Cache[string, AuthInfo]
oidcProvider *oidc.Provider
oauth2Config *oauth2.Config
}
func NewAuthProviderOIDC(
ctx context.Context,
h *Headscale,
serverURL string,
cfg *types.OIDCConfig,
) (*AuthProviderOIDC, error) {
	// Fetch the provider metadata from the issuer's OIDC discovery endpoint.
	oidcProvider, err := oidc.NewProvider(context.Background(), cfg.Issuer) //nolint:contextcheck
if err != nil {
return nil, fmt.Errorf("creating OIDC provider from issuer config: %w", err)
}
oauth2Config := &oauth2.Config{
ClientID: cfg.ClientID,
ClientSecret: cfg.ClientSecret,
Endpoint: oidcProvider.Endpoint(),
RedirectURL: strings.TrimSuffix(serverURL, "/") + "/oidc/callback",
Scopes: cfg.Scope,
}
authCache := zcache.New[string, AuthInfo](
authCacheExpiration,
authCacheCleanup,
)
return &AuthProviderOIDC{
h: h,
serverURL: serverURL,
cfg: cfg,
authCache: authCache,
oidcProvider: oidcProvider,
oauth2Config: oauth2Config,
}, nil
}
func (a *AuthProviderOIDC) AuthURL(authID types.AuthID) string {
return fmt.Sprintf(
"%s/auth/%s",
strings.TrimSuffix(a.serverURL, "/"),
authID.String())
}
func (a *AuthProviderOIDC) AuthHandler(
writer http.ResponseWriter,
req *http.Request,
) {
a.authHandler(writer, req, false)
}
func (a *AuthProviderOIDC) RegisterURL(authID types.AuthID) string {
return fmt.Sprintf(
"%s/register/%s",
strings.TrimSuffix(a.serverURL, "/"),
authID.String())
}
// RegisterHandler starts the OIDC flow for node registration.
// It caches the auth info so the callback can retrieve it using the oidc state param.
// Listens on /register/:auth_id.
func (a *AuthProviderOIDC) RegisterHandler(
writer http.ResponseWriter,
req *http.Request,
) {
a.authHandler(writer, req, true)
}
// authHandler takes an incoming request that needs to be authenticated and
// validates and prepares it for the OIDC flow.
func (a *AuthProviderOIDC) authHandler(
writer http.ResponseWriter,
req *http.Request,
registration bool,
) {
authID, err := authIDFromRequest(req)
if err != nil {
httpError(writer, err)
return
}
	// Set the state cookie to protect against CSRF attacks
state, err := setCSRFCookie(writer, req, "state")
if err != nil {
httpError(writer, err)
return
}
	// Set the nonce cookie to protect against ID token replay attacks
nonce, err := setCSRFCookie(writer, req, "nonce")
if err != nil {
httpError(writer, err)
return
}
registrationInfo := AuthInfo{
AuthID: authID,
Registration: registration,
}
extras := make([]oauth2.AuthCodeOption, 0, len(a.cfg.ExtraParams)+defaultOAuthOptionsCount)
// Add PKCE verification if enabled
if a.cfg.PKCE.Enabled {
verifier := oauth2.GenerateVerifier()
registrationInfo.Verifier = &verifier
extras = append(extras, oauth2.AccessTypeOffline)
switch a.cfg.PKCE.Method {
case types.PKCEMethodS256:
extras = append(extras, oauth2.S256ChallengeOption(verifier))
case types.PKCEMethodPlain:
// oauth2 does not have a plain challenge option, so we add it manually
extras = append(extras, oauth2.SetAuthURLParam("code_challenge_method", "plain"), oauth2.SetAuthURLParam("code_challenge", verifier))
}
}
// Add any extra parameters from configuration
for k, v := range a.cfg.ExtraParams {
extras = append(extras, oauth2.SetAuthURLParam(k, v))
}
extras = append(extras, oidc.Nonce(nonce))
// Cache the registration info
a.authCache.Set(state, registrationInfo)
authURL := a.oauth2Config.AuthCodeURL(state, extras...)
log.Debug().Caller().Msgf("redirecting to %s for authentication", authURL)
http.Redirect(writer, req, authURL, http.StatusFound)
}
// OIDCCallbackHandler handles the callback from the OIDC endpoint.
// It retrieves the auth info from the state cache and adds the node to the
// user identified by the verified claims.
// TODO: A confirmation page for new nodes should be added to avoid phishing vulnerabilities
// TODO: Add groups information from OIDC tokens into node HostInfo
// Listens on /oidc/callback.
func (a *AuthProviderOIDC) OIDCCallbackHandler(
writer http.ResponseWriter,
req *http.Request,
) {
code, state, err := extractCodeAndStateParamFromRequest(req)
if err != nil {
httpError(writer, err)
return
}
stateCookieName := getCookieName("state", state)
cookieState, err := req.Cookie(stateCookieName)
if err != nil {
httpError(writer, NewHTTPError(http.StatusBadRequest, "state not found", err))
return
}
if state != cookieState.Value {
httpError(writer, NewHTTPError(http.StatusForbidden, "state did not match", nil))
return
}
oauth2Token, err := a.getOauth2Token(req.Context(), code, state)
if err != nil {
httpError(writer, err)
return
}
idToken, err := a.extractIDToken(req.Context(), oauth2Token)
if err != nil {
httpError(writer, err)
return
}
if idToken.Nonce == "" {
httpError(writer, NewHTTPError(http.StatusBadRequest, "nonce not found in IDToken", err))
return
}
nonceCookieName := getCookieName("nonce", idToken.Nonce)
nonce, err := req.Cookie(nonceCookieName)
if err != nil {
httpError(writer, NewHTTPError(http.StatusBadRequest, "nonce not found", err))
return
}
if idToken.Nonce != nonce.Value {
httpError(writer, NewHTTPError(http.StatusForbidden, "nonce did not match", nil))
return
}
nodeExpiry := a.determineNodeExpiry(idToken.Expiry)
var claims types.OIDCClaims
if err := idToken.Claims(&claims); err != nil { //nolint:noinlineerr
httpError(writer, fmt.Errorf("decoding ID token claims: %w", err))
return
}
// Fetch user information (email, groups, name, etc) from the userinfo endpoint
// https://openid.net/specs/openid-connect-core-1_0.html#UserInfo
var userinfo *oidc.UserInfo
userinfo, err = a.oidcProvider.UserInfo(req.Context(), oauth2.StaticTokenSource(oauth2Token))
if err != nil {
util.LogErr(err, "could not get userinfo; only using claims from id token")
}
// The oidc.UserInfo type only decodes some fields (Subject, Profile, Email, EmailVerified).
// We are interested in other fields too (e.g. groups are required for allowedGroups) so we
// decode into our own OIDCUserInfo type using the underlying claims struct.
var userinfo2 types.OIDCUserInfo
if userinfo != nil && userinfo.Claims(&userinfo2) == nil && userinfo2.Sub == claims.Sub {
// Update the user with the userinfo claims (with id token claims as fallback).
// TODO(kradalby): there might be more interesting fields here that we have not found yet.
claims.Email = cmp.Or(userinfo2.Email, claims.Email)
claims.EmailVerified = cmp.Or(userinfo2.EmailVerified, claims.EmailVerified)
claims.Username = cmp.Or(userinfo2.PreferredUsername, claims.Username)
claims.Name = cmp.Or(userinfo2.Name, claims.Name)
claims.ProfilePictureURL = cmp.Or(userinfo2.Picture, claims.ProfilePictureURL)
if userinfo2.Groups != nil {
claims.Groups = userinfo2.Groups
}
	} else {
		// The userinfo endpoint was unavailable, its claims failed to decode,
		// or they belonged to a different subject; fall back to the id token claims.
		log.Debug().Caller().Msg("not using userinfo claims; only using claims from id token")
	}
// The user claims are now updated from the userinfo endpoint so we can verify the user
// against allowed emails, email domains, and groups.
err = doOIDCAuthorization(a.cfg, &claims)
if err != nil {
httpError(writer, err)
return
}
user, _, err := a.createOrUpdateUserFromClaim(&claims)
if err != nil {
log.Error().
Err(err).
Caller().
Msgf("could not create or update user")
writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
writer.WriteHeader(http.StatusInternalServerError)
_, werr := writer.Write([]byte("Could not create or update user"))
if werr != nil {
log.Error().
Caller().
Err(werr).
Msg("Failed to write HTTP response")
}
return
}
// TODO(kradalby): Is this comment right?
// If the node exists, then the node should be reauthenticated,
// if the node does not exist, and the machine key exists, then
// this is a new node that should be registered.
authInfo := a.getAuthInfoFromState(state)
if authInfo == nil {
log.Debug().Caller().Str("state", state).Msg("state not found in cache, login session may have expired")
httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", nil))
return
}
// If this is a registration flow, then we need to register the node.
if authInfo.Registration {
newNode, err := a.handleRegistration(user, authInfo.AuthID, nodeExpiry)
if err != nil {
if errors.Is(err, db.ErrNodeNotFoundRegistrationCache) {
log.Debug().Caller().Str("auth_id", authInfo.AuthID.String()).Msg("registration session expired before authorization completed")
httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", err))
return
}
httpError(writer, err)
return
}
content := renderRegistrationSuccessTemplate(user, newNode)
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
if _, err := writer.Write(content.Bytes()); err != nil { //nolint:noinlineerr
util.LogErr(err, "Failed to write HTTP response")
}
return
}
	// If this is not a registration callback, then it's a regular authentication callback
// and we need to send a response and confirm that the access was allowed.
authReq, ok := a.h.state.GetAuthCacheEntry(authInfo.AuthID)
if !ok {
log.Debug().Caller().Str("auth_id", authInfo.AuthID.String()).Msg("auth session expired before authorization completed")
httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", nil))
return
}
// Send a finish auth verdict with no errors to let the CLI know that the authentication was successful.
authReq.FinishAuth(types.AuthVerdict{})
content := renderAuthSuccessTemplate(user)
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
if _, err := writer.Write(content.Bytes()); err != nil { //nolint:noinlineerr
util.LogErr(err, "Failed to write HTTP response")
}
}
func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time {
if a.cfg.UseExpiryFromToken {
return idTokenExpiration
}
return time.Now().Add(a.cfg.Expiry)
}
func extractCodeAndStateParamFromRequest(
req *http.Request,
) (string, string, error) {
code := req.URL.Query().Get("code")
state := req.URL.Query().Get("state")
if code == "" || state == "" {
return "", "", NewHTTPError(http.StatusBadRequest, "missing code or state parameter", errEmptyOIDCCallbackParams)
}
return code, state, nil
}
// getOauth2Token exchanges the code from the callback for an oauth2 token.
func (a *AuthProviderOIDC) getOauth2Token(
ctx context.Context,
code string,
state string,
) (*oauth2.Token, error) {
var exchangeOpts []oauth2.AuthCodeOption
if a.cfg.PKCE.Enabled {
regInfo, ok := a.authCache.Get(state)
if !ok {
return nil, NewHTTPError(http.StatusNotFound, "registration not found", errNoOIDCRegistrationInfo)
}
if regInfo.Verifier != nil {
exchangeOpts = []oauth2.AuthCodeOption{oauth2.VerifierOption(*regInfo.Verifier)}
}
}
oauth2Token, err := a.oauth2Config.Exchange(ctx, code, exchangeOpts...)
if err != nil {
return nil, NewHTTPError(http.StatusForbidden, "invalid code", fmt.Errorf("exchanging code for token: %w", err))
}
	return oauth2Token, nil
}
// extractIDToken extracts the ID token from the oauth2 token.
func (a *AuthProviderOIDC) extractIDToken(
ctx context.Context,
oauth2Token *oauth2.Token,
) (*oidc.IDToken, error) {
rawIDToken, ok := oauth2Token.Extra("id_token").(string)
if !ok {
return nil, NewHTTPError(http.StatusBadRequest, "no id_token", errNoOIDCIDToken)
}
verifier := a.oidcProvider.Verifier(&oidc.Config{ClientID: a.cfg.ClientID})
idToken, err := verifier.Verify(ctx, rawIDToken)
if err != nil {
return nil, NewHTTPError(http.StatusForbidden, "failed to verify id_token", fmt.Errorf("verifying ID token: %w", err))
}
return idToken, nil
}
// validateOIDCAllowedDomains checks that if AllowedDomains is provided,
// the authenticated principal's email ends with @ followed by one of the allowed domains.
func validateOIDCAllowedDomains(
allowedDomains []string,
claims *types.OIDCClaims,
) error {
if len(allowedDomains) > 0 {
if at := strings.LastIndex(claims.Email, "@"); at < 0 ||
!slices.Contains(allowedDomains, claims.Email[at+1:]) {
return NewHTTPError(http.StatusUnauthorized, "unauthorised domain", errOIDCAllowedDomains)
}
}
return nil
}
// validateOIDCAllowedGroups checks if AllowedGroups is provided,
// and that the user has one group in the list.
// claims.Groups can be populated by adding a client scope named
// 'groups' that contains group membership.
func validateOIDCAllowedGroups(
allowedGroups []string,
claims *types.OIDCClaims,
) error {
for _, group := range allowedGroups {
if slices.Contains(claims.Groups, group) {
return nil
}
}
return NewHTTPError(http.StatusUnauthorized, "unauthorised group", errOIDCAllowedGroups)
}
// validateOIDCAllowedUsers checks that if AllowedUsers is provided,
// the authenticated principal's email is part of that list.
func validateOIDCAllowedUsers(
allowedUsers []string,
claims *types.OIDCClaims,
) error {
if !slices.Contains(allowedUsers, claims.Email) {
return NewHTTPError(http.StatusUnauthorized, "unauthorised user", errOIDCAllowedUsers)
}
return nil
}
// doOIDCAuthorization applies authorization tests to claims.
//
// The following tests are applied regardless of email verification:
//
// - validateOIDCAllowedGroups
//
// The following tests are applied if cfg.EmailVerifiedRequired=false
// or claims.email_verified=true:
//
// - validateOIDCAllowedDomains
// - validateOIDCAllowedUsers
//
// NOTE that, contrary to the function name, validateOIDCAllowedUsers
// only checks the email address -- not the username.
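//
// For example, with EmailVerifiedRequired=true and AllowedDomains set, a
// token without email_verified=true is rejected before the domain check
// runs; with no allow-lists configured, every authenticated user passes.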
func doOIDCAuthorization(
cfg *types.OIDCConfig,
claims *types.OIDCClaims,
) error {
if len(cfg.AllowedGroups) > 0 {
err := validateOIDCAllowedGroups(cfg.AllowedGroups, claims)
if err != nil {
return err
}
}
trustEmail := !cfg.EmailVerifiedRequired || bool(claims.EmailVerified)
hasEmailTests := len(cfg.AllowedDomains) > 0 || len(cfg.AllowedUsers) > 0
if !trustEmail && hasEmailTests {
return NewHTTPError(http.StatusUnauthorized, "unverified email", errOIDCUnverifiedEmail)
}
if len(cfg.AllowedDomains) > 0 {
err := validateOIDCAllowedDomains(cfg.AllowedDomains, claims)
if err != nil {
return err
}
}
if len(cfg.AllowedUsers) > 0 {
err := validateOIDCAllowedUsers(cfg.AllowedUsers, claims)
if err != nil {
return err
}
}
return nil
}
// getAuthInfoFromState retrieves the cached authentication info for the
// given state, or nil if the state is unknown.
func (a *AuthProviderOIDC) getAuthInfoFromState(state string) *AuthInfo {
authInfo, ok := a.authCache.Get(state)
if !ok {
return nil
}
return &authInfo
}
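// createOrUpdateUserFromClaim looks up the user by their OIDC identifier
// and either creates a new user from the claims or updates the existing
// one, returning the user and the resulting state change.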
func (a *AuthProviderOIDC) createOrUpdateUserFromClaim(
claims *types.OIDCClaims,
) (*types.User, change.Change, error) {
var (
user *types.User
err error
newUser bool
c change.Change
)
user, err = a.h.state.GetUserByOIDCIdentifier(claims.Identifier())
if err != nil && !errors.Is(err, db.ErrUserNotFound) {
return nil, change.Change{}, fmt.Errorf("creating or updating user: %w", err)
}
// if the user is still not found, create a new empty user.
// TODO(kradalby): This context is not inherited from the request, which is probably not ideal.
// However, we need a context to use the OIDC provider.
if user == nil {
newUser = true
user = &types.User{}
}
user.FromClaim(claims, a.cfg.EmailVerifiedRequired)
if newUser {
user, c, err = a.h.state.CreateUser(*user)
if err != nil {
return nil, change.Change{}, fmt.Errorf("creating user: %w", err)
}
} else {
_, c, err = a.h.state.UpdateUser(types.UserID(user.ID), func(u *types.User) error {
*u = *user
return nil
})
if err != nil {
return nil, change.Change{}, fmt.Errorf("updating user: %w", err)
}
}
return user, c, nil
}
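// handleRegistration completes node registration for the given
// registration ID: it associates the node with the user, auto-approves
// any eligible routes, and reports whether the node itself changed.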
func (a *AuthProviderOIDC) handleRegistration(
user *types.User,
registrationID types.AuthID,
expiry time.Time,
) (bool, error) {
node, nodeChange, err := a.h.state.HandleNodeFromAuthPath(
registrationID,
types.UserID(user.ID),
&expiry,
util.RegisterMethodOIDC,
)
if err != nil {
return false, fmt.Errorf("registering node: %w", err)
}
// This is a bit of back and forth, but we have a chicken-and-egg
// dependency here: because of the way the policy manager works, the node
// must exist in the database before it can be added to the policy
// manager, and only then can its routes be approved. So the node is
// first added to the database, then added to the policy manager via
// SaveNode (which automatically updates the policy manager), and then
// the routes can be auto-approved. As that only approves the in-memory
// struct, we need to save the node again and ensure we send an update.
// This works, but might be another good candidate for some sort of
// eventbus.
routesChange, err := a.h.state.AutoApproveRoutes(node)
if err != nil {
return false, fmt.Errorf("auto approving routes: %w", err)
}
// Send both changes. Empty changes are ignored by Change().
a.h.Change(nodeChange, routesChange)
return !nodeChange.IsEmpty(), nil
}
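// renderRegistrationSuccessTemplate renders the success page shown after
// a node registers or reauthenticates, with wording depending on whether
// the node is new.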
func renderRegistrationSuccessTemplate(
user *types.User,
newNode bool,
) *bytes.Buffer {
result := templates.AuthSuccessResult{
Title: "Headscale - Node Reauthenticated",
Heading: "Node reauthenticated",
Verb: "Reauthenticated",
User: user.Display(),
Message: "You can now close this window.",
}
if newNode {
result.Title = "Headscale - Node Registered"
result.Heading = "Node registered"
result.Verb = "Registered"
}
return bytes.NewBufferString(templates.AuthSuccess(result).Render())
}
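// renderAuthSuccessTemplate renders the success page shown after an SSH
// session is authorized via OIDC.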
func renderAuthSuccessTemplate(
user *types.User,
) *bytes.Buffer {
result := templates.AuthSuccessResult{
Title: "Headscale - SSH Session Authorized",
Heading: "SSH session authorized",
Verb: "Authorized",
User: user.Display(),
Message: "You may return to your terminal.",
}
return bytes.NewBufferString(templates.AuthSuccess(result).Render())
}
// getCookieName generates a unique cookie name based on a cookie value.
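// For example, getCookieName("state", "abc123xyz") returns "state_abc123".
// Note this assumes value is at least six characters long; a shorter
// value would panic on the slice.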
func getCookieName(baseName, value string) string {
return fmt.Sprintf("%s_%s", baseName, value[:6])
}
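// setCSRFCookie generates a random CSRF token, stores it in an HttpOnly
// cookie scoped to the OIDC callback path, and returns the token value.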
func setCSRFCookie(w http.ResponseWriter, r *http.Request, name string) (string, error) {
val, err := util.GenerateRandomStringURLSafe(64)
if err != nil {
return val, err
}
c := &http.Cookie{
Path: "/oidc/callback",
Name: getCookieName(name, val),
Value: val,
MaxAge: int(time.Hour.Seconds()),
Secure: r.TLS != nil,
HttpOnly: true,
}
http.SetCookie(w, c)
return val, nil
}
================================================
FILE: hscontrol/oidc_template_test.go
================================================
package hscontrol
import (
"testing"
"github.com/juanfont/headscale/hscontrol/templates"
"github.com/stretchr/testify/assert"
)
func TestAuthSuccessTemplate(t *testing.T) {
tests := []struct {
name string
result templates.AuthSuccessResult
}{
{
name: "node_registered",
result: templates.AuthSuccessResult{
Title: "Headscale - Node Registered",
Heading: "Node registered",
Verb: "Registered",
User: "newuser@example.com",
Message: "You can now close this window.",
},
},
{
name: "node_reauthenticated",
result: templates.AuthSuccessResult{
Title: "Headscale - Node Reauthenticated",
Heading: "Node reauthenticated",
Verb: "Reauthenticated",
User: "test@example.com",
Message: "You can now close this window.",
},
},
{
name: "ssh_session_authorized",
result: templates.AuthSuccessResult{
Title: "Headscale - SSH Session Authorized",
Heading: "SSH session authorized",
Verb: "Authorized",
User: "test@example.com",
Message: "You may return to your terminal.",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
html := templates.AuthSuccess(tt.result).Render()
// Verify the HTML contains expected structural elements
assert.Contains(t, html, "")
assert.Contains(t, html, ""+tt.result.Title+"")
assert.Contains(t, html, tt.result.Heading)
assert.Contains(t, html, tt.result.Verb+" as ")
assert.Contains(t, html, tt.result.User)
assert.Contains(t, html, tt.result.Message)
// Verify Material for MkDocs design system CSS is present
assert.Contains(t, html, "Material for MkDocs")
assert.Contains(t, html, "Roboto")
assert.Contains(t, html, ".md-typeset")
// Verify SVG elements are present
assert.Contains(t, html, "