[
  {
    "path": ".editorconfig",
    "content": "[*]\nend_of_line = lf\n\n[caddytest/integration/caddyfile_adapt/*.caddyfiletest]\nindent_style = tab"
  },
  {
    "path": ".gitattributes",
    "content": "*.go text eol=lf"
  },
  {
    "path": ".github/CONTRIBUTING.md",
    "content": "Contributing to Caddy\n=====================\n\nWelcome! Thank you for choosing to be a part of our community. Caddy wouldn't be nearly as excellent without your involvement!\n\nFor starters, we invite you to join [the Caddy forum](https://caddy.community) where you can hang out with other Caddy users and developers.\n\n## Common Tasks\n\n- [Contributing code](#contributing-code)\n- [Writing a Caddy module](#writing-a-caddy-module)\n- [Asking or answering questions for help using Caddy](#getting-help-using-caddy)\n- [Reporting a bug](#reporting-bugs)\n- [Suggesting an enhancement or a new feature](#suggesting-features)\n- [Improving documentation](#improving-documentation)\n\nOther menu items:\n\n- [Values](#values)\n- [Coordinated Disclosure](#coordinated-disclosure)\n- [Thank You](#thank-you)\n\n\n### Contributing code\n\nYou can have a huge impact on the project by helping with its code. To contribute code to Caddy, first submit or comment in an issue to discuss your contribution, then open a [pull request](https://github.com/caddyserver/caddy/pulls) (PR). If you're new to our community, that's okay: **we gladly welcome pull requests from anyone, regardless of your native language or coding experience.** You can get familiar with Caddy's code base by using [code search at Sourcegraph](https://sourcegraph.com/github.com/caddyserver/caddy).\n\nWe hold contributions to a high standard for quality :bowtie:, so don't be surprised if we ask for revisions&mdash;even if it seems small or insignificant. Please don't take it personally. :blue_heart: If your change is on the right track, we can guide you to make it mergeable.\n\nHere are some of the expectations we have of contributors:\n\n- **Open an issue to propose your change first.** This way we can avoid confusion, coordinate what everyone is working on, and ensure that any changes are in-line with the project's goals and the best interests of its users. 
We can also discuss the best possible implementation. If there's already an issue about it, comment on the existing issue to claim it. A lot of valuable time can be saved by discussing a proposal first.\n\n- **Keep pull requests small.** Smaller PRs are more likely to be merged because they are easier to review! We might ask you to break up large PRs into smaller ones. [An example of what we want to avoid.](https://twitter.com/iamdevloper/status/397664295875805184)\n\n- **Keep related commits together in a PR.** We do want pull requests to be small, but you should also keep multiple related commits in the same PR if they rely on each other.\n\n- **Write tests.** Good, automated tests are very valuable! Written properly, they ensure your change works, and that other changes in the future won't break your change. CI checks should pass.\n\n- **Benchmarks should be included for optimizations.** Optimizations sometimes make code harder to read or have changes that are less than obvious. They should be proven with benchmarks and profiling.\n\n- **[Squash](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html) insignificant commits.** Every commit should be significant. Commits which merely rewrite a comment or fix a typo can be combined into another commit that has more substance. Interactive rebase can do this, or a simpler way is `git reset --soft <diverging-commit>` then `git commit -s`.\n\n- **Be responsible for and maintain your contributions.** Caddy is a growing project, and it's much better when individual contributors help maintain their change after it is merged.\n\n- **Use comments properly.** We expect good godoc comments for package-level functions, types, and values. Comments are also useful whenever the purpose for a line of code is not obvious.\n\n- **Pull requests may still get closed.** The longer a PR stays open and idle, the more likely it is to be closed. 
If we haven't reviewed it in a while, it probably means the change is not a priority. Please don't take this personally, we're trying to balance a lot of tasks! If nobody else has commented or reacted to the PR, it likely means your change is useful only to you. The reality is this happens quite a lot. We don't tend to accept PRs that aren't generally helpful. For these reasons or others, the PR may get closed even after a review. We are not obligated to accept all proposed changes, even if the best justification we can give is something vague like, \"It doesn't sit right.\" Sometimes PRs are just the wrong thing or the wrong time. Because it is open source, you can always build your own modified version of Caddy with a change you need, even if we reject it in the official repo. Plus, because Caddy is extensible, it's possible your feature could make a great plugin instead!\n\n- **You certify that you wrote and comprehend the code you submit.** The Caddy project welcomes original contributions that comply with [our CLA](https://cla-assistant.io/caddyserver/caddy), meaning that authors must be able to certify that they created or have rights to the code they are contributing. In addition, we require that code is not simply copy-pasted from Q/A sites or AI language models without full comprehension and rigorous testing. In other words: contributors are allowed to refer to communities for assistance and use AI tools such as language models for inspiration, but code which originates from or is assisted by these resources MUST be:\n\n\t- Licensed for you to freely share\n\t- Fully comprehended by you (be able to explain every line of code)\n\t- Verified by automated tests when feasible, or thorough manual tests otherwise\n\n\tWe have found that current language models (LLMs, like ChatGPT) may understand code syntax and even problem spaces to an extent, but often fail in subtle ways to convey true knowledge and produce correct algorithms. 
Integrated tools such as GitHub Copilot and Sourcegraph Cody may be used for inspiration, but code generated by these tools still needs to meet our criteria for licensing, human comprehension, and testing. These tools may be used to help write code comments and tests as long as you can certify they are accurate and correct. Note that it is often more trouble than it's worth to certify that Copilot (for example) is not giving you code that is possibly plagiarised, unlicensed, or licensed with incompatible terms&mdash;as the Caddy project cannot accept such contributions. If that's too difficult for you (or impossible), then we recommend using these resources only for inspiration and writing your own code. Ultimately, you (the contributor) are responsible for the code you're submitting.\n\n\tAs a courtesy to reviewers, we kindly ask that you disclose when contributing code that was generated by an AI tool or copied from another website so we can be aware of what to look for in code review.\n\nWe often grant [collaborator status](#collaborator-instructions) to contributors who author one or more significant, high-quality PRs that are merged into the code base.\n\n\n#### HOW TO MAKE A PULL REQUEST TO CADDY\n\nContributing to Go projects on GitHub is fun and easy. After you have proposed your change in an issue, we recommend the following workflow:\n\n1. [Fork this repo](https://github.com/caddyserver/caddy). This makes a copy of the code you can write to.\n\n2. If you don't already have this repo (caddyserver/caddy.git) on your computer, clone it down: `git clone https://github.com/caddyserver/caddy.git`\n\n3. Tell git that it can push the caddyserver/caddy.git repo to your fork by adding a remote: `git remote add myfork https://github.com/<your-username>/caddy.git`\n\n4. Make your changes in the caddyserver/caddy.git repo on your computer.\n\n5. Push your changes to your fork: `git push myfork`\n\n6. 
[Create a pull request](https://github.com/caddyserver/caddy/pull/new/master) to merge your changes into caddyserver/caddy @ master. (Click \"compare across forks\" and change the head fork.)\n\nThis workflow is nice because you don't have to change import paths. You can get fancier by using different branches if you want.\n\n\n### Writing a Caddy module\n\nCaddy can do more with modules! Anyone can write one. Caddy modules are Go libraries that get compiled into Caddy, extending its feature set. They can add directives to the Caddyfile, add new configuration adapters, and even implement new server types (e.g. HTTP, DNS).\n\n[Learn how to write a module here](https://caddyserver.com/docs/extending-caddy). You should also share and discuss your module idea [on the forums](https://caddy.community) to have people test it out. We don't use the Caddy issue tracker for third-party modules.\n\n\n### Getting help using Caddy\n\nIf you have a question about using Caddy, [ask on our forum](https://caddy.community)! There will be more people there who can help you than just the Caddy developers who follow our issue tracker. Issues are not the place for usage questions.\n\nMany people on the forums could benefit from your experience and expertise, too. Once you've been helped, consider giving back by answering other people's questions and participating in other discussions.\n\n\n### Reporting bugs\n\nLike all software, Caddy has its flaws. If you find one, [search the issues](https://github.com/caddyserver/caddy/issues) to see if it has already been reported. If not, [open a new issue](https://github.com/caddyserver/caddy/issues/new) and describe the bug, and somebody will look into it! (This repository is only for Caddy and its standard modules.)\n\n**You can help us fix bugs!** Speed up the patch by identifying the bug in the code. This can sometimes be done by adding `fmt.Println()` statements (or similar) in relevant code paths to narrow down where the problem may be. 
It's a good way to [introduce yourself to the Go language](https://tour.golang.org), too.\n\nWe may reply with an issue template. Please follow the template so we have all the needed information. Unredacted&mdash;yes, actual values matter. We need to be able to repeat the bug using your instructions. Please simplify the issue as much as possible. If you don't, we might close your report. The burden is on you to make it easily reproducible and to convince us that it is actually a bug in Caddy. This is easiest to do when you write clear, concise instructions so we can reproduce the behavior (even if it seems obvious). The more detailed and specific you are, the faster we will be able to help you!\n\nWe suggest reading [How to Report Bugs Effectively](http://www.chiark.greenend.org.uk/~sgtatham/bugs.html).\n\nPlease be kind. :smile: Remember that Caddy comes at no cost to you, and you're getting free support when we fix your issues. If we helped you, please consider helping someone else!\n\n#### Bug reporting expectations\n\nMaintainers&mdash;or more generally, developers&mdash;need three things to act on bugs:\n\n1. To agree or be convinced that it's a bug (reporter's responsibility).\n\t- A bug is unintentional, undesired, or surprising behavior which violates documentation or relevant spec. It might be either a mistake in the documentation or a bug in the code.\n\t- This project usually does not work around bugs in other software, systems, and dependencies; instead, we recommend that those bugs are fixed at their source. This sometimes means we close issues or reject PRs that attempt to fix, work around, or hide bugs in other projects.\n\n2. To be able to understand what is happening (mostly reporter's responsibility).\n\t- If the reporter can provide satisfactory instructions such that a developer can reproduce the bug, the developer will likely be able to understand the bug, write a test case, and implement a fix. 
This is the least amount of work for everyone and the path to the fastest resolution.\n\t- Otherwise, the burden is on the reporter to test possible solutions. This is less preferable because it lengthens the feedback loop, slows down debugging efforts, obscures the true nature of the problem from the developers, and is unlikely to result in new test cases.\n\n3. A solution, or ideas toward a solution (mostly maintainer's responsibility).\n\t- Sometimes the best solution is a documentation change.\n\t- Usually the developers have the best domain knowledge for inventing a solution, but reporters may have ideas or preferences for how they would like the software to work.\n\t- Security, correctness, and project goals/vision all take priority over a user's preferences.\n\t- It's simply good business to yield a solution that satisfies the users, and it's even better business to leave them impressed.\n\nThus, at the very least, the reporter is expected to:\n\n1. Convince the reader that it's a bug in Caddy (if it's not obvious).\n2. Reduce the problem down to the minimum specific steps required to reproduce it.\n\nThe maintainer is usually able to do the rest; but of course the reporter may invest additional effort to speed up the process.\n\n\n\n### Suggesting features\n\nFirst, [search to see if your feature has already been requested](https://github.com/caddyserver/caddy/issues). If it has, you can add a :+1: reaction to vote for it. If your feature idea is new, open an issue to request the feature. Please describe your idea thoroughly so that we know how to implement it! Really vague requests may not be helpful or actionable and, without clarification, will have to be closed.\n\nWhile we really do value your requests and implement many of them, not all features are a good fit for Caddy. Most of those [make good modules](#writing-a-caddy-module), which can be made by anyone! 
But if a feature is not in the best interest of the Caddy project or its users in general, we may politely decline to implement it into Caddy core. Additionally, some features are bad ideas altogether (for either obvious or non-obvious reasons) which may be rejected. We'll try to explain why we reject a feature, but sometimes the best we can do is, \"It's not a good fit for the project.\"\n\n\n### Improving documentation\n\nCaddy's documentation is available at [https://caddyserver.com/docs](https://caddyserver.com/docs) and its source is in the [website repo](https://github.com/caddyserver/website). If you would like to make a fix to the docs, please submit an issue there describing the change to make.\n\nNote that third-party module documentation is not hosted by the Caddy website, other than basic usage examples. They are managed by the individual module authors, and you will have to contact them to change their documentation.\n\nOur documentation is scoped to the Caddy project only: it is not for describing how other software or systems work, even if they relate to Caddy or web servers. That kind of content [can be found in our community wiki](https://caddy.community/c/wiki/13), however.\n\n## Collaborator Instructions\n\nCollaborators have push rights to the repository. We grant this permission after one or more successful, high-quality PRs are merged! We thank them for their help. The expectations we have of collaborators are:\n\n- **Help review pull requests.** Be meticulous, but also kind. We love our contributors, but we critique the contribution to make it better. Multiple, thorough reviews make for the best contributions! 
Here are some questions to consider:\n\t- Can the change be made more elegant?\n\t- Is this a maintenance burden?\n\t- What assumptions does the code make?\n\t- Is it well-tested?\n\t- Is the change a good fit for the project?\n\t- Does it actually fix the problem or is it creating a special case instead?\n\t- Does the change incur any new dependencies? (Avoid these!)\n\n- **Answer issues.** If every collaborator helped out with issues, we could count the number of open issues on two hands. This means getting involved in the discussion, investigating the code, and yes, debugging it. It's fun. Really! :smile: Please, please help with open issues. Granted, some issues need to be done before others. And of course some are larger than others: you don't have to do it all yourself. Work with other collaborators as a team!\n\n- **Do not merge pull requests until they have been approved by one or two other collaborators.** If a project owner approves the PR, it can be merged (as long as the conversation has finished too).\n\n- **Prefer squashed commits over a messy merge.** If there are many little commits, please [squash the commits](https://stackoverflow.com/a/11732910/1048862) so we don't clutter the commit history.\n\n- **Don't accept new dependencies lightly.** Dependencies can make the world crash and burn, but they are sometimes necessary. Choose carefully. Extremely small dependencies (a few lines of code) can be inlined. The rest may not be needed. For those that are, Caddy uses [go modules](https://github.com/golang/go/wiki/Modules). All external dependencies must be installed as modules, and _Caddy must not export any types defined by those dependencies_. 
Check this diligently!\n\n- **Be extra careful in some areas of the code.** There are some critical areas in the Caddy code base that we review extra meticulously: the `caddyhttp` and `caddytls` packages especially.\n\n- **Make sure tests test the actual thing.** Double-check that the tests fail without the change, and pass with it. It's important that they assert what they're purported to assert.\n\n- **Recommended reading**\n\t- [CodeReviewComments](https://github.com/golang/go/wiki/CodeReviewComments) for an idea of what we look for in good, clean Go code\n\t- [Linus Torvalds describes a good commit message](https://gist.github.com/matthewhudson/1475276)\n\t- [Best Practices for Maintainers](https://opensource.guide/best-practices/)\n\t- [Shrinking Code Review](https://alexgaynor.net/2015/dec/29/shrinking-code-review/)\n\n\n\n## Values (WIP)\n\n- A person is always more important than code. People don't like being handled \"efficiently\". But we can still process issues and pull requests efficiently while being kind, patient, and considerate.\n\n- The ends justify the means, if the means are good. A good tree won't produce bad fruit. But if we cut corners or are hasty in our process, the end result will not be good.\n\n\n## Security Policy\n\nIf you think you've found a security vulnerability, please refer to our [Security Policy](https://github.com/caddyserver/caddy/security/policy) document.\n\n\n## Thank you\n\nThanks for your help! Caddy would not be what it is today without your contributions.\n"
  },
  {
    "path": ".github/FUNDING.yml",
    "content": "# These are supported funding model platforms\n\ngithub: [mholt] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]\npatreon: # Replace with a single Patreon username\nopen_collective: # Replace with a single Open Collective username\nko_fi: # Replace with a single Ko-fi username\ntidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel\ncommunity_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry\nliberapay: # Replace with a single Liberapay username\nissuehunt: # Replace with a single IssueHunt username\notechie: # Replace with a single Otechie username\ncustom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/ISSUE.yml",
    "content": "name: Issue\ndescription: An actionable development item, like a bug report or feature request\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Thank you for opening an issue! This is for actionable development items like bug reports and feature requests.\n        If you have a question about using Caddy, please [post on our forums](https://caddy.community) instead.\n  - type: textarea\n    id: content\n    attributes:\n      label: Issue Details\n      placeholder: Describe the issue here. Be specific by providing complete logs and minimal instructions to reproduce, or a thoughtful proposal, etc.\n    validations:\n      required: true\n  - type: dropdown\n    id: assistance-disclosure\n    attributes:\n      label: Assistance Disclosure\n      description: \"Our project allows assistance by AI/LLM tools as long as it is disclosed and described so we can better respond. Please certify whether you have used any such tooling related to this issue:\"\n      options:\n        - \n        - AI used\n        - AI not used\n    validations:\n      required: true\n  - type: input\n    id: assistance-description\n    attributes:\n      label: If AI was used, describe the extent to which it was used.\n      description: 'Examples: \"ChatGPT translated from my native language\" or \"Claude proposed this change/feature\"'\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: false\ncontact_links:\n  - name: Caddy forum\n    url: https://caddy.community\n    about: If you have questions (or answers!) about using Caddy, please use our forum"
  },
  {
    "path": ".github/SECURITY.md",
    "content": "# Security Policy\n\nThe Caddy project would like to make sure that it stays on top of all relevant and practically-exploitable vulnerabilities.\n\n\n## Supported Versions\n\n| Version     | Supported |\n| ----------- | ----------|\n| 2.latest    | ✔️        |\n| < 2.latest  | :x:       |\n\n\n## Acceptable Scope\n\nA security report must demonstrate a security bug in the source code from this repository.\n\nSome security problems are the result of interplay between different components of the Web, rather than a vulnerability in the web server itself. Please only report vulnerabilities in the web server itself, as we cannot coerce the rest of the Web to be fixed (for example, we do not consider IP spoofing, BGP hijacks, or missing/misconfigured HTTP headers a vulnerability in the Caddy web server).\n\nVulnerabilities caused by misconfigurations are out of scope. Yes, it is entirely possible to craft and use a configuration that is unsafe, just like with every other web server; we recommend against doing that. Similarly, external misconfigurations are out of scope. For example, an open or forwarded port from a public network to a Caddy instance intended to serve only internal clients is not a vulnerability in Caddy.\n\nWe do not accept reports if the steps imply or require a compromised system or third-party software, as we cannot control those. We expect that users secure their own systems and keep all their software patched. For example, if untrusted users are able to upload/write/host arbitrary files in the web root directory, it is NOT a security bug in Caddy if those files get served to clients; however, it _would_ be a valid report if a bug in Caddy's source code unintentionally gave unauthorized users the ability to upload unsafe files or delete files without relying on an unpatched system or piece of software.\n\nClient-side exploits are out of scope. 
In other words, it is not a bug in Caddy if the web browser does something unsafe, even if the downloaded content was served by Caddy. (Those kinds of exploits can generally be mitigated by proper configuration of HTTP headers.) As a general rule, the content served by Caddy is not considered in scope because content is configurable by the site owner or the associated web application.\n\nSecurity bugs in code dependencies (including Go's standard library) are out of scope. Instead, if a dependency has patched a relevant security bug, please feel free to open a public issue or pull request to update that dependency in our code.\n\nWe accept security reports and patches, but do not assign CVEs, for code that has not been released with a non-prerelease tag.\n\n\n## Reporting a Vulnerability\n\nWe get a lot of difficult reports that turn out to be invalid. Clear, obvious reports tend to be the most credible (but are also rare).\n\nFirst please ensure your report falls within the accepted scope of security bugs (above).\n\n:warning: **YOU MUST DISCLOSE WHETHER YOU USED LLMs (\"AI\") IN ANY WAY.** Whether you are using AI for discovery, as part of writing the report or its replies, and/or testing or validating proofs and changes, we require you to mention the extent of it. **FAILURE TO INCLUDE A DISCLOSURE EVEN IF YOU DO NOT USE AI MAY LEAD TO IMMEDIATE DISMISSAL OF YOUR REPORT AND POTENTIAL BLOCKLISTING.** We will not waste our time chatting with bots. But if you're a human, pull up a chair and we'll drink some chocolate milk.\n\nWe'll need enough information to verify the bug and make a patch. 
To speed things up, please include:\n\n- Most minimal possible config (without redactions!)\n- Command(s)\n- Precise HTTP requests (`curl -v` and its output please)\n- Full log output (please enable debug mode)\n- Specific minimal steps to reproduce the issue from scratch\n- A working patch\n\nPlease DO NOT use containers, VMs, cloud instances or services, or any other complex infrastructure in your steps. Always prefer `curl -v` instead of web browsers.\n\nWe consider publicly-registered domain names to be public information. This is necessary in order to maintain the integrity of certificate transparency, public DNS, and other public trust systems. Do not redact domain names from your reports. The actual content of your domain name affects Caddy's behavior, so we need the exact domain name(s) to reproduce with, or your report will be ignored.\n\nIt will speed things up if you suggest a working patch, such as a code diff, and explain why and how it works. Reports that are not actionable, do not contain enough information, are too pushy/demanding, or are not able to convince us that it is a viable and practical attack on the web server itself may be deferred to a later time or possibly ignored, depending on available resources. Priority will be given to credible, responsible reports that are constructive, specific, and actionable. (We get a lot of invalid reports.) Thank you for understanding.\n\nWhen you are ready, please submit a [new private vulnerability report](https://github.com/caddyserver/caddy/security/advisories/new).\n\nPlease don't encrypt the message. It only makes the process more complicated.\n\nPlease also understand that due to our nature as an open source project, we do not have a budget to award security bounties. We can only thank you.\n\nIf your report is valid and a patch is released, we will not reveal your identity by default. If you wish to be credited, please give us the name to use and/or your GitHub username. 
If you don't provide this, we can't credit you.\n\nThanks for responsibly helping Caddy&mdash;and thousands of websites&mdash;be more secure!\n
  },
  {
    "path": ".github/dependabot.yml",
    "content": "---\nversion: 2\nupdates:\n  - package-ecosystem: \"github-actions\"\n    directory: \"/\"\n    open-pull-requests-limit: 1\n    groups:\n      actions-deps:\n        patterns:\n          - \"*\"\n    schedule:\n      interval: \"monthly\"\n\n  - package-ecosystem: \"gomod\"\n    directory: \"/\"\n    open-pull-requests-limit: 1\n    groups:\n      all-updates:\n        patterns:\n          - \"*\"\n    schedule:\n      interval: \"monthly\"\n"
  },
  {
    "path": ".github/pull_request_template.md",
    "content": "\n\n\n## Assistance Disclosure\n<!--\nThank you for contributing! Please note:\n\nThe use of AI/LLM tools is allowed so long as it is disclosed, so\nthat we can provide better code review and maintain project quality.\n\nIf you used AI/LLM tooling in any way related to this PR, please\nlet us know to what extent it was utilized.\n\nExamples:\n\n\"No AI was used.\"\n\"I wrote the code, but Claude generated the tests.\"\n\"I consulted ChatGPT for a solution, but I authored/coded it myself.\"\n\"Cody generated the code, and I verified it is correct.\"\n\"Copilot provided tab completion for code and comments.\"\n\nWe expect that you have vetted your contributions for correctness.\nAdditionally, signing our CLA certifies that you have the rights to\ncontribute this change.\n\nReplace the text below with your disclosure:\n-->\n\n_This PR is missing an assistance disclosure._\n"
  },
  {
    "path": ".github/workflows/ai.yml",
    "content": "name: AI Moderator\npermissions: read-all\non:\n  issues:\n    types: [opened]\n  issue_comment:\n    types: [created]\n  pull_request_review_comment:\n    types: [created]\njobs:\n  spam-detection:\n    runs-on: ubuntu-latest\n    permissions:\n      issues: write\n      pull-requests: write\n      models: read\n      contents: read\n    steps:\n      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd\n      - uses: github/ai-moderator@81159c370785e295c97461ade67d7c33576e9319\n        with:\n          token: ${{ secrets.GITHUB_TOKEN }}\n          spam-label: 'spam'\n          ai-label: 'ai-generated'\n          minimize-detected-comments: true\n          # Built-in prompt configuration (all enabled by default)\n          enable-spam-detection: true\n          enable-link-spam-detection: true\n          enable-ai-detection: true\n          # custom-prompt-path: '.github/prompts/my-custom.prompt.yml'  # Optional"
  },
  {
    "path": ".github/workflows/auto-release-pr.yml",
    "content": "name: Release Proposal Approval Tracker\n\non:\n  pull_request_review:\n    types: [submitted, dismissed]\n  pull_request:\n    types: [labeled, unlabeled, synchronize, closed]\n\npermissions:\n  contents: read\n  pull-requests: write\n  issues: write\n\njobs:\n  check-approvals:\n    name: Track Maintainer Approvals\n    runs-on: ubuntu-latest\n    # Only run on PRs with release-proposal label\n    if: contains(github.event.pull_request.labels.*.name, 'release-proposal') && github.event.pull_request.state == 'open'\n    \n    steps:\n      - name: Check approvals and update PR\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n        env:\n          MAINTAINER_LOGINS: ${{ secrets.MAINTAINER_LOGINS }}\n        with:\n          script: |\n            const pr = context.payload.pull_request;\n            \n            // Extract version from PR title (e.g., \"Release Proposal: v1.2.3\")\n            const versionMatch = pr.title.match(/Release Proposal:\\s*(v[\\d.]+(?:-[\\w.]+)?)/);\n            const commitMatch = pr.body.match(/\\*\\*Target Commit:\\*\\*\\s*`([a-f0-9]+)`/);\n            \n            if (!versionMatch || !commitMatch) {\n              console.log('Could not extract version from title or commit from body');\n              return;\n            }\n            \n            const version = versionMatch[1];\n            const targetCommit = commitMatch[1];\n            \n            console.log(`Version: ${version}, Target Commit: ${targetCommit}`);\n            \n            // Get all reviews\n            const reviews = await github.rest.pulls.listReviews({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              pull_number: pr.number\n            });\n            \n            // Get list of maintainers\n            const maintainerLoginsRaw = process.env.MAINTAINER_LOGINS || '';\n            const maintainerLogins = maintainerLoginsRaw\n              
.split(/[,;]/)\n              .map(login => login.trim())\n              .filter(login => login.length > 0);\n            \n            console.log(`Maintainer logins: ${maintainerLogins.join(', ')}`);\n            \n            // Get the latest review from each user\n            const latestReviewsByUser = {};\n            reviews.data.forEach(review => {\n              const username = review.user.login;\n              if (!latestReviewsByUser[username] || new Date(review.submitted_at) > new Date(latestReviewsByUser[username].submitted_at)) {\n                latestReviewsByUser[username] = review;\n              }\n            });\n            \n            // Count approvals from maintainers\n            const maintainerApprovals = Object.entries(latestReviewsByUser)\n              .filter(([username, review]) => \n                maintainerLogins.includes(username) && \n                review.state === 'APPROVED'\n              )\n              .map(([username, review]) => username);\n            \n            const approvalCount = maintainerApprovals.length;\n            console.log(`Found ${approvalCount} maintainer approvals from: ${maintainerApprovals.join(', ')}`);\n            \n            // Get current labels\n            const currentLabels = pr.labels.map(label => label.name);\n            const hasApprovedLabel = currentLabels.includes('approved');\n            const hasAwaitingApprovalLabel = currentLabels.includes('awaiting-approval');\n            \n            if (approvalCount >= 2 && !hasApprovedLabel) {\n              console.log('✅ Quorum reached! 
Updating PR...');\n              \n              // Remove awaiting-approval label if present\n              if (hasAwaitingApprovalLabel) {\n                await github.rest.issues.removeLabel({\n                  owner: context.repo.owner,\n                  repo: context.repo.repo,\n                  issue_number: pr.number,\n                  name: 'awaiting-approval'\n                }).catch(e => console.log('Label not found:', e.message));\n              }\n              \n              // Add approved label\n              await github.rest.issues.addLabels({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: pr.number,\n                labels: ['approved']\n              });\n              \n              // Add comment with tagging instructions\n              const approversList = maintainerApprovals.map(u => `@${u}`).join(', ');\n              const commentBody = [\n                '## ✅ Approval Quorum Reached',\n                '',\n                `This release proposal has been approved by ${approvalCount} maintainers: ${approversList}`,\n                '',\n                '### Tagging Instructions',\n                '',\n                'A maintainer should now create and push the signed tag:',\n                '',\n                '```bash',\n                `git checkout ${targetCommit}`,\n                `git tag -s ${version} -m \"Release ${version}\"`,\n                `git push origin ${version}`,\n                `git checkout -`,\n                '```',\n                '',\n                'The release workflow will automatically start when the tag is pushed.'\n              ].join('\\n');\n              \n              await github.rest.issues.createComment({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: pr.number,\n                body: commentBody\n              });\n              \n              
console.log('Posted tagging instructions');\n            } else if (approvalCount < 2 && hasApprovedLabel) {\n              console.log('⚠️  Approval count dropped below quorum, removing approved label');\n              \n              // Remove approved label\n              await github.rest.issues.removeLabel({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: pr.number,\n                name: 'approved'\n              }).catch(e => console.log('Label not found:', e.message));\n              \n              // Add awaiting-approval label\n              if (!hasAwaitingApprovalLabel) {\n                await github.rest.issues.addLabels({\n                  owner: context.repo.owner,\n                  repo: context.repo.repo,\n                  issue_number: pr.number,\n                  labels: ['awaiting-approval']\n                });\n              }\n            } else {\n              console.log(`⏳ Waiting for more approvals (${approvalCount}/2 required)`);\n            }\n\n  handle-pr-closed:\n    name: Handle PR Closed Without Tag\n    runs-on: ubuntu-latest\n    if: |\n      contains(github.event.pull_request.labels.*.name, 'release-proposal') &&\n      github.event.action == 'closed' && !contains(github.event.pull_request.labels.*.name, 'released')\n    \n    steps:\n      - name: Add cancelled label and comment\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n        with:\n          script: |\n            const pr = context.payload.pull_request;\n            \n            // Check if the release-in-progress label is present\n            const hasReleaseInProgress = pr.labels.some(label => label.name === 'release-in-progress');\n            \n            if (hasReleaseInProgress) {\n              // PR was closed while release was in progress - this is unusual\n              await github.rest.issues.createComment({\n                owner: 
context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: pr.number,\n                body: '⚠️ **Warning:** This PR was closed while a release was in progress. This may indicate an error. Please verify the release status.'\n              });\n            } else {\n              // PR was closed before tag was created - this is normal cancellation\n              const versionMatch = pr.title.match(/Release Proposal:\\s*(v[\\d.]+(?:-[\\w.]+)?)/);\n              const version = versionMatch ? versionMatch[1] : 'unknown';\n              \n              await github.rest.issues.createComment({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: pr.number,\n                body: `## 🚫 Release Proposal Cancelled\\n\\nThis release proposal for ${version} was closed without creating the tag.\\n\\nIf you want to proceed with this release later, you can create a new release proposal.`\n              });\n            }\n            \n            // Add cancelled label\n            await github.rest.issues.addLabels({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              issue_number: pr.number,\n              labels: ['cancelled']\n            });\n            \n            // Remove other workflow labels if present\n            const labelsToRemove = ['awaiting-approval', 'approved', 'release-in-progress'];\n            for (const label of labelsToRemove) {\n              try {\n                await github.rest.issues.removeLabel({\n                  owner: context.repo.owner,\n                  repo: context.repo.repo,\n                  issue_number: pr.number,\n                  name: label\n                });\n              } catch (e) {\n                console.log(`Label ${label} not found or already removed`);\n              }\n            }\n            \n            console.log('Added cancelled label and cleaned up workflow 
labels');\n            \n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "# Used as inspiration: https://github.com/mvdan/github-actions-golang\n\nname: Tests\n\non:\n  push:\n    branches:\n      - master\n      - 2.*\n  pull_request:\n    branches:\n      - master\n      - 2.*\n\nenv:\n  GOFLAGS: '-tags=nobadger,nomysql,nopgx'\n  # https://github.com/actions/setup-go/issues/491\n  GOTOOLCHAIN: local\n\npermissions:\n  contents: read\n\njobs:\n  test:\n    strategy:\n      # Default is true, cancels jobs for other platforms in the matrix if one fails\n      fail-fast: false\n      matrix:\n        os:\n          - linux\n          - mac\n          - windows\n        go:\n          - '1.26'\n\n        include:\n        # Set the minimum Go patch version for the given Go minor\n        # Usable via ${{ matrix.GO_SEMVER }}\n        - go: '1.26'\n          GO_SEMVER: '~1.26.0'\n\n        # Set some variables per OS, usable via ${{ matrix.VAR }}\n        # OS_LABEL: the VM label from GitHub Actions (see https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories)\n        # CADDY_BIN_PATH: the path to the compiled Caddy binary, for artifact publishing\n        # SUCCESS: the typical value for $? 
per OS (Windows/pwsh returns 'True')\n        - os: linux\n          OS_LABEL: ubuntu-latest\n          CADDY_BIN_PATH: ./cmd/caddy/caddy\n          SUCCESS: 0\n\n        - os: mac\n          OS_LABEL: macos-14\n          CADDY_BIN_PATH: ./cmd/caddy/caddy\n          SUCCESS: 0\n\n        - os: windows\n          OS_LABEL: windows-latest\n          CADDY_BIN_PATH: ./cmd/caddy/caddy.exe\n          SUCCESS: 'True'\n\n    runs-on: ${{ matrix.OS_LABEL }}\n    permissions:\n      contents: read\n      pull-requests: read\n      actions: write # to allow uploading artifacts and cache\n    steps:\n    - name: Harden the runner (Audit all outbound calls)\n      uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n      with:\n        egress-policy: audit\n\n    - name: Checkout code\n      uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n\n    - name: Install Go\n      uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0\n      with:\n        go-version: ${{ matrix.GO_SEMVER }}\n        check-latest: true\n\n    # These tools would be useful if we later decide to reinvestigate\n    # publishing test/coverage reports to some tool for easier consumption\n    # - name: Install test and coverage analysis tools\n    #   run: |\n    #     go get github.com/axw/gocov/gocov\n    #     go get github.com/AlekSi/gocov-xml\n    #     go get -u github.com/jstemmer/go-junit-report\n    #     echo \"$(go env GOPATH)/bin\" >> $GITHUB_PATH\n\n    - name: Print Go version and environment\n      id: vars\n      shell: bash\n      run: |\n        printf \"Using go at: $(which go)\\n\"\n        printf \"Go version: $(go version)\\n\"\n        printf \"\\n\\nGo environment:\\n\\n\"\n        go env\n        printf \"\\n\\nSystem environment:\\n\\n\"\n        env\n        printf \"Git version: $(git version)\\n\\n\"\n        # Calculate the short SHA1 hash of the git commit\n        echo \"short_sha=$(git rev-parse 
--short HEAD)\" >> $GITHUB_OUTPUT\n\n    - name: Get dependencies\n      run: |\n        go get -v -t -d ./...\n        # mkdir test-results\n\n    - name: Build Caddy\n      working-directory: ./cmd/caddy\n      env:\n        CGO_ENABLED: 0\n      run: |\n        go build -trimpath -ldflags=\"-w -s\" -v\n\n    - name: Smoke test Caddy\n      working-directory: ./cmd/caddy\n      run: |\n        ./caddy start\n        ./caddy stop\n\n    - name: Publish Build Artifact\n      uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0\n      with:\n        name: caddy_${{ runner.os }}_go${{ matrix.go }}_${{ steps.vars.outputs.short_sha }}\n        path: ${{ matrix.CADDY_BIN_PATH }}\n        compression-level: 0\n\n    # Commented bits below were useful to allow the job to continue\n    # even if the tests fail, so we can publish the report separately\n    # For info about set-output, see https://stackoverflow.com/questions/57850553/github-actions-check-steps-status\n    - name: Run tests\n      # id: step_test\n      # continue-on-error: true\n      run: |\n        # (go test -v -coverprofile=cover-profile.out -race ./... 
2>&1) > test-results/test-result.out\n        go test -v -coverprofile=\"cover-profile.out\" -short -race ./...\n        # echo \"status=$?\" >> $GITHUB_OUTPUT\n\n    # Relevant step if we reinvestigate publishing test/coverage reports\n    # - name: Prepare coverage reports\n    #   run: |\n    #     mkdir coverage\n    #     gocov convert cover-profile.out > coverage/coverage.json\n    #     # Because Windows doesn't work with input redirection like *nix, but output redirection works.\n    #     (cat ./coverage/coverage.json | gocov-xml) > coverage/coverage.xml\n\n    # To return the correct result even though we set 'continue-on-error: true'\n    # - name: Coerce correct build result\n    #   if: matrix.os != 'windows' && steps.step_test.outputs.status != ${{ matrix.SUCCESS }}\n    #   run: |\n    #     echo \"step_test ${{ steps.step_test.outputs.status }}\\n\"\n    #     exit 1\n\n  s390x-test:\n    name: test (s390x on IBM Z)\n    permissions:\n      contents: read\n      pull-requests: read\n    runs-on: ubuntu-latest\n    if: github.event.pull_request.head.repo.full_name == 'caddyserver/caddy' && github.actor != 'dependabot[bot]'\n    continue-on-error: true  # August 2020: s390x VM is down due to weather and power issues\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n          allowed-endpoints: ci-s390x.caddyserver.com:22\n\n      - name: Checkout code\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n      - name: Run Tests\n        run: |\n          set +e\n          mkdir -p ~/.ssh && echo -e \"${SSH_KEY//_/\\\\n}\" > ~/.ssh/id_ecdsa && chmod og-rwx ~/.ssh/id_ecdsa\n\n          # short sha is enough?\n          short_sha=$(git rev-parse --short HEAD)\n\n          # To shorten the following lines\n          ssh_opts=\"-o StrictHostKeyChecking=no -o 
UserKnownHostsFile=/dev/null\"\n          ssh_host=\"$CI_USER@ci-s390x.caddyserver.com\"\n\n          # The environment is fresh, so there's no point in accepting and saving the host key.\n          rsync -arz -e \"ssh $ssh_opts\" --progress --delete --exclude '.git' . \"$ssh_host\":/var/tmp/\"$short_sha\"\n          ssh $ssh_opts -t \"$ssh_host\" bash <<EOF\n          cd /var/tmp/$short_sha\n          go version\n          go env\n          printf \"\\n\\n\"\n          retries=3\n          exit_code=0\n          while ((retries > 0)); do\n            CGO_ENABLED=0 go test -p 1 -v ./...\n            exit_code=$?\n            if ((exit_code == 0)); then\n              break\n            fi\n            echo \"\\n\\nTest failed: \\$exit_code, retrying...\"\n            ((retries--))\n          done\n          echo \"Remote exit code: \\$exit_code\"\n          exit \\$exit_code\n          EOF\n          test_result=$?\n\n          # There's no need to leave the files around\n          ssh $ssh_opts \"$ssh_host\" \"rm -rf /var/tmp/'$short_sha'\"\n\n          echo \"Test exit code: $test_result\"\n          exit $test_result\n        env:\n          SSH_KEY: ${{ secrets.S390X_SSH_KEY }}\n          CI_USER: ${{ secrets.CI_USER }}\n\n  goreleaser-check:\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      pull-requests: read\n    if: github.event.pull_request.head.repo.full_name == 'caddyserver/caddy' && github.actor != 'dependabot[bot]'\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n\n      - name: Checkout code\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n      \n      - uses: goreleaser/goreleaser-action@ec59f474b9834571250b370d4735c50f8e2d1e29 # v7.0.0\n        with:\n          version: latest\n          args: check\n      - name: Install Go\n        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0\n        with:\n          go-version: \"~1.26\"\n          check-latest: true\n      - name: Install xcaddy\n        run: |\n          go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest\n          xcaddy version\n      - uses: goreleaser/goreleaser-action@ec59f474b9834571250b370d4735c50f8e2d1e29 # v7.0.0\n        with:\n          version: latest\n          args: build --single-target --snapshot\n        env:\n          TAG: ${{ github.head_ref || github.ref_name }}\n"
  },
  {
    "path": ".github/workflows/cross-build.yml",
    "content": "name: Cross-Build\n\non:\n  push:\n    branches:\n      - master\n      - 2.*\n  pull_request:\n    branches:\n      - master\n      - 2.*\n\nenv:\n  GOFLAGS: '-tags=nobadger,nomysql,nopgx'\n  CGO_ENABLED: '0'\n  # https://github.com/actions/setup-go/issues/491\n  GOTOOLCHAIN: local\n\npermissions:\n  contents: read\n\njobs:\n  build:\n    strategy:\n      fail-fast: false\n      matrix:\n        goos:\n          - 'aix'\n          - 'linux'\n          - 'solaris'\n          - 'illumos'\n          - 'dragonfly'\n          - 'freebsd'\n          - 'openbsd'\n          - 'windows'\n          - 'darwin'\n          - 'netbsd'\n        go:\n          - '1.26'\n\n        include:\n        # Set the minimum Go patch version for the given Go minor\n        # Usable via ${{ matrix.GO_SEMVER }}\n        - go: '1.26'\n          GO_SEMVER: '~1.26.0'\n\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      pull-requests: read\n    continue-on-error: true\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n\n      - name: Checkout code\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n\n      - name: Install Go\n        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0\n        with:\n          go-version: ${{ matrix.GO_SEMVER }}\n          check-latest: true\n\n      - name: Print Go version and environment\n        id: vars\n        run: |\n          printf \"Using go at: $(which go)\\n\"\n          printf \"Go version: $(go version)\\n\"\n          printf \"\\n\\nGo environment:\\n\\n\"\n          go env\n          printf \"\\n\\nSystem environment:\\n\\n\"\n          env\n\n      - name: Run Build\n        env:\n          GOOS: ${{ matrix.goos }}\n          GOARCH: ${{ matrix.goos == 'aix' && 'ppc64' || 'amd64' }}\n        
shell: bash\n        continue-on-error: true\n        working-directory: ./cmd/caddy\n        run: go build -trimpath -o caddy-\"$GOOS\"-$GOARCH 2> /dev/null\n"
  },
  {
    "path": ".github/workflows/lint.yml",
    "content": "name: Lint\n\non:\n  push:\n    branches:\n      - master\n      - 2.*\n  pull_request:\n    branches:\n      - master\n      - 2.*\n\npermissions:\n  contents: read\n\nenv:\n  # https://github.com/actions/setup-go/issues/491\n  GOTOOLCHAIN: local\n\njobs:\n  # From https://github.com/golangci/golangci-lint-action\n  golangci:\n    permissions:\n      contents: read # for actions/checkout to fetch code\n      pull-requests: read # for golangci/golangci-lint-action to fetch pull requests\n    name: lint\n    strategy:\n      matrix:\n        os:\n          - linux\n          - mac\n          - windows\n\n        include:\n        - os: linux\n          OS_LABEL: ubuntu-latest\n\n        - os: mac\n          OS_LABEL: macos-14\n\n        - os: windows\n          OS_LABEL: windows-latest\n\n    runs-on: ${{ matrix.OS_LABEL }}\n\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n\n      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n      - uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0\n        with:\n          go-version: '~1.26'\n          check-latest: true\n\n      - name: golangci-lint\n        uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0\n        with:\n          version: latest\n\n          # Windows times out frequently after about 5m50s if we don't set a longer timeout.\n          args: --timeout 10m\n\n          # Optional: show only new issues if it's a pull request. 
The default value is `false`.\n          # only-new-issues: true\n\n  govulncheck:\n    permissions:\n      contents: read\n      pull-requests: read\n    runs-on: ubuntu-latest\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n\n      - name: govulncheck\n        uses: golang/govulncheck-action@b625fbe08f3bccbe446d94fbf87fcc875a4f50ee # v1.0.4\n        with:\n          go-version-input: '~1.26.0'\n          check-latest: true\n  \n  dependency-review:\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      pull-requests: write\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n\n      - name: 'Checkout Repository'\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n      - name: 'Dependency Review'\n        uses: actions/dependency-review-action@05fe4576374b728f0c523d6a13d64c25081e0803 # v4.8.3\n        with:\n          comment-summary-in-pr: on-failure\n          # https://github.com/actions/dependency-review-action/issues/430#issuecomment-1468975566\n          base-ref: ${{ github.event.pull_request.base.sha || 'master' }}\n          head-ref: ${{ github.event.pull_request.head.sha || github.ref }}\n"
  },
  {
    "path": ".github/workflows/release-proposal.yml",
    "content": "name: Release Proposal\n\n# This workflow creates a release proposal as a PR that requires approval from maintainers\n# Triggered manually by maintainers when ready to prepare a release\non:\n  workflow_dispatch:\n    inputs:\n      version:\n        description: 'Version to release (e.g., v2.8.0)'\n        required: true\n        type: string\n      commit_hash:\n        description: 'Commit hash to release from'\n        required: true\n        type: string\n\npermissions:\n  contents: read\n\njobs:\n  create-proposal:\n    name: Create Release Proposal\n    runs-on: ubuntu-latest\n    permissions:\n      contents: write\n      pull-requests: write\n      issues: write\n    \n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n      - name: Checkout code\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          fetch-depth: 0\n\n      - name: Trim and validate inputs\n        id: inputs\n        run: |\n          # Trim whitespace from inputs\n          VERSION=$(echo \"${{ inputs.version }}\" | xargs)\n          COMMIT_HASH=$(echo \"${{ inputs.commit_hash }}\" | xargs)\n          \n          echo \"version=$VERSION\" >> $GITHUB_OUTPUT\n          echo \"commit_hash=$COMMIT_HASH\" >> $GITHUB_OUTPUT\n          \n          # Validate version format\n          if [[ ! \"$VERSION\" =~ ^v[0-9]+\\.[0-9]+\\.[0-9]+(-[a-zA-Z0-9.]+)?$ ]]; then\n            echo \"Error: Version must follow semver format (e.g., v2.8.0 or v2.8.0-beta.1)\"\n            exit 1\n          fi\n          \n          # Validate commit hash format\n          if [[ ! 
\"$COMMIT_HASH\" =~ ^[a-f0-9]{7,40}$ ]]; then\n            echo \"Error: Commit hash must be a valid SHA (7-40 characters)\"\n            exit 1\n          fi\n          \n          # Check if commit exists\n          if ! git cat-file -e \"$COMMIT_HASH\"; then\n            echo \"Error: Commit $COMMIT_HASH does not exist\"\n            exit 1\n          fi\n\n      - name: Check if tag already exists\n        run: |\n          if git rev-parse \"${{ steps.inputs.outputs.version }}\" >/dev/null 2>&1; then\n            echo \"Error: Tag ${{ steps.inputs.outputs.version }} already exists\"\n            exit 1\n          fi\n\n      - name: Check for existing proposal PR\n        id: check_existing\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n        with:\n          script: |\n            const version = '${{ steps.inputs.outputs.version }}';\n            \n            // Search for existing open PRs with release-proposal label that match this version\n            const openPRs = await github.rest.pulls.list({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              state: 'open',\n              sort: 'updated',\n              direction: 'desc'\n            });\n            \n            const existingOpenPR = openPRs.data.find(pr => \n              pr.title.includes(version) && \n              pr.labels.some(label => label.name === 'release-proposal')\n            );\n            \n            if (existingOpenPR) {\n              const hasReleased = existingOpenPR.labels.some(label => label.name === 'released');\n              const hasReleaseInProgress = existingOpenPR.labels.some(label => label.name === 'release-in-progress');\n              \n              if (hasReleased || hasReleaseInProgress) {\n                core.setFailed(`A release for ${version} is already in progress or completed: ${existingOpenPR.html_url}`);\n              } else {\n                core.setFailed(`An 
open release proposal already exists for ${version}: ${existingOpenPR.html_url}\\n\\nPlease use the existing PR or close it first.`);\n              }\n              return;\n            }\n            \n            // Check for closed PRs with this version that were cancelled\n            const closedPRs = await github.rest.pulls.list({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              state: 'closed',\n              sort: 'updated',\n              direction: 'desc'\n            });\n            \n            const cancelledPR = closedPRs.data.find(pr => \n              pr.title.includes(version) && \n              pr.labels.some(label => label.name === 'release-proposal') &&\n              pr.labels.some(label => label.name === 'cancelled')\n            );\n            \n            if (cancelledPR) {\n              console.log(`Found previously cancelled proposal for ${version}: ${cancelledPR.html_url}`);\n              console.log('Creating new proposal to replace cancelled one...');\n            } else {\n              console.log(`No existing proposal found for ${version}, proceeding...`);\n            }\n\n      - name: Generate changelog and create branch\n        id: setup\n        run: |\n          VERSION=\"${{ steps.inputs.outputs.version }}\"\n          COMMIT_HASH=\"${{ steps.inputs.outputs.commit_hash }}\"\n          \n          # Create a new branch for the release proposal\n          BRANCH_NAME=\"release_proposal-$VERSION\"\n          git checkout -b \"$BRANCH_NAME\"\n          \n          # Calculate how many commits behind HEAD\n          COMMITS_BEHIND=$(git rev-list --count ${COMMIT_HASH}..HEAD)\n          \n          if [ \"$COMMITS_BEHIND\" -eq 0 ]; then\n            BEHIND_INFO=\"This is the latest commit (HEAD)\"\n          else\n            BEHIND_INFO=\"This commit is **${COMMITS_BEHIND} commits behind HEAD**\"\n          fi\n          \n          echo \"commits_behind=$COMMITS_BEHIND\" >> 
$GITHUB_OUTPUT\n          echo \"behind_info=$BEHIND_INFO\" >> $GITHUB_OUTPUT\n          \n          # Get the last tag reachable from the target commit, so the changelog\n          # range is correct even when the commit is behind HEAD\n          LAST_TAG=$(git describe --tags --abbrev=0 \"$COMMIT_HASH\" 2>/dev/null || echo \"\")\n          \n          if [ -z \"$LAST_TAG\" ]; then\n            echo \"No previous tag found, generating full changelog\"\n            COMMITS=$(git log --pretty=format:\"- %s (%h)\" --reverse \"$COMMIT_HASH\")\n          else\n            echo \"Generating changelog since $LAST_TAG\"\n            COMMITS=$(git log --pretty=format:\"- %s (%h)\" --reverse \"${LAST_TAG}..$COMMIT_HASH\")\n          fi\n          \n          # Store changelog for PR body\n          CLEANSED_COMMITS=$(echo \"$COMMITS\" | sed 's/`/\\\\`/g')\n          echo \"changelog<<EOF\" >> $GITHUB_OUTPUT\n          echo \"$CLEANSED_COMMITS\" >> $GITHUB_OUTPUT\n          echo \"EOF\" >> $GITHUB_OUTPUT\n          \n          # Create empty commit for the PR\n          git config user.name \"github-actions[bot]\"\n          git config user.email \"github-actions[bot]@users.noreply.github.com\"\n          git commit --allow-empty -m \"Release proposal for $VERSION\"\n          \n          # Push the branch\n          git push origin \"$BRANCH_NAME\"\n          \n          echo \"branch_name=$BRANCH_NAME\" >> $GITHUB_OUTPUT\n\n      - name: Create release proposal PR\n        id: create_pr\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n        with:\n          script: |\n            const changelog = `${{ steps.setup.outputs.changelog }}`;\n            \n            const pr = await github.rest.pulls.create({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              title: `Release Proposal: ${{ steps.inputs.outputs.version }}`,\n              head: '${{ steps.setup.outputs.branch_name }}',\n              base: 'master',\n              body: `## Release Proposal: ${{ steps.inputs.outputs.version }}\n            \n            
**Target Commit:** \\`${{ steps.inputs.outputs.commit_hash }}\\`\n            **Requested by:** @${{ github.actor }}\n            **Commit Status:** ${{ steps.setup.outputs.behind_info }}\n            \n            This PR proposes creating release tag \\`${{ steps.inputs.outputs.version }}\\` at commit \\`${{ steps.inputs.outputs.commit_hash }}\\`.\n            \n            ### Approval Process\n            \n            This PR requires **approval from 2+ maintainers** before the tag can be created.\n            \n            ### What happens next?\n            \n            1. Maintainers review this proposal\n            2. When 2+ maintainer approvals are received, an automated workflow will post tagging instructions\n            3. A maintainer manually creates and pushes the signed tag\n            4. The release workflow is triggered automatically by the tag push\n            5. Upon release completion, this PR is closed and the branch is deleted\n            \n            ### Changes Since Last Release\n            \n            ${changelog}\n            \n            ### Release Checklist\n            \n            - [ ] All tests pass\n            - [ ] Security review completed\n            - [ ] Documentation updated\n            - [ ] Breaking changes documented\n            \n            ---\n            \n            **Note:** Tag creation is manual and requires a signed tag from a maintainer.`,\n              draft: true\n            });\n            \n            // Add labels\n            await github.rest.issues.addLabels({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              issue_number: pr.data.number,\n              labels: ['release-proposal', 'awaiting-approval']\n            });\n            \n            console.log(`Created PR: ${pr.data.html_url}`);\n            \n            return { number: pr.data.number, url: pr.data.html_url };\n          result-encoding: json\n\n      - name: Post 
summary\n        run: |\n          echo \"## Release Proposal PR Created! 🚀\" >> $GITHUB_STEP_SUMMARY\n          echo \"\" >> $GITHUB_STEP_SUMMARY\n          echo \"Version: **${{ steps.inputs.outputs.version }}**\" >> $GITHUB_STEP_SUMMARY\n          echo \"Commit: **${{ steps.inputs.outputs.commit_hash }}**\" >> $GITHUB_STEP_SUMMARY\n          echo \"Status: ${{ steps.setup.outputs.behind_info }}\" >> $GITHUB_STEP_SUMMARY\n          echo \"\" >> $GITHUB_STEP_SUMMARY\n          echo \"PR: ${{ fromJson(steps.create_pr.outputs.result).url }}\" >> $GITHUB_STEP_SUMMARY\n"
  },
  {
    "path": ".github/workflows/release.yml",
    "content": "name: Release\n\non:\n  push:\n    tags:\n      - 'v*.*.*'\n\nenv:\n  # https://github.com/actions/setup-go/issues/491\n  GOTOOLCHAIN: local\n\npermissions:\n  contents: read\n\njobs:\n  verify-tag:\n    name: Verify Tag Signature and Approvals\n    runs-on: ubuntu-latest\n    permissions:\n      contents: write\n      pull-requests: write\n      issues: write\n    \n    outputs:\n      verification_passed: ${{ steps.verify.outputs.passed }}\n      tag_version: ${{ steps.info.outputs.version }}\n      proposal_issue_number: ${{ steps.find_proposal.outputs.result && fromJson(steps.find_proposal.outputs.result).number || '' }}\n    \n    steps:\n      - name: Checkout code\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          fetch-depth: 0\n      # Force fetch upstream tags -- because 65 minutes\n      # tl;dr: actions/checkout@v3 runs this line:\n      #   git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +ebc278ec98bb24f2852b61fde2a9bf2e3d83818b:refs/tags/\n      # which makes its own local lightweight tag, losing all the annotations in the process. 
Our earlier script ran:\n      #   git fetch --prune --unshallow\n      # which doesn't overwrite that tag because that would be destructive.\n      # Credit to @francislavoie for the investigation.\n      # https://github.com/actions/checkout/issues/290#issuecomment-680260080\n      - name: Force fetch upstream tags\n        run: git fetch --tags --force\n\n      - name: Get tag info\n        id: info\n        run: |\n          echo \"version=${GITHUB_REF#refs/tags/}\" >> $GITHUB_OUTPUT\n          echo \"sha=$(git rev-parse HEAD)\" >> $GITHUB_OUTPUT\n\n       # https://github.community/t5/GitHub-Actions/How-to-get-just-the-tag-name/m-p/32167/highlight/true#M1027\n      - name: Print Go version and environment\n        id: vars\n        run: |\n          printf \"Using go at: $(which go)\\n\"\n          printf \"Go version: $(go version)\\n\"\n          printf \"\\n\\nGo environment:\\n\\n\"\n          go env\n          printf \"\\n\\nSystem environment:\\n\\n\"\n          env\n          echo \"version_tag=${GITHUB_REF/refs\\/tags\\//}\" >> $GITHUB_OUTPUT\n          echo \"short_sha=$(git rev-parse --short HEAD)\" >> $GITHUB_OUTPUT\n\n          # Add \"pip install\" CLI tools to PATH\n          echo ~/.local/bin >> $GITHUB_PATH\n\n          # Parse semver\n          TAG=${GITHUB_REF/refs\\/tags\\//}\n          SEMVER_RE='[^0-9]*\\([0-9]*\\)[.]\\([0-9]*\\)[.]\\([0-9]*\\)\\([0-9A-Za-z\\.-]*\\)'\n          TAG_MAJOR=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\1#\"`\n          TAG_MINOR=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\2#\"`\n          TAG_PATCH=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\3#\"`\n          TAG_SPECIAL=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\4#\"`\n          echo \"tag_major=${TAG_MAJOR}\" >> $GITHUB_OUTPUT\n          echo \"tag_minor=${TAG_MINOR}\" >> $GITHUB_OUTPUT\n          echo \"tag_patch=${TAG_PATCH}\" >> $GITHUB_OUTPUT\n          echo \"tag_special=${TAG_SPECIAL}\" >> $GITHUB_OUTPUT\n\n      - name: Validate commits and tag signatures\n        
id: verify\n        env:\n          signing_keys: ${{ secrets.SIGNING_KEYS }}\n        run: |\n          # Read the string into an array, splitting by IFS\n          IFS=\";\" read -ra keys_collection <<< \"$signing_keys\"\n          \n          # ref: https://docs.github.com/en/actions/reference/workflows-and-actions/contexts#example-usage-of-the-runner-context\n          touch \"${{ runner.temp }}/allowed_signers\"\n\n          # Iterate and print the split elements\n          for item in \"${keys_collection[@]}\"; do\n          \n            # trim leading whitespaces\n            item=\"${item##*( )}\"\n\n            # trim trailing whitespaces\n            item=\"${item%%*( )}\"\n            \n            IFS=\" \" read -ra key_components <<< \"$item\"\n            # git wants it in format: email address, type, public key\n            # ssh has it in format: type, public key, email address\n            echo \"${key_components[2]} namespaces=\\\"git\\\" ${key_components[0]} ${key_components[1]}\" >> \"${{ runner.temp }}/allowed_signers\"\n          done\n\n          git config set --global gpg.ssh.allowedSignersFile \"${{ runner.temp }}/allowed_signers\"\n\n          echo \"Verifying the tag: ${{ steps.vars.outputs.version_tag }}\"\n          \n          # Verify the tag is signed\n          if ! 
git verify-tag -v \"${{ steps.vars.outputs.version_tag }}\" 2>&1; then\n            echo \"❌ Tag verification failed!\"\n            echo \"passed=false\" >> $GITHUB_OUTPUT\n            git push --delete origin \"${{ steps.vars.outputs.version_tag }}\"\n            exit 1\n          fi\n          # Run it again to capture the output\n          git verify-tag -v \"${{ steps.vars.outputs.version_tag }}\" 2>&1 | tee /tmp/verify-output.txt;\n\n          # SSH verification output typically includes the key fingerprint\n          # Use GNU grep with Perl regex for cleaner extraction (Linux environment)\n          KEY_SHA256=$(grep -oP \"SHA256:[\\\"']?\\K[A-Za-z0-9+/=]+(?=[\\\"']?)\" /tmp/verify-output.txt | head -1 || echo \"\")\n          \n          if [ -z \"$KEY_SHA256\" ]; then\n            # Try alternative pattern with \"key\" prefix\n            KEY_SHA256=$(grep -oP \"key SHA256:[\\\"']?\\K[A-Za-z0-9+/=]+(?=[\\\"']?)\" /tmp/verify-output.txt | head -1 || echo \"\")\n          fi\n          \n          if [ -z \"$KEY_SHA256\" ]; then\n            # Fallback: extract any base64-like string (40+ chars)\n            KEY_SHA256=$(grep -oP '[A-Za-z0-9+/]{40,}=?' 
/tmp/verify-output.txt | head -1 || echo \"\")\n          fi\n          \n          if [ -z \"$KEY_SHA256\" ]; then\n            echo \"Somehow could not extract SSH key fingerprint from git verify-tag output\"\n            echo \"Cancelling flow and deleting tag\"\n            echo \"passed=false\" >> $GITHUB_OUTPUT\n            git push --delete origin \"${{ steps.vars.outputs.version_tag }}\"\n            exit 1\n          fi\n\n          echo \"✅ Tag verification succeeded!\"\n          echo \"passed=true\" >> $GITHUB_OUTPUT\n          echo \"key_id=$KEY_SHA256\" >> $GITHUB_OUTPUT\n\n      - name: Find related release proposal\n        id: find_proposal\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n        with:\n          script: |\n            const version = '${{ steps.vars.outputs.version_tag }}';\n            \n            // Search for PRs with release-proposal label that match this version\n            const prs = await github.rest.pulls.list({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              state: 'open', // Changed to 'all' to find both open and closed PRs\n              sort: 'updated',\n              direction: 'desc'\n            });\n            \n            // Find the most recent PR for this version\n            const proposal = prs.data.find(pr => \n              pr.title.includes(version) && \n              pr.labels.some(label => label.name === 'release-proposal')\n            );\n            \n            if (!proposal) {\n              console.log(`⚠️  No release proposal PR found for ${version}`);\n              console.log('This might be a hotfix or emergency release');\n              return { number: null, approved: true, approvals: 0, proposedCommit: null };\n            }\n            \n            console.log(`Found proposal PR #${proposal.number} for version ${version}`);\n            \n            // Extract commit hash from PR body\n          
  const commitMatch = proposal.body.match(/\\*\\*Target Commit:\\*\\*\\s*`([a-f0-9]+)`/);\n            const proposedCommit = commitMatch ? commitMatch[1] : null;\n            \n            if (proposedCommit) {\n              console.log(`Proposal was for commit: ${proposedCommit}`);\n            } else {\n              console.log('⚠️  No target commit hash found in PR body');\n            }\n            \n            // Get PR reviews to extract approvers\n            let approvers = 'Validated by automation';\n            let approvalCount = 2; // Minimum required\n            \n            try {\n              const reviews = await github.rest.pulls.listReviews({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                pull_number: proposal.number\n              });\n              \n              // Get latest review per user and filter for approvals\n              const latestReviewsByUser = {};\n              reviews.data.forEach(review => {\n                const username = review.user.login;\n                if (!latestReviewsByUser[username] || new Date(review.submitted_at) > new Date(latestReviewsByUser[username].submitted_at)) {\n                  latestReviewsByUser[username] = review;\n                }\n              });\n              \n              const approvalReviews = Object.values(latestReviewsByUser).filter(review => \n                review.state === 'APPROVED'\n              );\n              \n              if (approvalReviews.length > 0) {\n                approvers = approvalReviews.map(r => '@' + r.user.login).join(', ');\n                approvalCount = approvalReviews.length;\n                console.log(`Found ${approvalCount} approvals from: ${approvers}`);\n              }\n            } catch (error) {\n              console.log(`Could not fetch reviews: ${error.message}`);\n            }\n            \n            return {\n              number: proposal.number,\n              
approved: true,\n              approvals: approvalCount,\n              approvers: approvers,\n              proposedCommit: proposedCommit\n            };\n          result-encoding: json\n\n      - name: Verify proposal commit\n        run: |\n          APPROVALS='${{ steps.find_proposal.outputs.result }}'\n          \n          # Parse JSON\n          PROPOSED_COMMIT=$(echo \"$APPROVALS\" | jq -r '.proposedCommit')\n          CURRENT_COMMIT=\"${{ steps.info.outputs.sha }}\"\n          \n          echo \"Proposed commit: $PROPOSED_COMMIT\"\n          echo \"Current commit: $CURRENT_COMMIT\"\n          \n          # Check if commits match (if proposal had a target commit)\n          if [ \"$PROPOSED_COMMIT\" != \"null\" ] && [ -n \"$PROPOSED_COMMIT\" ]; then\n            # Normalize both commits to full SHA for comparison\n            PROPOSED_FULL=$(git rev-parse \"$PROPOSED_COMMIT\" 2>/dev/null || echo \"\")\n            CURRENT_FULL=$(git rev-parse \"$CURRENT_COMMIT\" 2>/dev/null || echo \"\")\n            \n            if [ -z \"$PROPOSED_FULL\" ]; then\n              echo \"⚠️  Could not resolve proposed commit: $PROPOSED_COMMIT\"\n            elif [ \"$PROPOSED_FULL\" != \"$CURRENT_FULL\" ]; then\n              echo \"❌ Commit mismatch!\"\n              echo \"The tag points to commit $CURRENT_FULL but the proposal was for $PROPOSED_FULL\"\n              echo \"This indicates an error in tag creation.\"\n              # Delete the tag remotely\n              git push --delete origin \"${{ steps.vars.outputs.version_tag }}\"\n              echo \"Tag ${{steps.vars.outputs.version_tag}} has been deleted\"\n              exit 1\n            else\n              echo \"✅ Commit hash matches proposal\"\n            fi\n          else\n            echo \"⚠️  No target commit found in proposal (might be legacy release)\"\n          fi\n          \n          echo \"✅ Tag verification completed\"\n\n      - name: Update release proposal PR\n        if: 
fromJson(steps.find_proposal.outputs.result).number != null\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n        with:\n          script: |\n            const result = ${{ steps.find_proposal.outputs.result }};\n            \n            if (result.number) {\n              // Add in-progress label\n              await github.rest.issues.addLabels({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: result.number,\n                labels: ['release-in-progress']\n              });\n              \n              // Remove approved label if present\n              try {\n                await github.rest.issues.removeLabel({\n                  owner: context.repo.owner,\n                  repo: context.repo.repo,\n                  issue_number: result.number,\n                  name: 'approved'\n                });\n              } catch (e) {\n                console.log('Approved label not found:', e.message);\n              }\n              \n              const commentBody = [\n                '## 🚀 Release Workflow Started',\n                '',\n                '- **Tag:** ${{ steps.info.outputs.version }}',\n                '- **Signed by key:** ${{ steps.verify.outputs.key_id }}',\n                '- **Commit:** ${{ steps.info.outputs.sha }}',\n                '- **Approved by:** ' + result.approvers,\n                '',\n                'Release workflow is now running. 
This PR will be updated when the release is published.'\n              ].join('\\n');\n              \n              await github.rest.issues.createComment({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: result.number,\n                body: commentBody\n              });\n            }\n\n      - name: Summary\n        run: |\n          APPROVALS='${{ steps.find_proposal.outputs.result }}'\n          PROPOSED_COMMIT=$(echo \"$APPROVALS\" | jq -r '.proposedCommit // \"N/A\"')\n          APPROVERS=$(echo \"$APPROVALS\" | jq -r '.approvers // \"N/A\"')\n          \n          echo \"## Tag Verification Summary 🔐\" >> $GITHUB_STEP_SUMMARY\n          echo \"\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Tag:** ${{ steps.info.outputs.version }}\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Commit:** ${{ steps.info.outputs.sha }}\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Proposed Commit:** $PROPOSED_COMMIT\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Signature:** ✅ Verified\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Signed by:** ${{ steps.verify.outputs.key_id }}\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Approvals:** ✅ Sufficient\" >> $GITHUB_STEP_SUMMARY\n          echo \"- **Approved by:** $APPROVERS\" >> $GITHUB_STEP_SUMMARY\n          echo \"\" >> $GITHUB_STEP_SUMMARY\n          echo \"Proceeding with release build...\" >> $GITHUB_STEP_SUMMARY\n      \n  release:\n    name: Release\n    needs: verify-tag\n    if: ${{ needs.verify-tag.outputs.verification_passed == 'true' }}\n    strategy:\n      matrix:\n        os: \n          - ubuntu-latest\n        go: \n          - '1.26'\n\n        include:\n        # Set the minimum Go patch version for the given Go minor\n        # Usable via ${{ matrix.GO_SEMVER }}\n        - go: '1.26'\n          GO_SEMVER: '~1.26.0'\n\n    runs-on: ${{ matrix.os }}\n    # https://github.com/sigstore/cosign/issues/1258#issuecomment-1002251233\n    
# https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#adding-permissions-settings\n    permissions:\n      id-token: write\n      # https://docs.github.com/en/rest/overview/permissions-required-for-github-apps#permission-on-contents\n      # \"Releases\" is part of `contents`, so it needs the `write`\n      contents: write\n      issues: write\n      pull-requests: write\n\n    steps:\n    - name: Harden the runner (Audit all outbound calls)\n      uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n      with:\n        egress-policy: audit\n\n    - name: Checkout code\n      uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n      with:\n        fetch-depth: 0\n\n    - name: Install Go\n      uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0\n      with:\n        go-version: ${{ matrix.GO_SEMVER }}\n        check-latest: true\n\n    # Force fetch upstream tags -- because 65 minutes\n    # tl;dr: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v4.2.2 runs this line:\n    #   git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +ebc278ec98bb24f2852b61fde2a9bf2e3d83818b:refs/tags/\n    # which makes its own local lightweight tag, losing all the annotations in the process. 
Our earlier script ran:\n    #   git fetch --prune --unshallow\n    # which doesn't overwrite that tag because that would be destructive.\n    # Credit to @francislavoie for the investigation.\n    # https://github.com/actions/checkout/issues/290#issuecomment-680260080\n    - name: Force fetch upstream tags\n      run: git fetch --tags --force\n\n    # https://github.community/t5/GitHub-Actions/How-to-get-just-the-tag-name/m-p/32167/highlight/true#M1027\n    - name: Print Go version and environment\n      id: vars\n      run: |\n        printf \"Using go at: $(which go)\\n\"\n        printf \"Go version: $(go version)\\n\"\n        printf \"\\n\\nGo environment:\\n\\n\"\n        go env\n        printf \"\\n\\nSystem environment:\\n\\n\"\n        env\n        echo \"version_tag=${GITHUB_REF/refs\\/tags\\//}\" >> $GITHUB_OUTPUT\n        echo \"short_sha=$(git rev-parse --short HEAD)\" >> $GITHUB_OUTPUT\n\n        # Add \"pip install\" CLI tools to PATH\n        echo ~/.local/bin >> $GITHUB_PATH\n\n        # Parse semver\n        TAG=${GITHUB_REF/refs\\/tags\\//}\n        SEMVER_RE='[^0-9]*\\([0-9]*\\)[.]\\([0-9]*\\)[.]\\([0-9]*\\)\\([0-9A-Za-z\\.-]*\\)'\n        TAG_MAJOR=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\1#\"`\n        TAG_MINOR=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\2#\"`\n        TAG_PATCH=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\3#\"`\n        TAG_SPECIAL=`echo ${TAG#v} | sed -e \"s#$SEMVER_RE#\\4#\"`\n        echo \"tag_major=${TAG_MAJOR}\" >> $GITHUB_OUTPUT\n        echo \"tag_minor=${TAG_MINOR}\" >> $GITHUB_OUTPUT\n        echo \"tag_patch=${TAG_PATCH}\" >> $GITHUB_OUTPUT\n        echo \"tag_special=${TAG_SPECIAL}\" >> $GITHUB_OUTPUT\n\n    # Cloudsmith CLI tooling for pushing releases\n    # See https://help.cloudsmith.io/docs/cli\n    - name: Install Cloudsmith CLI\n      run: pip install --upgrade cloudsmith-cli\n\n    - name: Install Cosign\n      uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # main\n    - name: Cosign 
version\n      run: cosign version\n    - name: Install Syft\n      uses: anchore/sbom-action/download-syft@17ae1740179002c89186b61233e0f892c3118b11 # main\n    - name: Syft version\n      run: syft version\n    - name: Install xcaddy\n      run: |\n        go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest\n        xcaddy version\n    # GoReleaser will take care of publishing those artifacts into the release\n    - name: Run GoReleaser\n      uses: goreleaser/goreleaser-action@ec59f474b9834571250b370d4735c50f8e2d1e29 # v7.0.0\n      with:\n        version: latest\n        args: release --clean --timeout 60m\n      env:\n        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        TAG: ${{ steps.vars.outputs.version_tag }}\n        COSIGN_EXPERIMENTAL: 1\n\n    # Only publish on non-special tags (e.g. non-beta)\n    # We will continue to push to Gemfury for the foreseeable future, although\n    # Cloudsmith is probably better, to not break things for existing users of Gemfury.\n    # See https://gemfury.com/caddy/deb:caddy\n    - name: Publish .deb to Gemfury\n      if: ${{ steps.vars.outputs.tag_special == '' }}\n      env:\n        GEMFURY_PUSH_TOKEN: ${{ secrets.GEMFURY_PUSH_TOKEN }}\n      run: |\n        for filename in dist/*.deb; do\n          # armv6 and armv7 are both \"armhf\" so we can skip the duplicate\n          if [[ \"$filename\" == *\"armv6\"* ]]; then\n            echo \"Skipping $filename\"\n            continue\n          fi\n\n          curl -F package=@\"$filename\" https://${GEMFURY_PUSH_TOKEN}:@push.fury.io/caddy/\n        done\n\n    # Publish only special tags (unstable/beta/rc) to the \"testing\" repo\n    # See https://cloudsmith.io/~caddy/repos/testing/\n    - name: Publish .deb to Cloudsmith (special tags)\n      if: ${{ steps.vars.outputs.tag_special != '' }}\n      env:\n        CLOUDSMITH_API_KEY: ${{ secrets.CLOUDSMITH_API_KEY }}\n      run: |\n        for filename in dist/*.deb; do\n          # armv6 and armv7 are both 
\"armhf\" so we can skip the duplicate\n          if [[ \"$filename\" == *\"armv6\"* ]]; then\n            echo \"Skipping $filename\"\n            continue\n          fi\n\n          echo \"Pushing $filename to 'testing'\"\n          cloudsmith push deb caddy/testing/any-distro/any-version $filename\n        done\n\n    # Publish stable tags to Cloudsmith to both repos, \"stable\" and \"testing\"\n    # See https://cloudsmith.io/~caddy/repos/stable/\n    - name: Publish .deb to Cloudsmith (stable tags)\n      if: ${{ steps.vars.outputs.tag_special == '' }}\n      env:\n        CLOUDSMITH_API_KEY: ${{ secrets.CLOUDSMITH_API_KEY }}\n      run: |\n        for filename in dist/*.deb; do\n          # armv6 and armv7 are both \"armhf\" so we can skip the duplicate\n          if [[ \"$filename\" == *\"armv6\"* ]]; then\n            echo \"Skipping $filename\"\n            continue\n          fi\n\n          echo \"Pushing $filename to 'stable'\"\n          cloudsmith push deb caddy/stable/any-distro/any-version $filename\n\n          echo \"Pushing $filename to 'testing'\"\n          cloudsmith push deb caddy/testing/any-distro/any-version $filename\n        done\n\n    - name: Update release proposal PR\n      if: needs.verify-tag.outputs.proposal_issue_number != ''\n      uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0\n      with:\n        script: |\n          const prNumber = parseInt('${{ needs.verify-tag.outputs.proposal_issue_number }}');\n          \n          if (prNumber) {\n            // Get PR details to find the branch\n            const pr = await github.rest.pulls.get({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              pull_number: prNumber\n            });\n            \n            const branchName = pr.data.head.ref;\n            \n            // Remove in-progress label\n            try {\n              await github.rest.issues.removeLabel({\n                owner: 
context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: prNumber,\n                name: 'release-in-progress'\n              });\n            } catch (e) {\n              console.log('Label not found:', e.message);\n            }\n            \n            // Add released label\n            await github.rest.issues.addLabels({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              issue_number: prNumber,\n              labels: ['released']\n            });\n            \n            // Add final comment\n            await github.rest.issues.createComment({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              issue_number: prNumber,\n              body: '## ✅ Release Published\\n\\nThe release has been successfully published and is now available.'\n            });\n            \n            // Close the PR if it's still open\n            if (pr.data.state === 'open') {\n              await github.rest.pulls.update({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                pull_number: prNumber,\n                state: 'closed'\n              });\n              console.log(`Closed PR #${prNumber}`);\n            }\n            \n            // Delete the branch\n            try {\n              await github.rest.git.deleteRef({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                ref: `heads/${branchName}`\n              });\n              console.log(`Deleted branch: ${branchName}`);\n            } catch (e) {\n              console.log(`Could not delete branch ${branchName}: ${e.message}`);\n            }\n          }\n"
  },
  {
    "path": ".github/workflows/release_published.yml",
    "content": "name: Release Published\n\n# Event payload: https://developer.github.com/webhooks/event-payloads/#release\non:\n  release:\n    types: [published]\n\npermissions:\n  contents: read\n\njobs:\n  release:\n    name: Release Published\n    strategy:\n      matrix:\n        os: \n          - ubuntu-latest\n    runs-on: ${{ matrix.os }}\n    permissions:\n      contents: read\n      pull-requests: read\n      actions: write\n    steps:\n\n    # See https://github.com/peter-evans/repository-dispatch\n    - name: Harden the runner (Audit all outbound calls)\n      uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n      with:\n        egress-policy: audit\n\n    - name: Trigger event on caddyserver/dist\n      uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1\n      with:\n        token: ${{ secrets.REPO_DISPATCH_TOKEN }}\n        repository: caddyserver/dist\n        event-type: release-tagged\n        client-payload: '{\"tag\": \"${{ github.event.release.tag_name }}\"}'\n\n    - name: Trigger event on caddyserver/caddy-docker\n      uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1\n      with:\n        token: ${{ secrets.REPO_DISPATCH_TOKEN }}\n        repository: caddyserver/caddy-docker\n        event-type: release-tagged\n        client-payload: '{\"tag\": \"${{ github.event.release.tag_name }}\"}'\n\n"
  },
  {
    "path": ".github/workflows/scorecard.yml",
    "content": "# This workflow uses actions that are not certified by GitHub. They are provided\n# by a third-party and are governed by separate terms of service, privacy\n# policy, and support documentation.\n\nname: OpenSSF Scorecard supply-chain security\non:\n  # For Branch-Protection check. Only the default branch is supported. See\n  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection\n  branch_protection_rule:\n  # To guarantee Maintained check is occasionally updated. See\n  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained\n  schedule:\n    - cron: '20 2 * * 5'\n  push:\n    branches: [ \"master\", \"2.*\" ]\n  pull_request:\n    branches: [ \"master\", \"2.*\" ]\n\n\n# Declare default permissions as read only.\npermissions: read-all\n\njobs:\n  analysis:\n    name: Scorecard analysis\n    runs-on: ubuntu-latest\n    # `publish_results: true` only works when run from the default branch. conditional can be removed if disabled.\n    if: github.event.repository.default_branch == github.ref_name || github.event_name == 'pull_request'\n    permissions:\n      # Needed to upload the results to code-scanning dashboard.\n      security-events: write\n      # Needed to publish results and get a badge (see publish_results below).\n      id-token: write\n      # Uncomment the permissions below if installing in a private repository.\n      # contents: read\n      # actions: read\n\n    steps:\n      - name: Harden the runner (Audit all outbound calls)\n        uses: step-security/harden-runner@a90bcbc6539c36a85cdfeb73f7e2f433735f215b # v2.15.0\n        with:\n          egress-policy: audit\n\n      - name: \"Checkout code\"\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          persist-credentials: false\n\n      - name: \"Run analysis\"\n        uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3\n        with:\n          results_file: 
results.sarif\n          results_format: sarif\n          # (Optional) \"write\" PAT token. Uncomment the `repo_token` line below if:\n          # - you want to enable the Branch-Protection check on a *public* repository, or\n          # - you are installing Scorecard on a *private* repository\n          # To create the PAT, follow the steps in https://github.com/ossf/scorecard-action?tab=readme-ov-file#authentication-with-fine-grained-pat-optional.\n          # repo_token: ${{ secrets.SCORECARD_TOKEN }}\n\n          # Public repositories:\n          #   - Publish results to OpenSSF REST API for easy access by consumers\n          #   - Allows the repository to include the Scorecard badge.\n          #   - See https://github.com/ossf/scorecard-action#publishing-results.\n          # For private repositories:\n          #   - `publish_results` will always be set to `false`, regardless\n          #     of the value entered here.\n          publish_results: true\n\n          # (Optional) Uncomment file_mode if you have a .gitattributes with files marked export-ignore\n          # file_mode: git\n\n      # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF\n      # format to the repository Actions tab.\n      - name: \"Upload artifact\"\n        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0\n        with:\n          name: SARIF file\n          path: results.sarif\n          retention-days: 5\n\n      # Upload the results to GitHub's code scanning dashboard (optional).\n      # Commenting out will disable upload of results to your repo's Code Scanning dashboard\n      - name: \"Upload to code-scanning\"\n        uses: github/codeql-action/upload-sarif@89a39a4e59826350b863aa6b6252a07ad50cf83e # v3.29.5\n        with:\n          sarif_file: results.sarif\n"
  },
  {
    "path": ".gitignore",
    "content": "_gitignore/\n*.log\nCaddyfile\nCaddyfile.*\n!caddyfile/\n!caddyfile.go\n\n# artifacts from pprof tooling\n*.prof\n*.test\n\n# build artifacts and helpers\ncmd/caddy/caddy\ncmd/caddy/caddy.exe\ncmd/caddy/tmp/*.exe\ncmd/caddy/.env\n\n# mac specific\n.DS_Store\n\n# go modules\nvendor\n\n# goreleaser artifacts\ndist\ncaddy-build\ncaddy-dist\n\n# IDE files\n.idea/\n.vscode/\n"
  },
  {
    "path": ".golangci.yml",
    "content": "version: \"2\"\nrun:\n  issues-exit-code: 1\n  tests: false\n  build-tags:\n    - nobadger\n    - nomysql\n    - nopgx\noutput:\n  formats:\n    text:\n      path: stdout\n      print-linter-name: true\n      print-issued-lines: true\nlinters:\n  default: none\n  enable:\n    - asasalint\n    - asciicheck\n    - bidichk\n    - bodyclose\n    - decorder\n    - dogsled\n    - dupl\n    - dupword\n    - durationcheck\n    - errcheck\n    - errname\n    - exhaustive\n    - gosec\n    - govet\n    - importas\n    - ineffassign\n    - misspell\n    - modernize\n    - prealloc\n    - promlinter\n    - sloglint\n    - sqlclosecheck\n    - staticcheck\n    - testableexamples\n    - testifylint\n    - tparallel\n    - unconvert\n    - unused\n    - wastedassign\n    - whitespace\n    - zerologlint\n  settings:\n    staticcheck:\n      checks: [\"all\", \"-ST1000\", \"-ST1003\", \"-ST1016\", \"-ST1020\", \"-ST1021\", \"-ST1022\",  \"-QF1006\", \"-QF1008\"] # default, and exclude 1 more undesired check\n    errcheck:\n      exclude-functions:\n        - fmt.*\n        - (go.uber.org/zap/zapcore.ObjectEncoder).AddObject\n        - (go.uber.org/zap/zapcore.ObjectEncoder).AddArray\n    exhaustive:\n      ignore-enum-types: reflect.Kind|svc.Cmd\n  exclusions:\n    generated: lax\n    presets:\n      - comments\n      - common-false-positives\n      - legacy\n      - std-error-handling\n    rules:\n      - linters:\n          - gosec\n        text: G115 # TODO: Either we should fix the issues or nuke the linter if it's bad\n      - linters:\n          - gosec\n        text: G107 # we aren't calling unknown URL\n      - linters:\n          - gosec\n        text: G203 # as a web server that's expected to handle any template, this is totally in the hands of the user.\n      - linters:\n          - gosec\n        text: G204 # we're shelling out to known commands, not relying on user-defined input.\n      - linters:\n          - gosec\n        # the choice of weakrand is 
deliberate, hence the named import \"weakrand\"\n        path: modules/caddyhttp/reverseproxy/selectionpolicies.go\n        text: G404\n      - linters:\n          - gosec\n        path: modules/caddyhttp/reverseproxy/streaming.go\n        text: G404\n      - linters:\n          - dupl\n        path: modules/logging/filters.go\n      - linters:\n          - dupl\n        path: modules/caddyhttp/matchers.go\n      - linters:\n          - dupl\n        path: modules/caddyhttp/vars.go\n      - linters:\n          - errcheck\n        path: _test\\.go\n    paths:\n      - third_party$\n      - builtin$\n      - examples$\nformatters:\n  enable:\n    - gci\n    - gofmt\n    - gofumpt\n    - goimports\n  settings:\n    gci:\n      sections:\n        - standard # Standard section: captures all standard packages.\n        - default # Default section: contains all imports that could not be matched to another section type.\n        - prefix(github.com/caddyserver/caddy/v2/cmd) # ensure that this is always at the top and always has a line break.\n        - prefix(github.com/caddyserver/caddy) # Custom section: groups all imports with the specified Prefix.\n      custom-order: true\n  exclusions:\n    generated: lax\n    paths:\n      - third_party$\n      - builtin$\n      - examples$\n"
  },
  {
    "path": ".goreleaser.yml",
    "content": "version: 2\n\nbefore:\n  hooks:\n    # The build is done in this particular way to build Caddy in a designated directory named in .gitignore.\n    # This is so we can run goreleaser on tag without Git complaining of being dirty. The main.go in cmd/caddy directory \n    # cannot be built within that directory due to changes necessary for the build causing Git to be dirty, which\n    # subsequently causes gorleaser to refuse running.\n    - rm -rf caddy-build caddy-dist vendor\n    # vendor Caddy deps\n    - go mod vendor\n    - mkdir -p caddy-build\n    - cp cmd/caddy/main.go caddy-build/main.go\n    - /bin/sh -c 'cd ./caddy-build && go mod init caddy'\n    # prepare syso files for windows embedding\n    - /bin/sh -c 'for a in amd64 arm64; do XCADDY_SKIP_BUILD=1 GOOS=windows GOARCH=$a xcaddy build {{.Env.TAG}}; done'\n    - /bin/sh -c 'mv /tmp/buildenv_*/*.syso caddy-build'\n    # GoReleaser doesn't seem to offer {{.Tag}} at this stage, so we have to embed it into the env\n    # so we run: TAG=$(git describe --abbrev=0) goreleaser release --rm-dist --skip-publish --skip-validate\n    - go mod edit -require=github.com/caddyserver/caddy/v2@{{.Env.TAG}} ./caddy-build/go.mod\n    # as of Go 1.16, `go` commands no longer automatically change go.{mod,sum}. We now have to explicitly\n    # run `go mod tidy`. 
The `/bin/sh -c '...'` is because goreleaser can't find cd in PATH without shell invocation.\n    - /bin/sh -c 'cd ./caddy-build && go mod tidy'\n    # vendor the deps of the prepared to-build module\n    - /bin/sh -c 'cd ./caddy-build && go mod vendor'\n    - git clone --depth 1 https://github.com/caddyserver/dist caddy-dist\n    - mkdir -p caddy-dist/man\n    - go mod download\n    - go run cmd/caddy/main.go manpage --directory ./caddy-dist/man\n    - gzip -r ./caddy-dist/man/\n    - /bin/sh -c 'go run cmd/caddy/main.go completion bash > ./caddy-dist/scripts/bash-completion'\n\nbuilds:\n- env:\n  - CGO_ENABLED=0\n  - GO111MODULE=on\n  dir: ./caddy-build\n  binary: caddy\n  goos:\n  - darwin\n  - linux\n  - windows\n  - freebsd\n  goarch:\n  - amd64\n  - arm\n  - arm64\n  - s390x\n  - ppc64le\n  - riscv64\n  goarm:\n  - \"5\"\n  - \"6\"\n  - \"7\"\n  ignore:\n    - goos: darwin\n      goarch: arm\n    - goos: darwin\n      goarch: ppc64le\n    - goos: darwin\n      goarch: s390x\n    - goos: darwin\n      goarch: riscv64\n    - goos: windows\n      goarch: ppc64le\n    - goos: windows\n      goarch: s390x\n    - goos: windows\n      goarch: riscv64\n    - goos: windows\n      goarch: arm\n    - goos: freebsd\n      goarch: ppc64le\n    - goos: freebsd\n      goarch: s390x\n    - goos: freebsd\n      goarch: riscv64\n    - goos: freebsd\n      goarch: arm\n      goarm: \"5\"\n  flags:\n  - -trimpath\n  - -mod=readonly\n  ldflags:\n  - -s -w\n  tags:\n  - nobadger\n  - nomysql\n  - nopgx\n\nsigns:\n  - cmd: cosign\n    signature: \"${artifact}.sig\"\n    certificate: '{{ trimsuffix (trimsuffix .Env.artifact \".zip\") \".tar.gz\" }}.pem'\n    args: [\"sign-blob\", \"--yes\", \"--output-signature=${signature}\", \"--output-certificate\", \"${certificate}\", \"${artifact}\"]\n    artifacts: all\n\nsboms:\n  - artifacts: binary\n    documents:\n      - >-\n        {{ .ProjectName }}_\n        {{- .Version }}_\n        {{- if eq .Os \"darwin\" }}mac{{ else }}{{ .Os }}{{ 
end }}_\n        {{- .Arch }}\n        {{- with .Arm }}v{{ . }}{{ end }}\n        {{- with .Mips }}_{{ . }}{{ end }}\n        {{- if not (eq .Amd64 \"v1\") }}{{ .Amd64 }}{{ end }}.sbom\n    cmd: syft\n    args: [\"$artifact\", \"--file\", \"${document}\", \"--output\", \"cyclonedx-json\"]\n\narchives:\n  - id: default\n    format_overrides:\n      - goos: windows\n        formats: zip\n    name_template: >-\n      {{ .ProjectName }}_\n      {{- .Version }}_\n      {{- if eq .Os \"darwin\" }}mac{{ else }}{{ .Os }}{{ end }}_\n      {{- .Arch }}\n      {{- with .Arm }}v{{ . }}{{ end }}\n      {{- with .Mips }}_{{ . }}{{ end }}\n      {{- if not (eq .Amd64 \"v1\") }}{{ .Amd64 }}{{ end }}\n  \n  # package the 'caddy-build' directory into a tarball,\n  # allowing users to build the exact same set of files as ours.\n  - id: source\n    meta: true\n    name_template: \"{{ .ProjectName }}_{{ .Version }}_buildable-artifact\"\n    files:\n      - src: LICENSE\n        dst: ./LICENSE\n      - src: README.md\n        dst: ./README.md\n      - src: AUTHORS\n        dst: ./AUTHORS\n      - src: ./caddy-build\n        dst: ./\n\nsource:\n  enabled: true\n  name_template: '{{ .ProjectName }}_{{ .Version }}_src'\n  format: 'tar.gz'\n\n  # Additional files/template/globs you want to add to the source archive.\n  #\n  # Default: empty.\n  files:\n    - vendor\n\n\nchecksum:\n  algorithm: sha512\n\nnfpms:\n  - id: default\n    package_name: caddy\n\n    vendor: Dyanim\n    homepage: https://caddyserver.com\n    maintainer: Matthew Holt <mholt@users.noreply.github.com>\n    description: |\n      Caddy - Powerful, enterprise-ready, open source web server with automatic HTTPS written in Go\n    license: Apache 2.0\n\n    formats:\n      - deb\n      # - rpm\n\n    bindir: /usr/bin\n    contents:\n      - src: ./caddy-dist/init/caddy.service\n        dst: /lib/systemd/system/caddy.service\n        \n      - src: ./caddy-dist/init/caddy-api.service\n        dst: 
/lib/systemd/system/caddy-api.service\n      \n      - src: ./caddy-dist/welcome/index.html\n        dst: /usr/share/caddy/index.html\n      \n      - src: ./caddy-dist/scripts/bash-completion\n        dst: /etc/bash_completion.d/caddy\n    \n      - src: ./caddy-dist/config/Caddyfile\n        dst: /etc/caddy/Caddyfile\n        type: config\n\n      - src: ./caddy-dist/man/*\n        dst: /usr/share/man/man8/\n\n    scripts:\n      postinstall: ./caddy-dist/scripts/postinstall.sh\n      preremove: ./caddy-dist/scripts/preremove.sh\n      postremove: ./caddy-dist/scripts/postremove.sh\n\n    provides:\n      - httpd\n\nrelease:\n  github:\n    owner: caddyserver\n    name: caddy\n  draft: true\n  prerelease: auto\n\nchangelog:\n  sort: asc\n  filters:\n    exclude:\n    - '^chore:'\n    - '^ci:'\n    - '^docs?:'\n    - '^readme:'\n    - '^tests?:'\n    - '^\\w+\\s+' # a hack to remove commit messages without colons thus don't correspond to a package\n"
  },
  {
    "path": ".pre-commit-config.yaml",
    "content": "repos:\n- repo: https://github.com/gitleaks/gitleaks\n  rev: v8.16.3\n  hooks:\n  - id: gitleaks\n- repo: https://github.com/golangci/golangci-lint\n  rev: v1.52.2\n  hooks:\n  - id: golangci-lint-config-verify\n  - id: golangci-lint\n  - id: golangci-lint-fmt\n- repo: https://github.com/jumanjihouse/pre-commit-hooks\n  rev: 3.0.0\n  hooks:\n  - id: shellcheck\n- repo: https://github.com/pre-commit/pre-commit-hooks\n  rev: v4.4.0\n  hooks:\n  - id: end-of-file-fixer\n  - id: trailing-whitespace\n"
  },
  {
    "path": "AUTHORS",
    "content": "# This is the official list of Caddy Authors for copyright purposes.\n# Authors may be either individual people or legal entities.\n#\n# Not all individual contributors are authors. For the full list of\n# contributors, refer to the project's page on GitHub or the repo's\n# commit history.\n\nMatthew Holt <Matthew.Holt@gmail.com>\nLight Code Labs <sales@lightcodelabs.com>\nArdan Labs <info@ardanlabs.com>\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n\t<a href=\"https://caddyserver.com\">\n\t\t<picture>\n\t\t\t<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://user-images.githubusercontent.com/1128849/210187358-e2c39003-9a5e-4dd5-a783-6deb6483ee72.svg\">\n\t\t\t<source media=\"(prefers-color-scheme: light)\" srcset=\"https://user-images.githubusercontent.com/1128849/210187356-dfb7f1c5-ac2e-43aa-bb23-fc014280ae1f.svg\">\n\t\t\t<img src=\"https://user-images.githubusercontent.com/1128849/210187356-dfb7f1c5-ac2e-43aa-bb23-fc014280ae1f.svg\" alt=\"Caddy\" width=\"550\">\n\t\t</picture>\n\t</a>\n\t<br>\n\t<h3 align=\"center\">a <a href=\"https://zerossl.com\"><img src=\"https://user-images.githubusercontent.com/55066419/208327323-2770dc16-ec09-43a0-9035-c5b872c2ad7f.svg\" height=\"28\" style=\"vertical-align: -7.7px\" valign=\"middle\"></a> project</h3>\n</p>\n<hr>\n<h3 align=\"center\">Every site on HTTPS</h3>\n<p align=\"center\">Caddy is an extensible server platform that uses TLS by default.</p>\n<p align=\"center\">\n\t<a href=\"https://github.com/caddyserver/caddy/releases\">Releases</a> ·\n\t<a href=\"https://caddyserver.com/docs/\">Documentation</a> ·\n\t<a href=\"https://caddy.community\">Get Help</a>\n</p>\n<p align=\"center\">\n\t<a href=\"https://github.com/caddyserver/caddy/actions/workflows/ci.yml\"><img src=\"https://github.com/caddyserver/caddy/actions/workflows/ci.yml/badge.svg\"></a>\n\t&nbsp;\n\t<a href=\"https://www.bestpractices.dev/projects/7141\"><img src=\"https://www.bestpractices.dev/projects/7141/badge\"></a>\n\t&nbsp;\n\t<a href=\"https://pkg.go.dev/github.com/caddyserver/caddy/v2\"><img src=\"https://img.shields.io/badge/godoc-reference-%23007d9c.svg\"></a>\n\t&nbsp;\n\t<a href=\"https://x.com/caddyserver\" title=\"@caddyserver on Twitter\"><img src=\"https://img.shields.io/twitter/follow/caddyserver\" alt=\"@caddyserver on Twitter\"></a>\n\t&nbsp;\n\t<a href=\"https://caddy.community\" title=\"Caddy Forum\"><img 
src=\"https://img.shields.io/badge/community-forum-ff69b4.svg\" alt=\"Caddy Forum\"></a>\n\t<br>\n\t<a href=\"https://sourcegraph.com/github.com/caddyserver/caddy?badge\" title=\"Caddy on Sourcegraph\"><img src=\"https://sourcegraph.com/github.com/caddyserver/caddy/-/badge.svg\" alt=\"Caddy on Sourcegraph\"></a>\n\t&nbsp;\n\t<a href=\"https://cloudsmith.io/~caddy/repos/\"><img src=\"https://img.shields.io/badge/OSS%20hosting%20by-cloudsmith-blue?logo=cloudsmith\" alt=\"Cloudsmith\"></a>\n</p>\n<p align=\"center\">\n\t<b>Powered by</b>\n\t<br>\n\t<a href=\"https://github.com/caddyserver/certmagic\">\n\t\t<picture>\n\t\t\t<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://user-images.githubusercontent.com/55066419/206946718-740b6371-3df3-4d72-a822-47e4c48af999.png\">\n\t\t\t<source media=\"(prefers-color-scheme: light)\" srcset=\"https://user-images.githubusercontent.com/1128849/49704830-49d37200-fbd5-11e8-8385-767e0cd033c3.png\">\n\t\t\t<img src=\"https://user-images.githubusercontent.com/1128849/49704830-49d37200-fbd5-11e8-8385-767e0cd033c3.png\" alt=\"CertMagic\" width=\"250\">\n\t\t</picture>\n\t</a>\n</p>\n\n<!-- Warp sponsorship requests this section -->\n<div align=\"center\" markdown=\"1\">\n\t<hr>\n\t<sup>Special thanks to:</sup>\n\t<br>\n\t<a href=\"https://go.warp.dev/caddy\">\n\t\t<img alt=\"Warp sponsorship\" width=\"400\" src=\"https://github.com/user-attachments/assets/c8efffde-18c7-4af4-83ed-b1aba2dda394\">\n\t</a>\n\n### [Warp, built for coding with multiple AI agents](https://go.warp.dev/caddy)\n[Available for MacOS, Linux, & Windows](https://go.warp.dev/caddy)<br>\n</div>\n\n<hr>\n\n### Menu\n\n- [Features](#features)\n- [Install](#install)\n- [Build from source](#build-from-source)\n\t- [For development](#for-development)\n\t- [With version information and/or plugins](#with-version-information-andor-plugins)\n- [Quick start](#quick-start)\n- [Overview](#overview)\n- [Full documentation](#full-documentation)\n- [Getting 
help](#getting-help)\n- [About](#about)\n\n\n## [Features](https://caddyserver.com/features)\n\n- **Easy configuration** with the [Caddyfile](https://caddyserver.com/docs/caddyfile)\n- **Powerful configuration** with its [native JSON config](https://caddyserver.com/docs/json/)\n- **Dynamic configuration** with the [JSON API](https://caddyserver.com/docs/api)\n- [**Config adapters**](https://caddyserver.com/docs/config-adapters) if you don't like JSON\n- **Automatic HTTPS** by default\n\t- [ZeroSSL](https://zerossl.com) and [Let's Encrypt](https://letsencrypt.org) for public names\n\t- Fully-managed local CA for internal names & IPs\n\t- Can coordinate with other Caddy instances in a cluster\n\t- Multi-issuer fallback\n\t- Encrypted ClientHello (ECH) support\n- **Stays up when other servers go down** due to TLS/OCSP/certificate-related issues\n- **Production-ready** after serving trillions of requests and managing millions of TLS certificates\n- **Scales to hundreds of thousands of sites** as proven in production\n- **HTTP/1.1, HTTP/2, and HTTP/3** all supported by default\n- **Highly extensible** [modular architecture](https://caddyserver.com/docs/architecture) lets Caddy do anything without bloat\n- **Runs anywhere** with **no external dependencies** (not even libc)\n- Written in Go, a language with higher **memory safety guarantees** than other servers\n- Actually **fun to use**\n- So much more to [discover](https://caddyserver.com/features)\n\n## Install\n\nThe simplest, cross-platform way to get started is to download Caddy from [GitHub Releases](https://github.com/caddyserver/caddy/releases) and place the executable file in your PATH.\n\nSee [our online documentation](https://caddyserver.com/docs/install) for other install instructions.\n\n## Build from source\n\nRequirements:\n\n- [Go 1.25.0 or newer](https://golang.org/dl/)\n\n### For development\n\n_**Note:** These steps [will not embed proper version information](https://github.com/golang/go/issues/29228). 
For that, please follow the instructions in the next section._\n\n```bash\n$ git clone \"https://github.com/caddyserver/caddy.git\"\n$ cd caddy/cmd/caddy/\n$ go build\n```\n\nWhen you run Caddy, it may try to bind to low ports unless otherwise specified in your config. If your OS requires elevated privileges for this, you will need to give your new binary permission to do so. On Linux, this can be done easily with: `sudo setcap cap_net_bind_service=+ep ./caddy`\n\nIf you prefer to use `go run` which only creates temporary binaries, you can still do this with the included `setcap.sh` like so:\n\n```bash\n$ go run -exec ./setcap.sh main.go\n```\n\nIf you don't want to type your password for `setcap`, use `sudo visudo` to edit your sudoers file and allow your user account to run that command without a password, for example:\n\n```\nusername ALL=(ALL:ALL) NOPASSWD: /usr/sbin/setcap\n```\n\nreplacing `username` with your actual username. Please be careful and only do this if you know what you are doing! We are only qualified to document how to use Caddy, not Go tooling or your computer, and we are providing these instructions for convenience only; please learn how to use your own computer at your own risk and make any needful adjustments.\n\nThen you can run the tests in all modules or a specific one:\n\n```bash\n$ go test ./...\n$ go test ./modules/caddyhttp/tracing/\n```\n\n### With version information and/or plugins\n\nUsing [our builder tool, `xcaddy`](https://github.com/caddyserver/xcaddy)...\n\n```bash\n$ xcaddy build\n```\n\n...the following steps are automated:\n\n1. Create a new folder: `mkdir caddy`\n2. Change into it: `cd caddy`\n3. Copy [Caddy's main.go](https://github.com/caddyserver/caddy/blob/master/cmd/caddy/main.go) into the empty folder. Add imports for any custom plugins you want to add.\n4. Initialize a Go module: `go mod init caddy`\n5. 
(Optional) Pin Caddy version: `go get github.com/caddyserver/caddy/v2@version` replacing `version` with a git tag, commit, or branch name.\n6. (Optional) Add plugins by adding their import: `_ \"import/path/here\"`\n7. Compile: `go build -tags=nobadger,nomysql,nopgx`\n\n\n\n\n## Quick start\n\nThe [Caddy website](https://caddyserver.com/docs/) has documentation that includes tutorials, quick-start guides, reference, and more.\n\n**We recommend that all users -- regardless of experience level -- do our [Getting Started](https://caddyserver.com/docs/getting-started) guide to become familiar with using Caddy.**\n\nIf you've only got a minute, [the website has several quick-start tutorials](https://caddyserver.com/docs/quick-starts) to choose from! However, after finishing a quick-start tutorial, please read more documentation to understand how the software works. 🙂\n\n\n\n\n## Overview\n\nCaddy is most often used as an HTTPS server, but it is suitable for any long-running Go program. First and foremost, it is a platform to run Go applications. Caddy \"apps\" are just Go programs that are implemented as Caddy modules. 
Two apps -- `tls` and `http` -- ship standard with Caddy.\n\nCaddy apps instantly benefit from [automated documentation](https://caddyserver.com/docs/json/), graceful on-line [config changes via API](https://caddyserver.com/docs/api), and unification with other Caddy apps.\n\nAlthough [JSON](https://caddyserver.com/docs/json/) is Caddy's native config language, Caddy can accept input from [config adapters](https://caddyserver.com/docs/config-adapters) which can essentially convert any config format of your choice into JSON: Caddyfile, JSON 5, YAML, TOML, NGINX config, and more.\n\nThe primary way to configure Caddy is through [its API](https://caddyserver.com/docs/api), but if you prefer config files, the [command-line interface](https://caddyserver.com/docs/command-line) supports those too.\n\nCaddy exposes an unprecedented level of control compared to any web server in existence. In Caddy, you are usually setting the actual values of the initialized types in memory that power everything from your HTTP handlers and TLS handshakes to your storage medium. Caddy is also ridiculously extensible, with a powerful plugin system that makes vast improvements over other web servers.\n\nTo wield the power of this design, you need to know how the config document is structured. Please see [our documentation site](https://caddyserver.com/docs/) for details about [Caddy's config structure](https://caddyserver.com/docs/json/).\n\nNearly all of Caddy's configuration is contained in a single config document, rather than being scattered across CLI flags and env variables and a configuration file as with other web servers. This makes managing your server config more straightforward and reduces hidden variables/factors.\n\n\n## Full documentation\n\nOur website has complete documentation:\n\n**https://caddyserver.com/docs/**\n\nThe docs are also open source. 
You can contribute to them here: https://github.com/caddyserver/website\n\n\n\n## Getting help\n\n- We advise companies using Caddy to secure a support contract through [Ardan Labs](https://www.ardanlabs.com) before help is needed.\n\n- A [sponsorship](https://github.com/sponsors/mholt) goes a long way! We can offer private help to sponsors. If Caddy is benefitting your company, please consider a sponsorship. This not only helps fund full-time work to ensure the longevity of the project, it provides your company the resources, support, and discounts you need; along with being a great look for your company to your customers and potential customers!\n\n- Individuals can exchange help for free on our community forum at https://caddy.community. Remember that people give help out of their spare time and good will. The best way to get help is to give it first!\n\nPlease use our [issue tracker](https://github.com/caddyserver/caddy/issues) only for bug reports and feature requests, i.e. actionable development items (support questions will usually be referred to the forums).\n\n\n\n## About\n\nMatthew Holt began developing Caddy in 2014 while studying computer science at Brigham Young University. (The name \"Caddy\" was chosen because this software helps with the tedious, mundane tasks of serving the Web, and is also a single place for multiple things to be organized together.) It soon became the first web server to use HTTPS automatically and by default, and now has hundreds of contributors and has served trillions of HTTPS requests.\n\n**The name \"Caddy\" is trademarked.** The name of the software is \"Caddy\", not \"Caddy Server\" or \"CaddyServer\". Please call it \"Caddy\" or, if you wish to clarify, \"the Caddy web server\". 
Caddy is a registered trademark of Stack Holdings GmbH.\n\n- _Project on X: [@caddyserver](https://x.com/caddyserver)_\n- _Author on X: [@mholt6](https://x.com/mholt6)_\n\nCaddy is a project of [ZeroSSL](https://zerossl.com), an HID Global company.\n\nDebian package repository hosting is graciously provided by [Cloudsmith](https://cloudsmith.com). Cloudsmith is the only fully hosted, cloud-native, universal package management solution, that enables your organization to create, store and share packages in any format, to any place, with total confidence.\n"
  },
  {
    "path": "admin.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"expvar\"\n\t\"fmt\"\n\t\"hash\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/pprof\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"regexp\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/cespare/xxhash/v2\"\n\t\"github.com/prometheus/client_golang/prometheus\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n)\n\n// testCertMagicStorageOverride is a package-level test hook. Tests may set\n// this variable to provide a temporary certmagic.Storage so that cert\n// management in tests does not hit the real default storage on disk.\n// This must NOT be set in production code.\nvar testCertMagicStorageOverride certmagic.Storage\n\nfunc init() {\n\t// The hard-coded default `DefaultAdminListen` can be overridden\n\t// by setting the `CADDY_ADMIN` environment variable.\n\t// The environment variable may be used by packagers to change\n\t// the default admin address to something more appropriate for\n\t// that platform. 
See #5317 for discussion.\n\tif env, exists := os.LookupEnv(\"CADDY_ADMIN\"); exists {\n\t\tDefaultAdminListen = env\n\t}\n}\n\n// AdminConfig configures Caddy's API endpoint, which is used\n// to manage Caddy while it is running.\ntype AdminConfig struct {\n\t// If true, the admin endpoint will be completely disabled.\n\t// Note that this makes any runtime changes to the config\n\t// impossible, since the interface to do so is through the\n\t// admin endpoint.\n\tDisabled bool `json:\"disabled,omitempty\"`\n\n\t// The address to which the admin endpoint's listener should\n\t// bind itself. Can be any single network address that can be\n\t// parsed by Caddy. Accepts placeholders.\n\t// Default: the value of the `CADDY_ADMIN` environment variable,\n\t// or `localhost:2019` otherwise.\n\t//\n\t// Remember: When changing this value through a config reload,\n\t// be sure to use the `--address` CLI flag to specify the current\n\t// admin address if the currently-running admin endpoint is not\n\t// the default address.\n\tListen string `json:\"listen,omitempty\"`\n\n\t// If true, CORS headers will be emitted, and requests to the\n\t// API will be rejected if their `Host` and `Origin` headers\n\t// do not match the expected value(s). Use `origins` to\n\t// customize which origins/hosts are allowed. If `origins` is\n\t// not set, the listen address is the only value allowed by\n\t// default. Enforced only on local (plaintext) endpoint.\n\tEnforceOrigin bool `json:\"enforce_origin,omitempty\"`\n\n\t// The list of allowed origins/hosts for API requests. Only needed\n\t// if accessing the admin endpoint from a host different from the\n\t// socket's network interface or if `enforce_origin` is true. If not\n\t// set, the listener address will be the default value. If set but\n\t// empty, no origins will be allowed. 
Enforced only on local\n\t// (plaintext) endpoint.\n\tOrigins []string `json:\"origins,omitempty\"`\n\n\t// Options pertaining to configuration management.\n\tConfig *ConfigSettings `json:\"config,omitempty\"`\n\n\t// Options that establish this server's identity. Identity refers to\n\t// credentials which can be used to uniquely identify and authenticate\n\t// this server instance. This is required if remote administration is\n\t// enabled (but does not require remote administration to be enabled).\n\t// Default: no identity management.\n\tIdentity *IdentityConfig `json:\"identity,omitempty\"`\n\n\t// Options pertaining to remote administration. By default, remote\n\t// administration is disabled. If enabled, identity management must\n\t// also be configured, as that is how the endpoint is secured.\n\t// See the neighboring \"identity\" object.\n\t//\n\t// EXPERIMENTAL: This feature is subject to change.\n\tRemote *RemoteAdmin `json:\"remote,omitempty\"`\n\n\t// Holds onto the routers so that we can later provision them\n\t// if they require provisioning.\n\trouters []AdminRouter\n}\n\n// ConfigSettings configures the management of configuration.\ntype ConfigSettings struct {\n\t// Whether to keep a copy of the active config on disk. Default is true.\n\t// Note that \"pulled\" dynamic configs (using the neighboring \"load\" module)\n\t// are not persisted; only configs that are pushed to Caddy get persisted.\n\tPersist *bool `json:\"persist,omitempty\"`\n\n\t// Loads a new configuration. This is helpful if your configs are\n\t// managed elsewhere and you want Caddy to pull its config dynamically\n\t// when it starts. The pulled config completely replaces the current\n\t// one, just like any other config load. 
It is an error if a pulled\n\t// config is configured to pull another config without a load_delay,\n\t// as this creates a tight loop.\n\t//\n\t// EXPERIMENTAL: Subject to change.\n\tLoadRaw json.RawMessage `json:\"load,omitempty\" caddy:\"namespace=caddy.config_loaders inline_key=module\"`\n\n\t// The duration after which to load config. If set, config will be pulled\n\t// from the config loader after this duration. A delay is required if a\n\t// dynamically-loaded config is configured to load yet another config. To\n\t// load configs on a regular interval, ensure this value is set the same\n\t// on all loaded configs; it can also be variable if needed, and to stop\n\t// the loop, simply remove dynamic config loading from the next-loaded\n\t// config.\n\t//\n\t// EXPERIMENTAL: Subject to change.\n\tLoadDelay Duration `json:\"load_delay,omitempty\"`\n}\n\n// IdentityConfig configures management of this server's identity. An identity\n// consists of credentials that uniquely verify this instance; for example,\n// TLS certificates (public + private key pairs).\ntype IdentityConfig struct {\n\t// List of names or IP addresses which refer to this server.\n\t// Certificates will be obtained for these identifiers so\n\t// secure TLS connections can be made using them.\n\tIdentifiers []string `json:\"identifiers,omitempty\"`\n\n\t// Issuers that can provide this admin endpoint its identity\n\t// certificate(s). Default: ACME issuers configured for\n\t// ZeroSSL and Let's Encrypt. Be sure to change this if you\n\t// require credentials for private identifiers.\n\tIssuersRaw []json.RawMessage `json:\"issuers,omitempty\" caddy:\"namespace=tls.issuance inline_key=module\"`\n\n\tissuers []certmagic.Issuer\n}\n\n// RemoteAdmin enables and configures remote administration. 
If enabled,\n// a secure listener enforcing mutual TLS authentication will be started\n// on a different port from the standard plaintext admin server.\n//\n// This endpoint is secured using identity management, which must be\n// configured separately (because identity management does not depend\n// on remote administration). See the admin/identity config struct.\n//\n// EXPERIMENTAL: Subject to change.\ntype RemoteAdmin struct {\n\t// The address on which to start the secure listener. Accepts placeholders.\n\t// Default: :2021\n\tListen string `json:\"listen,omitempty\"`\n\n\t// List of access controls for this secure admin endpoint.\n\t// This configures TLS mutual authentication (i.e. authorized\n\t// client certificates), but also application-layer permissions\n\t// like which paths and methods each identity is authorized for.\n\tAccessControl []*AdminAccess `json:\"access_control,omitempty\"`\n}\n\n// AdminAccess specifies what permissions an identity or group\n// of identities are granted.\ntype AdminAccess struct {\n\t// Base64-encoded DER certificates containing public keys to accept.\n\t// (The contents of PEM certificate blocks are base64-encoded DER.)\n\t// Any of these public keys can appear in any part of a verified chain.\n\tPublicKeys []string `json:\"public_keys,omitempty\"`\n\n\t// Limits what the associated identities are allowed to do.\n\t// If unspecified, all permissions are granted.\n\tPermissions []AdminPermissions `json:\"permissions,omitempty\"`\n\n\tpublicKeys []crypto.PublicKey\n}\n\n// AdminPermissions specifies what kinds of requests are allowed\n// to be made to the admin endpoint.\ntype AdminPermissions struct {\n\t// The API paths allowed. 
Paths are simple prefix matches.\n\t// Any subpath of the specified paths will be allowed.\n\tPaths []string `json:\"paths,omitempty\"`\n\n\t// The HTTP methods allowed for the given paths.\n\tMethods []string `json:\"methods,omitempty\"`\n}\n\n// newAdminHandler reads admin's config and returns an http.Handler suitable\n// for use in an admin endpoint server, which will be listening on listenAddr.\nfunc (admin *AdminConfig) newAdminHandler(addr NetworkAddress, remote bool, _ Context) adminHandler {\n\tmuxWrap := adminHandler{mux: http.NewServeMux()}\n\n\t// secure the local or remote endpoint respectively\n\tif remote {\n\t\tmuxWrap.remoteControl = admin.Remote\n\t} else {\n\t\t// see comment in allowedOrigins() as to why we disable the host check for unix/fd networks\n\t\tmuxWrap.enforceHost = !addr.isWildcardInterface() && !addr.IsUnixNetwork() && !addr.IsFdNetwork()\n\t\tmuxWrap.allowedOrigins = admin.allowedOrigins(addr)\n\t\tmuxWrap.enforceOrigin = admin.EnforceOrigin\n\t}\n\n\taddRouteWithMetrics := func(pattern string, handlerLabel string, h http.Handler) {\n\t\tlabels := prometheus.Labels{\"path\": pattern, \"handler\": handlerLabel}\n\t\th = instrumentHandlerCounter(\n\t\t\tadminMetrics.requestCount.MustCurryWith(labels),\n\t\t\th,\n\t\t)\n\t\tmuxWrap.mux.Handle(pattern, h)\n\t}\n\t// addRoute just calls muxWrap.mux.Handle after\n\t// wrapping the handler with error handling\n\taddRoute := func(pattern string, handlerLabel string, h AdminHandler) {\n\t\twrapper := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\terr := h.ServeHTTP(w, r)\n\t\t\tif err != nil {\n\t\t\t\tlabels := prometheus.Labels{\n\t\t\t\t\t\"path\":    pattern,\n\t\t\t\t\t\"handler\": handlerLabel,\n\t\t\t\t\t\"method\":  strings.ToUpper(r.Method),\n\t\t\t\t}\n\t\t\t\tadminMetrics.requestErrors.With(labels).Inc()\n\t\t\t}\n\t\t\tmuxWrap.handleError(w, r, err)\n\t\t})\n\t\taddRouteWithMetrics(pattern, handlerLabel, wrapper)\n\t}\n\n\tconst handlerLabel = 
\"admin\"\n\n\t// register standard config control endpoints\n\taddRoute(\"/\"+rawConfigKey+\"/\", handlerLabel, AdminHandlerFunc(handleConfig))\n\taddRoute(\"/id/\", handlerLabel, AdminHandlerFunc(handleConfigID))\n\taddRoute(\"/stop\", handlerLabel, AdminHandlerFunc(handleStop))\n\n\t// register debugging endpoints\n\taddRouteWithMetrics(\"/debug/pprof/\", handlerLabel, http.HandlerFunc(pprof.Index))\n\taddRouteWithMetrics(\"/debug/pprof/cmdline\", handlerLabel, http.HandlerFunc(pprof.Cmdline))\n\taddRouteWithMetrics(\"/debug/pprof/profile\", handlerLabel, http.HandlerFunc(pprof.Profile))\n\taddRouteWithMetrics(\"/debug/pprof/symbol\", handlerLabel, http.HandlerFunc(pprof.Symbol))\n\taddRouteWithMetrics(\"/debug/pprof/trace\", handlerLabel, http.HandlerFunc(pprof.Trace))\n\taddRouteWithMetrics(\"/debug/vars\", handlerLabel, expvar.Handler())\n\n\t// register third-party module endpoints\n\tfor _, m := range GetModules(\"admin.api\") {\n\t\trouter := m.New().(AdminRouter)\n\t\tfor _, route := range router.Routes() {\n\t\t\taddRoute(route.Pattern, handlerLabel, route.Handler)\n\t\t}\n\t\tadmin.routers = append(admin.routers, router)\n\t}\n\n\treturn muxWrap\n}\n\n// provisionAdminRouters provisions all the router modules\n// in the admin.api namespace that need provisioning.\nfunc (admin *AdminConfig) provisionAdminRouters(ctx Context) error {\n\tfor _, router := range admin.routers {\n\t\tprovisioner, ok := router.(Provisioner)\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\n\t\terr := provisioner.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// We no longer need the routers once provisioned, allow for GC\n\tadmin.routers = nil\n\n\treturn nil\n}\n\n// allowedOrigins returns a list of origins that are allowed.\n// If admin.Origins is nil (null), the provided listen address\n// will be used as the default origin. 
If admin.Origins is\n// empty, no origins will be allowed, effectively bricking the\n// endpoint for non-unix-socket endpoints, but whatever.\nfunc (admin AdminConfig) allowedOrigins(addr NetworkAddress) []*url.URL {\n\tuniqueOrigins := make(map[string]struct{})\n\tfor _, o := range admin.Origins {\n\t\tuniqueOrigins[o] = struct{}{}\n\t}\n\t// RFC 2616, Section 14.26:\n\t// \"A client MUST include a Host header field in all HTTP/1.1 request\n\t// messages. If the requested URI does not include an Internet host\n\t// name for the service being requested, then the Host header field MUST\n\t// be given with an empty value.\"\n\t//\n\t// UPDATE July 2023: Go broke this by patching a minor security bug in 1.20.6.\n\t// Understandable, but frustrating. See:\n\t// https://github.com/golang/go/issues/60374\n\t// See also the discussion here:\n\t// https://github.com/golang/go/issues/61431\n\t//\n\t// We can no longer conform to RFC 2616 Section 14.26 from either Go or curl\n\t// in purity. (Curl allowed no host between 7.40 and 7.50, but now requires a\n\t// bogus host; see https://superuser.com/a/925610.) If we disable Host/Origin\n\t// security checks, the infosec community assures me that it is secure to do\n\t// so, because:\n\t//\n\t// 1) Browsers do not allow access to unix sockets\n\t// 2) DNS is irrelevant to unix sockets\n\t//\n\t// If either of those two statements ever fail to hold true, it is not the\n\t// fault of Caddy.\n\t//\n\t// Thus, we do not fill out allowed origins and do not enforce Host\n\t// requirements for unix sockets. 
Enforcing it leads to confusion and\n\t// frustration, when UDS have their own permissions from the OS.\n\t// Enforcing host requirements here is effectively security theater,\n\t// and a false sense of security.\n\t//\n\t// See also the discussion in #6832.\n\tif admin.Origins == nil && !addr.IsUnixNetwork() && !addr.IsFdNetwork() {\n\t\tif addr.isLoopback() {\n\t\t\tuniqueOrigins[net.JoinHostPort(\"localhost\", addr.port())] = struct{}{}\n\t\t\tuniqueOrigins[net.JoinHostPort(\"::1\", addr.port())] = struct{}{}\n\t\t\tuniqueOrigins[net.JoinHostPort(\"127.0.0.1\", addr.port())] = struct{}{}\n\t\t} else {\n\t\t\tuniqueOrigins[addr.JoinHostPort(0)] = struct{}{}\n\t\t}\n\t}\n\tallowed := make([]*url.URL, 0, len(uniqueOrigins))\n\tfor originStr := range uniqueOrigins {\n\t\tvar origin *url.URL\n\t\tif strings.Contains(originStr, \"://\") {\n\t\t\tvar err error\n\t\t\torigin, err = url.Parse(originStr)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\torigin.Path = \"\"\n\t\t\torigin.RawPath = \"\"\n\t\t\torigin.Fragment = \"\"\n\t\t\torigin.RawFragment = \"\"\n\t\t\torigin.RawQuery = \"\"\n\t\t} else {\n\t\t\torigin = &url.URL{Host: originStr}\n\t\t}\n\t\tallowed = append(allowed, origin)\n\t}\n\treturn allowed\n}\n\n// replaceLocalAdminServer replaces the running local admin server\n// according to the relevant configuration in cfg. If no configuration\n// for the admin endpoint exists in cfg, a default one is used, so\n// that there is always an admin server (unless it is explicitly\n// configured to be disabled).\n// Critically note that some elements and functionality of the context\n// may not be ready, e.g. storage. 
Tread carefully.\nfunc replaceLocalAdminServer(cfg *Config, ctx Context) error {\n\t// always* be sure to close down the old admin endpoint\n\t// as gracefully as possible, even if the new one is\n\t// disabled -- careful to use reference to the current\n\t// (old) admin endpoint since it will be different\n\t// when the function returns\n\t// (* except if the new one fails to start)\n\toldAdminServer := localAdminServer\n\tvar err error\n\tdefer func() {\n\t\t// do the shutdown asynchronously so that any\n\t\t// current API request gets a response; this\n\t\t// goroutine may last a few seconds\n\t\tif oldAdminServer != nil && err == nil {\n\t\t\tgo func(oldAdminServer *http.Server) {\n\t\t\t\terr := stopAdminServer(oldAdminServer)\n\t\t\t\tif err != nil {\n\t\t\t\t\tLog().Named(\"admin\").Error(\"stopping current admin endpoint\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}(oldAdminServer)\n\t\t}\n\t}()\n\n\t// set a default if admin wasn't otherwise configured\n\tif cfg.Admin == nil {\n\t\tcfg.Admin = &AdminConfig{\n\t\t\tListen: DefaultAdminListen,\n\t\t}\n\t}\n\n\t// if new admin endpoint is to be disabled, we're done\n\tif cfg.Admin.Disabled {\n\t\tLog().Named(\"admin\").Warn(\"admin endpoint disabled\")\n\t\treturn nil\n\t}\n\n\t// extract a singular listener address\n\taddr, err := parseAdminListenAddr(cfg.Admin.Listen, DefaultAdminListen)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\thandler := cfg.Admin.newAdminHandler(addr, false, ctx)\n\n\t// run the provisioners for loaded modules to make sure local\n\t// state is properly re-initialized in the new admin server\n\terr = cfg.Admin.provisionAdminRouters(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tln, err := addr.Listen(context.TODO(), 0, net.ListenConfig{})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tserverMu.Lock()\n\tlocalAdminServer = &http.Server{\n\t\tAddr:              addr.String(), // for logging purposes only\n\t\tHandler:           handler,\n\t\tReadTimeout:       10 * 
time.Second,\n\t\tReadHeaderTimeout: 5 * time.Second,\n\t\tIdleTimeout:       60 * time.Second,\n\t\tMaxHeaderBytes:    1024 * 64,\n\t}\n\tserverMu.Unlock()\n\n\tadminLogger := Log().Named(\"admin\")\n\tgo func() {\n\t\tserverMu.Lock()\n\t\tserver := localAdminServer\n\t\tserverMu.Unlock()\n\t\tif err := server.Serve(ln.(net.Listener)); !errors.Is(err, http.ErrServerClosed) {\n\t\t\tadminLogger.Error(\"admin server shutdown for unknown reason\", zap.Error(err))\n\t\t}\n\t}()\n\n\tadminLogger.Info(\"admin endpoint started\",\n\t\tzap.String(\"address\", addr.String()),\n\t\tzap.Bool(\"enforce_origin\", cfg.Admin.EnforceOrigin),\n\t\tzap.Array(\"origins\", loggableURLArray(handler.allowedOrigins)))\n\n\tif !handler.enforceHost {\n\t\tadminLogger.Warn(\"admin endpoint on open interface; host checking disabled\",\n\t\t\tzap.String(\"address\", addr.String()))\n\t}\n\n\treturn nil\n}\n\n// manageIdentity sets up automated identity management for this server.\nfunc manageIdentity(ctx Context, cfg *Config) error {\n\tif cfg == nil || cfg.Admin == nil || cfg.Admin.Identity == nil {\n\t\treturn nil\n\t}\n\n\t// set default issuers; this is pretty hacky because we can't\n\t// import the caddytls package -- but it works\n\tif cfg.Admin.Identity.IssuersRaw == nil {\n\t\tcfg.Admin.Identity.IssuersRaw = []json.RawMessage{\n\t\t\tjson.RawMessage(`{\"module\": \"acme\"}`),\n\t\t}\n\t}\n\n\t// load and provision issuer modules\n\tif cfg.Admin.Identity.IssuersRaw != nil {\n\t\tval, err := ctx.LoadModule(cfg.Admin.Identity, \"IssuersRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading identity issuer modules: %s\", err)\n\t\t}\n\t\tfor _, issVal := range val.([]any) {\n\t\t\tcfg.Admin.Identity.issuers = append(cfg.Admin.Identity.issuers, issVal.(certmagic.Issuer))\n\t\t}\n\t}\n\n\t// we'll make a new cache when we make the CertMagic config, so stop any previous cache\n\tif identityCertCache != nil {\n\t\tidentityCertCache.Stop()\n\t}\n\n\tlogger := 
Log().Named(\"admin.identity\")\n\tcmCfg := cfg.Admin.Identity.certmagicConfig(logger, true)\n\n\t// issuers have circular dependencies with the configs because,\n\t// as explained in the caddytls package, they need access to the\n\t// correct storage and cache to solve ACME challenges\n\tfor _, issuer := range cfg.Admin.Identity.issuers {\n\t\t// avoid import cycle with caddytls package, so manually duplicate the interface here, yuck\n\t\tif annoying, ok := issuer.(interface{ SetConfig(cfg *certmagic.Config) }); ok {\n\t\t\tannoying.SetConfig(cmCfg)\n\t\t}\n\t}\n\n\t// obtain and renew server identity certificate(s)\n\treturn cmCfg.ManageAsync(ctx, cfg.Admin.Identity.Identifiers)\n}\n\n// replaceRemoteAdminServer replaces the running remote admin server\n// according to the relevant configuration in cfg. It stops any previous\n// remote admin server and only starts a new one if configured.\nfunc replaceRemoteAdminServer(ctx Context, cfg *Config) error {\n\tif cfg == nil {\n\t\treturn nil\n\t}\n\n\tremoteLogger := Log().Named(\"admin.remote\")\n\n\toldAdminServer := remoteAdminServer\n\tdefer func() {\n\t\tif oldAdminServer != nil {\n\t\t\tgo func(oldAdminServer *http.Server) {\n\t\t\t\terr := stopAdminServer(oldAdminServer)\n\t\t\t\tif err != nil {\n\t\t\t\t\tLog().Named(\"admin\").Error(\"stopping current secure admin endpoint\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}(oldAdminServer)\n\t\t}\n\t}()\n\n\tif cfg.Admin == nil || cfg.Admin.Remote == nil {\n\t\treturn nil\n\t}\n\n\taddr, err := parseAdminListenAddr(cfg.Admin.Remote.Listen, DefaultRemoteAdminListen)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// make the HTTP handler but disable Host/Origin enforcement\n\t// because we are using TLS authentication instead\n\thandler := cfg.Admin.newAdminHandler(addr, true, ctx)\n\n\t// run the provisioners for loaded modules to make sure local\n\t// state is properly re-initialized in the new admin server\n\terr = cfg.Admin.provisionAdminRouters(ctx)\n\tif err != nil 
{\n\t\treturn err\n\t}\n\n\t// create client certificate pool for TLS mutual auth, and extract public keys\n\t// so that we can enforce access controls at the application layer\n\tclientCertPool := x509.NewCertPool()\n\tfor i, accessControl := range cfg.Admin.Remote.AccessControl {\n\t\tfor j, certBase64 := range accessControl.PublicKeys {\n\t\t\tcert, err := decodeBase64DERCert(certBase64)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"access control %d public key %d: parsing base64 certificate DER: %v\", i, j, err)\n\t\t\t}\n\t\t\taccessControl.publicKeys = append(accessControl.publicKeys, cert.PublicKey)\n\t\t\tclientCertPool.AddCert(cert)\n\t\t}\n\t}\n\n\t// create TLS config that will enforce mutual authentication\n\tif identityCertCache == nil {\n\t\treturn fmt.Errorf(\"cannot enable remote admin without a certificate cache; configure identity management to initialize a certificate cache\")\n\t}\n\tcmCfg := cfg.Admin.Identity.certmagicConfig(remoteLogger, false)\n\ttlsConfig := cmCfg.TLSConfig()\n\ttlsConfig.NextProtos = nil // this server does not solve ACME challenges\n\ttlsConfig.ClientAuth = tls.RequireAndVerifyClientCert\n\ttlsConfig.ClientCAs = clientCertPool\n\n\t// convert logger to stdlib so it can be used by HTTP server\n\tserverLogger, err := zap.NewStdLogAt(remoteLogger, zap.DebugLevel)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tserverMu.Lock()\n\t// create secure HTTP server\n\tremoteAdminServer = &http.Server{\n\t\tAddr:              addr.String(), // for logging purposes only\n\t\tHandler:           handler,\n\t\tTLSConfig:         tlsConfig,\n\t\tReadTimeout:       10 * time.Second,\n\t\tReadHeaderTimeout: 5 * time.Second,\n\t\tIdleTimeout:       60 * time.Second,\n\t\tMaxHeaderBytes:    1024 * 64,\n\t\tErrorLog:          serverLogger,\n\t}\n\tserverMu.Unlock()\n\n\t// start listener\n\tlnAny, err := addr.Listen(ctx, 0, net.ListenConfig{})\n\tif err != nil {\n\t\treturn err\n\t}\n\tln := lnAny.(net.Listener)\n\tln = tls.NewListener(ln, 
tlsConfig)\n\n\tgo func() {\n\t\tserverMu.Lock()\n\t\tserver := remoteAdminServer\n\t\tserverMu.Unlock()\n\t\tif err := server.Serve(ln); !errors.Is(err, http.ErrServerClosed) {\n\t\t\tremoteLogger.Error(\"admin remote server shutdown for unknown reason\", zap.Error(err))\n\t\t}\n\t}()\n\n\tremoteLogger.Info(\"secure admin remote control endpoint started\",\n\t\tzap.String(\"address\", addr.String()))\n\n\treturn nil\n}\n\nfunc (ident *IdentityConfig) certmagicConfig(logger *zap.Logger, makeCache bool) *certmagic.Config {\n\tvar cmCfg *certmagic.Config\n\tif ident == nil {\n\t\t// user might not have configured identity; that's OK, we can still make a\n\t\t// certmagic config, although it'll be mostly useless for remote management\n\t\tident = new(IdentityConfig)\n\t}\n\t// Choose storage: prefer the package-level test override when present,\n\t// otherwise use the configured DefaultStorage. Tests may set an override\n\t// to divert storage into a temporary location. Otherwise, in production\n\t// we use the DefaultStorage since we don't want to act as part of a\n\t// cluster; this storage is for the server's local identity only.\n\tvar storage certmagic.Storage\n\tif testCertMagicStorageOverride != nil {\n\t\tstorage = testCertMagicStorageOverride\n\t} else {\n\t\tstorage = DefaultStorage\n\t}\n\ttemplate := certmagic.Config{\n\t\tStorage: storage,\n\t\tLogger:  logger,\n\t\tIssuers: ident.issuers,\n\t}\n\tif makeCache {\n\t\tidentityCertCache = certmagic.NewCache(certmagic.CacheOptions{\n\t\t\tGetConfigForCert: func(certmagic.Certificate) (*certmagic.Config, error) {\n\t\t\t\treturn cmCfg, nil\n\t\t\t},\n\t\t\tLogger: logger.Named(\"cache\"),\n\t\t})\n\t}\n\tcmCfg = certmagic.New(identityCertCache, template)\n\treturn cmCfg\n}\n\n// IdentityCredentials returns this instance's configured, managed identity credentials\n// that can be used in TLS client authentication.\nfunc (ctx Context) IdentityCredentials(logger *zap.Logger) ([]tls.Certificate, error) {\n\tif 
ctx.cfg == nil || ctx.cfg.Admin == nil || ctx.cfg.Admin.Identity == nil {\n\t\treturn nil, fmt.Errorf(\"no server identity configured\")\n\t}\n\tident := ctx.cfg.Admin.Identity\n\tif len(ident.Identifiers) == 0 {\n\t\treturn nil, fmt.Errorf(\"no identifiers configured\")\n\t}\n\tif logger == nil {\n\t\tlogger = Log()\n\t}\n\tmagic := ident.certmagicConfig(logger, false)\n\treturn magic.ClientCredentials(ctx, ident.Identifiers)\n}\n\n// enforceAccessControls enforces application-layer access controls for r based on remote.\n// It expects that the TLS server has already established at least one verified chain of\n// trust, and then looks for a matching, authorized public key that is allowed to access\n// the defined path(s) using the defined method(s).\nfunc (remote RemoteAdmin) enforceAccessControls(r *http.Request) error {\n\tfor _, chain := range r.TLS.VerifiedChains {\n\t\tfor _, peerCert := range chain {\n\t\t\tfor _, adminAccess := range remote.AccessControl {\n\t\t\t\tfor _, allowedKey := range adminAccess.publicKeys {\n\t\t\t\t\t// see if we found a matching public key; the TLS server already verified the chain\n\t\t\t\t\t// so we know the client possesses the associated private key; this handy interface\n\t\t\t\t\t// doesn't appear to be defined anywhere in the std lib, but was implemented here:\n\t\t\t\t\t// https://github.com/golang/go/commit/b5f2c0f50297fa5cd14af668ddd7fd923626cf8c\n\t\t\t\t\tcomparer, ok := peerCert.PublicKey.(interface{ Equal(crypto.PublicKey) bool })\n\t\t\t\t\tif !ok || !comparer.Equal(allowedKey) {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\n\t\t\t\t\t// key recognized; make sure its HTTP request is permitted\n\t\t\t\t\tfor _, accessPerm := range adminAccess.Permissions {\n\t\t\t\t\t\t// verify method\n\t\t\t\t\t\tmethodFound := accessPerm.Methods == nil || slices.Contains(accessPerm.Methods, r.Method)\n\t\t\t\t\t\tif !methodFound {\n\t\t\t\t\t\t\treturn APIError{\n\t\t\t\t\t\t\t\tHTTPStatus: http.StatusForbidden,\n\t\t\t\t\t\t\t\tMessage: 
   \"not authorized to use this method\",\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// verify path\n\t\t\t\t\t\tpathFound := accessPerm.Paths == nil\n\t\t\t\t\t\tfor _, allowedPath := range accessPerm.Paths {\n\t\t\t\t\t\t\tif strings.HasPrefix(r.URL.Path, allowedPath) {\n\t\t\t\t\t\t\t\tpathFound = true\n\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif !pathFound {\n\t\t\t\t\t\t\treturn APIError{\n\t\t\t\t\t\t\t\tHTTPStatus: http.StatusForbidden,\n\t\t\t\t\t\t\t\tMessage:    \"not authorized to access this path\",\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t// public key authorized, method and path allowed\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// in theory, this should never happen; with an unverified chain, the TLS server\n\t// should not accept the connection in the first place, and the acceptable cert\n\t// pool is configured using the same list of public keys we verify against\n\treturn APIError{\n\t\tHTTPStatus: http.StatusUnauthorized,\n\t\tMessage:    \"client identity not authorized\",\n\t}\n}\n\nfunc stopAdminServer(srv *http.Server) error {\n\tif srv == nil {\n\t\treturn fmt.Errorf(\"no admin server\")\n\t}\n\ttimeout := 10 * time.Second\n\tctx, cancel := context.WithTimeoutCause(context.Background(), timeout, fmt.Errorf(\"stopping admin server: %ds timeout\", int(timeout.Seconds())))\n\tdefer cancel()\n\terr := srv.Shutdown(ctx)\n\tif err != nil {\n\t\tif cause := context.Cause(ctx); cause != nil && errors.Is(err, context.DeadlineExceeded) {\n\t\t\terr = cause\n\t\t}\n\t\treturn fmt.Errorf(\"shutting down admin server: %v\", err)\n\t}\n\tLog().Named(\"admin\").Info(\"stopped previous server\", zap.String(\"address\", srv.Addr))\n\treturn nil\n}\n\n// AdminRouter is a type which can return routes for the admin API.\ntype AdminRouter interface {\n\tRoutes() []AdminRoute\n}\n\n// AdminRoute represents a route for the admin endpoint.\ntype AdminRoute struct {\n\tPattern string\n\tHandler 
AdminHandler\n}\n\ntype adminHandler struct {\n\tmux *http.ServeMux\n\n\t// security for local/plaintext endpoint\n\tenforceOrigin  bool\n\tenforceHost    bool\n\tallowedOrigins []*url.URL\n\n\t// security for remote/encrypted endpoint\n\tremoteControl *RemoteAdmin\n}\n\n// ServeHTTP is the external entry point for API requests.\n// It will only be called once per request.\nfunc (h adminHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {\n\tip, port, err := net.SplitHostPort(r.RemoteAddr)\n\tif err != nil {\n\t\tip = r.RemoteAddr\n\t\tport = \"\"\n\t}\n\tlog := Log().Named(\"admin.api\").With(\n\t\tzap.String(\"method\", r.Method),\n\t\tzap.String(\"host\", r.Host),\n\t\tzap.String(\"uri\", r.RequestURI),\n\t\tzap.String(\"remote_ip\", ip),\n\t\tzap.String(\"remote_port\", port),\n\t\tzap.Reflect(\"headers\", r.Header),\n\t)\n\tif r.TLS != nil {\n\t\tlog = log.With(\n\t\t\tzap.Bool(\"secure\", true),\n\t\t\tzap.Int(\"verified_chains\", len(r.TLS.VerifiedChains)),\n\t\t)\n\t}\n\tif r.RequestURI == \"/metrics\" {\n\t\tlog.Debug(\"received request\")\n\t} else {\n\t\tlog.Info(\"received request\")\n\t}\n\th.serveHTTP(w, r)\n}\n\n// serveHTTP is the internal entry point for API requests. It may\n// be called more than once per request, for example if a request\n// is rewritten (i.e. 
internal redirect).\nfunc (h adminHandler) serveHTTP(w http.ResponseWriter, r *http.Request) {\n\tif h.remoteControl != nil {\n\t\t// enforce access controls on secure endpoint\n\t\tif err := h.remoteControl.enforceAccessControls(r); err != nil {\n\t\t\th.handleError(w, r, err)\n\t\t\treturn\n\t\t}\n\t}\n\n\t// common mitigations in browser contexts\n\tif strings.Contains(r.Header.Get(\"Upgrade\"), \"websocket\") {\n\t\t// I've never been able to demonstrate a vulnerability myself, but apparently\n\t\t// WebSocket connections originating from browsers aren't subject to CORS\n\t\t// restrictions, so we'll just be on the safe side\n\t\th.handleError(w, r, APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        errors.New(\"websocket connections aren't allowed\"),\n\t\t\tMessage:    \"WebSocket connections aren't allowed.\",\n\t\t})\n\t\treturn\n\t}\n\tif strings.Contains(r.Header.Get(\"Sec-Fetch-Mode\"), \"no-cors\") {\n\t\t// turns out web pages can just disable the same-origin policy (!???!?)\n\t\t// but at least browsers let us know that's the case, holy heck\n\t\th.handleError(w, r, APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        errors.New(\"client attempted to make request by disabling same-origin policy using no-cors mode\"),\n\t\t\tMessage:    \"Disabling same-origin restrictions is not allowed.\",\n\t\t})\n\t\treturn\n\t}\n\tif r.Header.Get(\"Origin\") == \"null\" {\n\t\t// bug in Firefox in certain cross-origin situations (yikes?)\n\t\t// (not strictly a security vuln on its own, but it's red flaggy,\n\t\t// since it seems to manifest in cross-origin contexts)\n\t\th.handleError(w, r, APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        errors.New(\"invalid origin 'null'\"),\n\t\t\tMessage:    \"Buggy browser is sending null Origin header.\",\n\t\t})\n\t\t// return here so we don't fall through and write a second response\n\t\treturn\n\t}\n\n\tif h.enforceHost {\n\t\t// DNS rebinding mitigation\n\t\terr := h.checkHost(r)\n\t\tif err != nil {\n\t\t\th.handleError(w, r, 
err)\n\t\t\treturn\n\t\t}\n\t}\n\n\t_, hasOriginHeader := r.Header[\"Origin\"]\n\t_, hasSecHeader := r.Header[\"Sec-Fetch-Mode\"]\n\tif h.enforceOrigin || hasOriginHeader || hasSecHeader {\n\t\t// cross-site mitigation\n\t\torigin, err := h.checkOrigin(r)\n\t\tif err != nil {\n\t\t\th.handleError(w, r, err)\n\t\t\treturn\n\t\t}\n\n\t\tif r.Method == http.MethodOptions {\n\t\t\tw.Header().Set(\"Access-Control-Allow-Methods\", \"OPTIONS, GET, POST, PUT, PATCH, DELETE\")\n\t\t\tw.Header().Set(\"Access-Control-Allow-Headers\", \"Content-Type, Content-Length, Cache-Control\")\n\t\t\tw.Header().Set(\"Access-Control-Allow-Credentials\", \"true\")\n\t\t}\n\t\tw.Header().Set(\"Access-Control-Allow-Origin\", origin)\n\t}\n\n\th.mux.ServeHTTP(w, r)\n}\n\nfunc (h adminHandler) handleError(w http.ResponseWriter, r *http.Request, err error) {\n\tif err == nil {\n\t\treturn\n\t}\n\tif err == errInternalRedir {\n\t\th.serveHTTP(w, r)\n\t\treturn\n\t}\n\n\tapiErr, ok := err.(APIError)\n\tif !ok {\n\t\tapiErr = APIError{\n\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\tErr:        err,\n\t\t}\n\t}\n\tif apiErr.HTTPStatus == 0 {\n\t\tapiErr.HTTPStatus = http.StatusInternalServerError\n\t}\n\tif apiErr.Message == \"\" && apiErr.Err != nil {\n\t\tapiErr.Message = apiErr.Err.Error()\n\t}\n\n\tLog().Named(\"admin.api\").Error(\"request error\",\n\t\tzap.Error(err),\n\t\tzap.Int(\"status_code\", apiErr.HTTPStatus),\n\t)\n\n\tw.Header().Set(\"Content-Type\", \"application/json\")\n\tw.WriteHeader(apiErr.HTTPStatus)\n\tencErr := json.NewEncoder(w).Encode(apiErr)\n\tif encErr != nil {\n\t\tLog().Named(\"admin.api\").Error(\"failed to encode error response\", zap.Error(encErr))\n\t}\n}\n\n// checkHost returns an error if the request's Host header does\n// not match a trustworthy/expected value. 
This helps to mitigate DNS\n// rebinding attacks.\nfunc (h adminHandler) checkHost(r *http.Request) error {\n\tallowed := slices.ContainsFunc(h.allowedOrigins, func(u *url.URL) bool {\n\t\treturn r.Host == u.Host\n\t})\n\tif !allowed {\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusForbidden,\n\t\t\tErr:        fmt.Errorf(\"host not allowed: %s\", r.Host),\n\t\t}\n\t}\n\treturn nil\n}\n\n// checkOrigin ensures that the Origin header, if\n// set, matches the intended target; prevents arbitrary\n// sites from issuing requests to our listener. It\n// returns the origin that was obtained from r.\nfunc (h adminHandler) checkOrigin(r *http.Request) (string, error) {\n\toriginStr, origin := h.getOrigin(r)\n\tif origin == nil {\n\t\treturn \"\", APIError{\n\t\t\tHTTPStatus: http.StatusForbidden,\n\t\t\tErr:        fmt.Errorf(\"required Origin header is missing or invalid\"),\n\t\t}\n\t}\n\tif !h.originAllowed(origin) {\n\t\treturn \"\", APIError{\n\t\t\tHTTPStatus: http.StatusForbidden,\n\t\t\tErr:        fmt.Errorf(\"client is not allowed to access from origin '%s'\", originStr),\n\t\t}\n\t}\n\treturn origin.String(), nil\n}\n\nfunc (h adminHandler) getOrigin(r *http.Request) (string, *url.URL) {\n\torigin := r.Header.Get(\"Origin\")\n\tif origin == \"\" {\n\t\torigin = r.Header.Get(\"Referer\")\n\t}\n\toriginURL, err := url.Parse(origin)\n\tif err != nil {\n\t\treturn origin, nil\n\t}\n\toriginURL.Path = \"\"\n\toriginURL.RawPath = \"\"\n\toriginURL.Fragment = \"\"\n\toriginURL.RawFragment = \"\"\n\toriginURL.RawQuery = \"\"\n\treturn origin, originURL\n}\n\nfunc (h adminHandler) originAllowed(origin *url.URL) bool {\n\tfor _, allowedOrigin := range h.allowedOrigins {\n\t\tif allowedOrigin.Scheme != \"\" && origin.Scheme != allowedOrigin.Scheme {\n\t\t\tcontinue\n\t\t}\n\t\tif origin.Host == allowedOrigin.Host {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// etagHasher returns the hasher we used on the config to both\n// produce and verify ETags.\nfunc 
etagHasher() hash.Hash { return xxhash.New() }\n\n// makeEtag returns an Etag header value (including quotes) for\n// the given config path and hash of contents at that path.\nfunc makeEtag(path string, hash hash.Hash) string {\n\treturn fmt.Sprintf(`\"%s %x\"`, path, hash.Sum(nil))\n}\n\n// This buffer pool is used to keep buffers for\n// reading the config file during eTag header generation\nvar bufferPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n\nfunc handleConfig(w http.ResponseWriter, r *http.Request) error {\n\tswitch r.Method {\n\tcase http.MethodGet:\n\t\tw.Header().Set(\"Content-Type\", \"application/json\")\n\t\thash := etagHasher()\n\n\t\t// Read the config into a buffer instead of writing directly to\n\t\t// the response writer, as we want to set the ETag as the header,\n\t\t// not the trailer.\n\t\tbuf := bufferPool.Get().(*bytes.Buffer)\n\t\tbuf.Reset()\n\t\tdefer bufferPool.Put(buf)\n\n\t\tconfigWriter := io.MultiWriter(buf, hash)\n\t\terr := readConfig(r.URL.Path, configWriter)\n\t\tif err != nil {\n\t\t\treturn APIError{HTTPStatus: http.StatusBadRequest, Err: err}\n\t\t}\n\n\t\t// we could consider setting up a sync.Pool for the summed\n\t\t// hashes to reduce GC pressure.\n\t\tw.Header().Set(\"Etag\", makeEtag(r.URL.Path, hash))\n\t\t_, err = w.Write(buf.Bytes())\n\t\tif err != nil {\n\t\t\treturn APIError{HTTPStatus: http.StatusInternalServerError, Err: err}\n\t\t}\n\n\t\treturn nil\n\n\tcase http.MethodPost,\n\t\thttp.MethodPut,\n\t\thttp.MethodPatch,\n\t\thttp.MethodDelete:\n\n\t\t// DELETE does not use a body, but the others do\n\t\tvar body []byte\n\t\tif r.Method != http.MethodDelete {\n\t\t\tif ct := r.Header.Get(\"Content-Type\"); !strings.Contains(ct, \"/json\") {\n\t\t\t\treturn APIError{\n\t\t\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\t\t\tErr:        fmt.Errorf(\"unacceptable content-type: %v; 'application/json' required\", ct),\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tbuf := 
bufPool.Get().(*bytes.Buffer)\n\t\t\tbuf.Reset()\n\t\t\tdefer bufPool.Put(buf)\n\n\t\t\t_, err := io.Copy(buf, r.Body)\n\t\t\tif err != nil {\n\t\t\t\treturn APIError{\n\t\t\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\t\t\tErr:        fmt.Errorf(\"reading request body: %v\", err),\n\t\t\t\t}\n\t\t\t}\n\t\t\tbody = buf.Bytes()\n\t\t}\n\n\t\tforceReload := r.Header.Get(\"Cache-Control\") == \"must-revalidate\"\n\n\t\terr := changeConfig(r.Method, r.URL.Path, body, r.Header.Get(\"If-Match\"), forceReload)\n\t\tif err != nil && !errors.Is(err, errSameConfig) {\n\t\t\treturn err\n\t\t}\n\n\t\t// If this request changed the config, clear the last\n\t\t// config info we have stored, if it is different from\n\t\t// the original source.\n\t\tClearLastConfigIfDifferent(\n\t\t\tr.Header.Get(\"Caddy-Config-Source-File\"),\n\t\t\tr.Header.Get(\"Caddy-Config-Source-Adapter\"))\n\n\tdefault:\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method %s not allowed\", r.Method),\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc handleConfigID(w http.ResponseWriter, r *http.Request) error {\n\tidPath := r.URL.Path\n\n\tparts := strings.Split(idPath, \"/\")\n\tif len(parts) < 3 || parts[2] == \"\" {\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"request path is missing object ID\"),\n\t\t}\n\t}\n\tif parts[0] != \"\" || parts[1] != \"id\" {\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"malformed object path\"),\n\t\t}\n\t}\n\tid := parts[2]\n\n\t// map the ID to the expanded path\n\trawCfgMu.RLock()\n\texpanded, ok := rawCfgIndex[id]\n\trawCfgMu.RUnlock()\n\tif !ok {\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusNotFound,\n\t\t\tErr:        fmt.Errorf(\"unknown object ID '%s'\", id),\n\t\t}\n\t}\n\n\t// piece the full URL path back together\n\tparts = append([]string{expanded}, parts[3:]...)\n\tr.URL.Path = path.Join(parts...)\n\n\treturn 
errInternalRedir\n}\n\nfunc handleStop(w http.ResponseWriter, r *http.Request) error {\n\tif r.Method != http.MethodPost {\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method not allowed\"),\n\t\t}\n\t}\n\n\texitProcess(context.Background(), Log().Named(\"admin.api\"))\n\treturn nil\n}\n\n// unsyncedConfigAccess traverses into the current config and performs\n// the operation at path according to method, using body and out as\n// needed. This is a low-level, unsynchronized function; most callers\n// will want to use changeConfig or readConfig instead. This requires a\n// read or write lock on currentCtxMu, depending on method (GET needs\n// only a read lock; all others need a write lock).\nfunc unsyncedConfigAccess(method, path string, body []byte, out io.Writer) error {\n\tvar err error\n\tvar val any\n\n\t// if there is a request body, decode it into the\n\t// variable that will be set in the config according\n\t// to method and path\n\tif len(body) > 0 {\n\t\terr = json.Unmarshal(body, &val)\n\t\tif err != nil {\n\t\t\tif jsonErr, ok := err.(*json.SyntaxError); ok {\n\t\t\t\treturn fmt.Errorf(\"decoding request body: %w, at offset %d\", jsonErr, jsonErr.Offset)\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"decoding request body: %w\", err)\n\t\t}\n\t}\n\n\tenc := json.NewEncoder(out)\n\n\tcleanPath := strings.Trim(path, \"/\")\n\tif cleanPath == \"\" {\n\t\treturn fmt.Errorf(\"no traversable path\")\n\t}\n\n\tparts := strings.Split(cleanPath, \"/\")\n\tif len(parts) == 0 {\n\t\treturn fmt.Errorf(\"path missing\")\n\t}\n\n\t// A path that ends with \"...\" implies:\n\t// 1) the part before it is an array\n\t// 2) the payload is an array\n\t// and means that the user wants to expand the elements\n\t// in the payload array and append each one into the\n\t// destination array, like so:\n\t//     array = append(array, elems...)\n\t// This special case is handled below.\n\tellipses := parts[len(parts)-1] == \"...\"\n\tif 
ellipses {\n\t\tparts = parts[:len(parts)-1]\n\t}\n\n\tvar ptr any = rawCfg\n\ntraverseLoop:\n\tfor i, part := range parts {\n\t\tswitch v := ptr.(type) {\n\t\tcase map[string]any:\n\t\t\t// if the next part enters a slice, and the slice is our destination,\n\t\t\t// handle it specially (because appending to the slice copies the slice\n\t\t\t// header, which does not replace the original one like we want)\n\t\t\tif arr, ok := v[part].([]any); ok && i == len(parts)-2 {\n\t\t\t\tvar idx int\n\t\t\t\tif method != http.MethodPost {\n\t\t\t\t\tidxStr := parts[len(parts)-1]\n\t\t\t\t\tidx, err = strconv.Atoi(idxStr)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"[%s] invalid array index '%s': %v\",\n\t\t\t\t\t\t\tpath, idxStr, err)\n\t\t\t\t\t}\n\t\t\t\t\tif idx < 0 || (method != http.MethodPut && idx >= len(arr)) || idx > len(arr) {\n\t\t\t\t\t\treturn fmt.Errorf(\"[%s] array index out of bounds: %s\", path, idxStr)\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tswitch method {\n\t\t\t\tcase http.MethodGet:\n\t\t\t\t\terr = enc.Encode(arr[idx])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"encoding config: %v\", err)\n\t\t\t\t\t}\n\t\t\t\tcase http.MethodPost:\n\t\t\t\t\tif ellipses {\n\t\t\t\t\t\tvalArray, ok := val.([]any)\n\t\t\t\t\t\tif !ok {\n\t\t\t\t\t\t\treturn fmt.Errorf(\"final element is not an array\")\n\t\t\t\t\t\t}\n\t\t\t\t\t\tv[part] = append(arr, valArray...)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tv[part] = append(arr, val)\n\t\t\t\t\t}\n\t\t\t\tcase http.MethodPut:\n\t\t\t\t\t// avoid creation of new slice and a second copy (see\n\t\t\t\t\t// https://github.com/golang/go/wiki/SliceTricks#insert)\n\t\t\t\t\tarr = append(arr, nil)\n\t\t\t\t\tcopy(arr[idx+1:], arr[idx:])\n\t\t\t\t\tarr[idx] = val\n\t\t\t\t\tv[part] = arr\n\t\t\t\tcase http.MethodPatch:\n\t\t\t\t\tarr[idx] = val\n\t\t\t\tcase http.MethodDelete:\n\t\t\t\t\tv[part] = append(arr[:idx], arr[idx+1:]...)\n\t\t\t\tdefault:\n\t\t\t\t\treturn fmt.Errorf(\"unrecognized method %s\", 
method)\n\t\t\t\t}\n\t\t\t\tbreak traverseLoop\n\t\t\t}\n\n\t\t\tif i == len(parts)-1 {\n\t\t\t\tswitch method {\n\t\t\t\tcase http.MethodGet:\n\t\t\t\t\terr = enc.Encode(v[part])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"encoding config: %v\", err)\n\t\t\t\t\t}\n\t\t\t\tcase http.MethodPost:\n\t\t\t\t\t// if the part is an existing list, POST appends to\n\t\t\t\t\t// it, otherwise it just sets or creates the value\n\t\t\t\t\tif arr, ok := v[part].([]any); ok {\n\t\t\t\t\t\tif ellipses {\n\t\t\t\t\t\t\tvalArray, ok := val.([]any)\n\t\t\t\t\t\t\tif !ok {\n\t\t\t\t\t\t\t\treturn fmt.Errorf(\"final element is not an array\")\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tv[part] = append(arr, valArray...)\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tv[part] = append(arr, val)\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tv[part] = val\n\t\t\t\t\t}\n\t\t\t\tcase http.MethodPut:\n\t\t\t\t\tif _, ok := v[part]; ok {\n\t\t\t\t\t\treturn APIError{\n\t\t\t\t\t\t\tHTTPStatus: http.StatusConflict,\n\t\t\t\t\t\t\tErr:        fmt.Errorf(\"[%s] key already exists: %s\", path, part),\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tv[part] = val\n\t\t\t\tcase http.MethodPatch:\n\t\t\t\t\tif _, ok := v[part]; !ok {\n\t\t\t\t\t\treturn APIError{\n\t\t\t\t\t\t\tHTTPStatus: http.StatusNotFound,\n\t\t\t\t\t\t\tErr:        fmt.Errorf(\"[%s] key does not exist: %s\", path, part),\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tv[part] = val\n\t\t\t\tcase http.MethodDelete:\n\t\t\t\t\tif _, ok := v[part]; !ok {\n\t\t\t\t\t\treturn APIError{\n\t\t\t\t\t\t\tHTTPStatus: http.StatusNotFound,\n\t\t\t\t\t\t\tErr:        fmt.Errorf(\"[%s] key does not exist: %s\", path, part),\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tdelete(v, part)\n\t\t\t\tdefault:\n\t\t\t\t\treturn fmt.Errorf(\"unrecognized method %s\", method)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// if we are \"PUTting\" a new resource, the key(s) in its path\n\t\t\t\t// might not exist yet; that's OK but we need to make them as\n\t\t\t\t// we go, while we still 
have a pointer from the level above\n\t\t\t\tif v[part] == nil && method == http.MethodPut {\n\t\t\t\t\tv[part] = make(map[string]any)\n\t\t\t\t}\n\t\t\t\tptr = v[part]\n\t\t\t}\n\n\t\tcase []any:\n\t\t\tpartInt, err := strconv.Atoi(part)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"[/%s] invalid array index '%s': %v\",\n\t\t\t\t\tstrings.Join(parts[:i+1], \"/\"), part, err)\n\t\t\t}\n\t\t\tif partInt < 0 || partInt >= len(v) {\n\t\t\t\treturn fmt.Errorf(\"[/%s] array index out of bounds: %s\",\n\t\t\t\t\tstrings.Join(parts[:i+1], \"/\"), part)\n\t\t\t}\n\t\t\tptr = v[partInt]\n\n\t\tdefault:\n\t\t\treturn fmt.Errorf(\"invalid traversal path at: %s\", strings.Join(parts[:i+1], \"/\"))\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// RemoveMetaFields removes meta fields like \"@id\" from a JSON message\n// by using a simple regular expression. (An alternate way to do this\n// would be to delete them from the raw, map[string]any\n// representation as they are indexed, then iterate the index we made\n// and add them back after encoding as JSON, but this is simpler.)\nfunc RemoveMetaFields(rawJSON []byte) []byte {\n\treturn idRegexp.ReplaceAllFunc(rawJSON, func(in []byte) []byte {\n\t\t// matches with a comma on both sides (when \"@id\" property is\n\t\t// not the first or last in the object) need to keep exactly\n\t\t// one comma for correct JSON syntax\n\t\tcomma := []byte{','}\n\t\tif bytes.HasPrefix(in, comma) && bytes.HasSuffix(in, comma) {\n\t\t\treturn comma\n\t\t}\n\t\treturn []byte{}\n\t})\n}\n\n// AdminHandler is like http.Handler except ServeHTTP may return an error.\n//\n// If any handler encounters an error, it should be returned for proper\n// handling.\ntype AdminHandler interface {\n\tServeHTTP(http.ResponseWriter, *http.Request) error\n}\n\n// AdminHandlerFunc is a convenience type like http.HandlerFunc.\ntype AdminHandlerFunc func(http.ResponseWriter, *http.Request) error\n\n// ServeHTTP implements the Handler interface.\nfunc (f AdminHandlerFunc) 
ServeHTTP(w http.ResponseWriter, r *http.Request) error {\n\treturn f(w, r)\n}\n\n// APIError is a structured error that every API\n// handler should return for consistency in logging\n// and client responses. If Message is unset, then\n// Err.Error() will be serialized in its place.\ntype APIError struct {\n\tHTTPStatus int    `json:\"-\"`\n\tErr        error  `json:\"-\"`\n\tMessage    string `json:\"error\"`\n}\n\nfunc (e APIError) Error() string {\n\tif e.Err != nil {\n\t\treturn e.Err.Error()\n\t}\n\treturn e.Message\n}\n\n// parseAdminListenAddr extracts a singular listen address from either addr\n// or defaultAddr, returning the network and the address of the listener.\nfunc parseAdminListenAddr(addr string, defaultAddr string) (NetworkAddress, error) {\n\tinput, err := NewReplacer().ReplaceOrErr(addr, true, true)\n\tif err != nil {\n\t\treturn NetworkAddress{}, fmt.Errorf(\"replacing listen address: %v\", err)\n\t}\n\tif input == \"\" {\n\t\tinput = defaultAddr\n\t}\n\tlistenAddr, err := ParseNetworkAddress(input)\n\tif err != nil {\n\t\treturn NetworkAddress{}, fmt.Errorf(\"parsing listener address: %v\", err)\n\t}\n\tif listenAddr.PortRangeSize() != 1 {\n\t\treturn NetworkAddress{}, fmt.Errorf(\"must be exactly one listener address; cannot listen on: %s\", listenAddr)\n\t}\n\treturn listenAddr, nil\n}\n\n// decodeBase64DERCert base64-decodes, then DER-decodes, certStr.\nfunc decodeBase64DERCert(certStr string) (*x509.Certificate, error) {\n\tderBytes, err := base64.StdEncoding.DecodeString(certStr)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn x509.ParseCertificate(derBytes)\n}\n\ntype loggableURLArray []*url.URL\n\nfunc (ua loggableURLArray) MarshalLogArray(enc zapcore.ArrayEncoder) error {\n\tif ua == nil {\n\t\treturn nil\n\t}\n\tfor _, u := range ua {\n\t\tenc.AppendString(u.String())\n\t}\n\treturn nil\n}\n\nvar (\n\t// DefaultAdminListen is the address for the local admin\n\t// listener, if none is specified at 
startup.\n\tDefaultAdminListen = \"localhost:2019\"\n\n\t// DefaultRemoteAdminListen is the address for the remote\n\t// (TLS-authenticated) admin listener, if enabled and not\n\t// specified otherwise.\n\tDefaultRemoteAdminListen = \":2021\"\n)\n\n// PIDFile writes a pidfile to the file at filename. It\n// will get deleted before the process gracefully exits.\nfunc PIDFile(filename string) error {\n\tpid := []byte(strconv.Itoa(os.Getpid()) + \"\\n\")\n\terr := os.WriteFile(filename, pid, 0o600)\n\tif err != nil {\n\t\treturn err\n\t}\n\tpidfile = filename\n\treturn nil\n}\n\n// idRegexp is used to match ID fields and their associated values\n// in the config. It also matches adjacent commas so that syntax\n// can be preserved no matter where in the object the field appears.\n// It supports string and most numeric values.\nvar idRegexp = regexp.MustCompile(`(?m),?\\s*\"` + idKey + `\"\\s*:\\s*(-?[0-9]+(\\.[0-9]+)?|(?U)\".*\")\\s*,?`)\n\n// pidfile is the name of the pidfile, if any.\nvar pidfile string\n\n// errInternalRedir indicates an internal redirect\n// and is useful when admin API handlers rewrite\n// the request; in that case, authentication and\n// authorization needs to happen again for the\n// rewritten request.\nvar errInternalRedir = fmt.Errorf(\"internal redirect; re-authorization required\")\n\nconst (\n\trawConfigKey = \"config\"\n\tidKey        = \"@id\"\n)\n\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n\n// keep a reference to admin endpoint singletons while they're active\nvar (\n\tserverMu                            sync.Mutex\n\tlocalAdminServer, remoteAdminServer *http.Server\n\tidentityCertCache                   *certmagic.Cache\n)\n"
  },
  {
    "path": "admin_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"maps\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"os\"\n\t\"reflect\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/prometheus/client_golang/prometheus\"\n\tdto \"github.com/prometheus/client_model/go\"\n)\n\nvar testCfg = []byte(`{\n\t\t\t\"apps\": {\n\t\t\t\t\"http\": {\n\t\t\t\t\t\"servers\": {\n\t\t\t\t\t\t\"myserver\": {\n\t\t\t\t\t\t\t\"listen\": [\"tcp/localhost:8080-8084\"],\n\t\t\t\t\t\t\t\"read_timeout\": \"30s\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"yourserver\": {\n\t\t\t\t\t\t\t\"listen\": [\"127.0.0.1:5000\"],\n\t\t\t\t\t\t\t\"read_header_timeout\": \"15s\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t`)\n\nfunc TestUnsyncedConfigAccess(t *testing.T) {\n\t// each test is performed in sequence, so\n\t// each change builds on the previous ones;\n\t// the config is not reset between tests\n\tfor i, tc := range []struct {\n\t\tmethod    string\n\t\tpath      string // rawConfigKey will be prepended\n\t\tpayload   string\n\t\texpect    string // JSON representation of what the whole config is expected to be after the request\n\t\tshouldErr bool\n\t}{\n\t\t{\n\t\t\tmethod:  \"POST\",\n\t\t\tpath:    \"\",\n\t\t\tpayload: `{\"foo\": \"bar\", \"list\": [\"a\", \"b\", 
\"c\"]}`, // starting value\n\t\t\texpect:  `{\"foo\": \"bar\", \"list\": [\"a\", \"b\", \"c\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod:  \"POST\",\n\t\t\tpath:    \"/foo\",\n\t\t\tpayload: `\"jet\"`,\n\t\t\texpect:  `{\"foo\": \"jet\", \"list\": [\"a\", \"b\", \"c\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod:  \"POST\",\n\t\t\tpath:    \"/bar\",\n\t\t\tpayload: `{\"aa\": \"bb\", \"qq\": \"zz\"}`,\n\t\t\texpect:  `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\", \"qq\": \"zz\"}, \"list\": [\"a\", \"b\", \"c\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod: \"DELETE\",\n\t\t\tpath:   \"/bar/qq\",\n\t\t\texpect: `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod:    \"DELETE\",\n\t\t\tpath:      \"/bar/qq\",\n\t\t\texpect:    `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\"]}`,\n\t\t\tshouldErr: true,\n\t\t},\n\t\t{\n\t\t\tmethod:  \"POST\",\n\t\t\tpath:    \"/list\",\n\t\t\tpayload: `\"e\"`,\n\t\t\texpect:  `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\", \"e\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod:  \"PUT\",\n\t\t\tpath:    \"/list/3\",\n\t\t\tpayload: `\"d\"`,\n\t\t\texpect:  `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\", \"d\", \"e\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod: \"DELETE\",\n\t\t\tpath:   \"/list/3\",\n\t\t\texpect: `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\", \"e\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod:  \"PATCH\",\n\t\t\tpath:    \"/list/3\",\n\t\t\tpayload: `\"d\"`,\n\t\t\texpect:  `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\", \"d\"]}`,\n\t\t},\n\t\t{\n\t\t\tmethod:  \"POST\",\n\t\t\tpath:    \"/list/...\",\n\t\t\tpayload: `[\"e\", \"f\", \"g\"]`,\n\t\t\texpect:  `{\"foo\": \"jet\", \"bar\": {\"aa\": \"bb\"}, \"list\": [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\"]}`,\n\t\t},\n\t} {\n\t\terr := unsyncedConfigAccess(tc.method, rawConfigKey+tc.path, []byte(tc.payload), nil)\n\n\t\tif 
tc.shouldErr && err == nil {\n\t\t\tt.Fatalf(\"Test %d: Expected error return value, but got: %v\", i, err)\n\t\t}\n\t\tif !tc.shouldErr && err != nil {\n\t\t\tt.Fatalf(\"Test %d: Should not have had error return value, but got: %v\", i, err)\n\t\t}\n\n\t\t// decode the expected config so we can do a convenient DeepEqual\n\t\tvar expectedDecoded any\n\t\terr = json.Unmarshal([]byte(tc.expect), &expectedDecoded)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d: Unmarshaling expected config: %v\", i, err)\n\t\t}\n\n\t\t// make sure the resulting config is as we expect it\n\t\tif !reflect.DeepEqual(rawCfg[rawConfigKey], expectedDecoded) {\n\t\t\tt.Fatalf(\"Test %d:\\nExpected:\\n\\t%#v\\nActual:\\n\\t%#v\",\n\t\t\t\ti, expectedDecoded, rawCfg[rawConfigKey])\n\t\t}\n\t}\n}\n\n// TestLoadConcurrent exercises Load under concurrent conditions\n// and is most useful when run with `-race` enabled.\nfunc TestLoadConcurrent(t *testing.T) {\n\tvar wg sync.WaitGroup\n\n\tfor i := 0; i < 100; i++ {\n\t\twg.Go(func() {\n\t\t\t_ = Load(testCfg, true)\n\t\t})\n\t}\n\twg.Wait()\n}\n\ntype fooModule struct {\n\tIntField int\n\tStrField string\n}\n\nfunc (fooModule) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID:  \"foo\",\n\t\tNew: func() Module { return new(fooModule) },\n\t}\n}\nfunc (fooModule) Start() error { return nil }\nfunc (fooModule) Stop() error  { return nil }\n\nfunc TestETags(t *testing.T) {\n\tRegisterModule(fooModule{})\n\n\tif err := Load([]byte(`{\"admin\": {\"listen\": \"localhost:2999\"}, \"apps\": {\"foo\": {\"strField\": \"abc\", \"intField\": 0}}}`), true); err != nil {\n\t\tt.Fatalf(\"loading: %s\", err)\n\t}\n\n\tconst key = \"/\" + rawConfigKey + \"/apps/foo\"\n\n\t// try updating the config with the wrong etag\n\terr := changeConfig(http.MethodPost, key, []byte(`{\"strField\": \"abc\", \"intField\": 1}`), fmt.Sprintf(`\"/%s not_an_etag\"`, rawConfigKey), false)\n\tif apiErr, ok := err.(APIError); !ok || apiErr.HTTPStatus != 
http.StatusPreconditionFailed {\n\t\tt.Fatalf(\"expected precondition failed; got %v\", err)\n\t}\n\n\t// get the etag\n\thash := etagHasher()\n\tif err := readConfig(key, hash); err != nil {\n\t\tt.Fatalf(\"reading: %s\", err)\n\t}\n\n\t// do the same update with the correct key\n\terr = changeConfig(http.MethodPost, key, []byte(`{\"strField\": \"abc\", \"intField\": 1}`), makeEtag(key, hash), false)\n\tif err != nil {\n\t\tt.Fatalf(\"expected update to work; got %v\", err)\n\t}\n\n\t// now try another update. The hash should no longer match and we should get precondition failed\n\terr = changeConfig(http.MethodPost, key, []byte(`{\"strField\": \"abc\", \"intField\": 2}`), makeEtag(key, hash), false)\n\tif apiErr, ok := err.(APIError); !ok || apiErr.HTTPStatus != http.StatusPreconditionFailed {\n\t\tt.Fatalf(\"expected precondition failed; got %v\", err)\n\t}\n}\n\nfunc BenchmarkLoad(b *testing.B) {\n\tfor b.Loop() {\n\t\tLoad(testCfg, true)\n\t}\n}\n\nfunc TestAdminHandlerErrorHandling(t *testing.T) {\n\tinitAdminMetrics()\n\n\thandler := adminHandler{\n\t\tmux: http.NewServeMux(),\n\t}\n\n\thandler.mux.Handle(\"/error\", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\terr := fmt.Errorf(\"test error\")\n\t\thandler.handleError(w, r, err)\n\t}))\n\n\treq := httptest.NewRequest(http.MethodGet, \"/error\", nil)\n\trr := httptest.NewRecorder()\n\n\thandler.ServeHTTP(rr, req)\n\n\tif rr.Code == http.StatusOK {\n\t\tt.Error(\"expected error response, got success\")\n\t}\n\n\tvar apiErr APIError\n\tif err := json.NewDecoder(rr.Body).Decode(&apiErr); err != nil {\n\t\tt.Fatalf(\"decoding response: %v\", err)\n\t}\n\tif apiErr.Message != \"test error\" {\n\t\tt.Errorf(\"expected error message 'test error', got '%s'\", apiErr.Message)\n\t}\n}\n\nfunc initAdminMetrics() {\n\tif adminMetrics.requestErrors != nil {\n\t\tprometheus.Unregister(adminMetrics.requestErrors)\n\t}\n\tif adminMetrics.requestCount != nil 
{\n\t\tprometheus.Unregister(adminMetrics.requestCount)\n\t}\n\n\tadminMetrics.requestErrors = prometheus.NewCounterVec(prometheus.CounterOpts{\n\t\tNamespace: \"caddy\",\n\t\tSubsystem: \"admin_http\",\n\t\tName:      \"request_errors_total\",\n\t\tHelp:      \"Number of errors that occurred handling admin endpoint requests\",\n\t}, []string{\"handler\", \"path\", \"method\"})\n\n\tadminMetrics.requestCount = prometheus.NewCounterVec(prometheus.CounterOpts{\n\t\tNamespace: \"caddy\",\n\t\tSubsystem: \"admin_http\",\n\t\tName:      \"requests_total\",\n\t\tHelp:      \"Count of requests to the admin endpoint\",\n\t}, []string{\"handler\", \"path\", \"code\", \"method\"}) // Added code and method labels\n\n\tprometheus.MustRegister(adminMetrics.requestErrors)\n\tprometheus.MustRegister(adminMetrics.requestCount)\n}\n\nfunc TestAdminHandlerBuiltinRouteErrors(t *testing.T) {\n\tinitAdminMetrics()\n\n\tcfg := &Config{\n\t\tAdmin: &AdminConfig{\n\t\t\tListen: \"localhost:2019\",\n\t\t},\n\t}\n\n\t// Build the admin handler directly (no listener active)\n\taddr, err := ParseNetworkAddress(\"localhost:2019\")\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to parse address: %v\", err)\n\t}\n\thandler := cfg.Admin.newAdminHandler(addr, false, Context{})\n\n\ttests := []struct {\n\t\tname           string\n\t\tpath           string\n\t\tmethod         string\n\t\texpectedStatus int\n\t}{\n\t\t{\n\t\t\tname:           \"stop endpoint wrong method\",\n\t\t\tpath:           \"/stop\",\n\t\t\tmethod:         http.MethodGet,\n\t\t\texpectedStatus: http.StatusMethodNotAllowed,\n\t\t},\n\t\t{\n\t\t\tname:           \"config endpoint wrong content-type\",\n\t\t\tpath:           \"/config/\",\n\t\t\tmethod:         http.MethodPost,\n\t\t\texpectedStatus: http.StatusBadRequest,\n\t\t},\n\t\t{\n\t\t\tname:           \"config ID missing ID\",\n\t\t\tpath:           \"/id/\",\n\t\t\tmethod:         http.MethodGet,\n\t\t\texpectedStatus: http.StatusBadRequest,\n\t\t},\n\t}\n\n\tfor _, test := 
range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\treq := httptest.NewRequest(test.method, fmt.Sprintf(\"http://localhost:2019%s\", test.path), nil)\n\t\t\trr := httptest.NewRecorder()\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tif rr.Code != test.expectedStatus {\n\t\t\t\tt.Errorf(\"expected status %d but got %d\", test.expectedStatus, rr.Code)\n\t\t\t}\n\n\t\t\tmetricValue := testGetMetricValue(map[string]string{\n\t\t\t\t\"path\":    test.path,\n\t\t\t\t\"handler\": \"admin\",\n\t\t\t\t\"method\":  test.method,\n\t\t\t})\n\t\t\tif metricValue != 1 {\n\t\t\t\tt.Errorf(\"expected error metric to be incremented once, got %v\", metricValue)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc testGetMetricValue(labels map[string]string) float64 {\n\tpromLabels := prometheus.Labels{}\n\tmaps.Copy(promLabels, labels)\n\n\tmetric, err := adminMetrics.requestErrors.GetMetricWith(promLabels)\n\tif err != nil {\n\t\treturn 0\n\t}\n\n\tpb := &dto.Metric{}\n\tmetric.Write(pb)\n\treturn pb.GetCounter().GetValue()\n}\n\ntype mockRouter struct {\n\troutes []AdminRoute\n}\n\nfunc (m mockRouter) Routes() []AdminRoute {\n\treturn m.routes\n}\n\ntype mockModule struct {\n\tmockRouter\n}\n\nfunc (m *mockModule) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID: \"admin.api.mock\",\n\t\tNew: func() Module {\n\t\t\tmm := &mockModule{\n\t\t\t\tmockRouter: mockRouter{\n\t\t\t\t\troutes: m.routes,\n\t\t\t\t},\n\t\t\t}\n\t\t\treturn mm\n\t\t},\n\t}\n}\n\nfunc TestNewAdminHandlerRouterRegistration(t *testing.T) {\n\toriginalModules := make(map[string]ModuleInfo)\n\tmaps.Copy(originalModules, modules)\n\tdefer func() {\n\t\tmodules = originalModules\n\t}()\n\n\tmockRoute := AdminRoute{\n\t\tPattern: \"/mock\",\n\t\tHandler: AdminHandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\treturn nil\n\t\t}),\n\t}\n\n\tmock := &mockModule{\n\t\tmockRouter: mockRouter{\n\t\t\troutes: 
[]AdminRoute{mockRoute},\n\t\t},\n\t}\n\tRegisterModule(mock)\n\n\taddr, err := ParseNetworkAddress(\"localhost:2019\")\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to parse address: %v\", err)\n\t}\n\n\tadmin := &AdminConfig{\n\t\tEnforceOrigin: false,\n\t}\n\thandler := admin.newAdminHandler(addr, false, Context{})\n\n\treq := httptest.NewRequest(\"GET\", \"/mock\", nil)\n\treq.Host = \"localhost:2019\"\n\trr := httptest.NewRecorder()\n\n\thandler.ServeHTTP(rr, req)\n\n\tif rr.Code != http.StatusOK {\n\t\tt.Errorf(\"Expected status code %d but got %d\", http.StatusOK, rr.Code)\n\t\tt.Logf(\"Response body: %s\", rr.Body.String())\n\t}\n\n\tif len(admin.routers) != 1 {\n\t\tt.Errorf(\"Expected 1 router to be stored, got %d\", len(admin.routers))\n\t}\n}\n\ntype mockProvisionableRouter struct {\n\tmockRouter\n\tprovisionErr error\n\tprovisioned  bool\n}\n\nfunc (m *mockProvisionableRouter) Provision(Context) error {\n\tm.provisioned = true\n\treturn m.provisionErr\n}\n\ntype mockProvisionableModule struct {\n\t*mockProvisionableRouter\n}\n\nfunc (m *mockProvisionableModule) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID: \"admin.api.mock_provision\",\n\t\tNew: func() Module {\n\t\t\tmm := &mockProvisionableModule{\n\t\t\t\tmockProvisionableRouter: &mockProvisionableRouter{\n\t\t\t\t\tmockRouter:   m.mockRouter,\n\t\t\t\t\tprovisionErr: m.provisionErr,\n\t\t\t\t},\n\t\t\t}\n\t\t\treturn mm\n\t\t},\n\t}\n}\n\nfunc TestAdminRouterProvisioning(t *testing.T) {\n\ttests := []struct {\n\t\tname         string\n\t\tprovisionErr error\n\t\twantErr      bool\n\t\troutersAfter int // expected number of routers after provisioning\n\t}{\n\t\t{\n\t\t\tname:         \"successful provisioning\",\n\t\t\tprovisionErr: nil,\n\t\t\twantErr:      false,\n\t\t\troutersAfter: 0,\n\t\t},\n\t\t{\n\t\t\tname:         \"provisioning error\",\n\t\t\tprovisionErr: fmt.Errorf(\"provision failed\"),\n\t\t\twantErr:      true,\n\t\t\troutersAfter: 1,\n\t\t},\n\t}\n\n\tfor _, test := range 
tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\toriginalModules := make(map[string]ModuleInfo)\n\t\t\tmaps.Copy(originalModules, modules)\n\t\t\tdefer func() {\n\t\t\t\tmodules = originalModules\n\t\t\t}()\n\n\t\t\tmockRoute := AdminRoute{\n\t\t\t\tPattern: \"/mock\",\n\t\t\t\tHandler: AdminHandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\t\t\t\treturn nil\n\t\t\t\t}),\n\t\t\t}\n\n\t\t\t// Create provisionable module\n\t\t\tmock := &mockProvisionableModule{\n\t\t\t\tmockProvisionableRouter: &mockProvisionableRouter{\n\t\t\t\t\tmockRouter: mockRouter{\n\t\t\t\t\t\troutes: []AdminRoute{mockRoute},\n\t\t\t\t\t},\n\t\t\t\t\tprovisionErr: test.provisionErr,\n\t\t\t\t},\n\t\t\t}\n\t\t\tRegisterModule(mock)\n\n\t\t\tadmin := &AdminConfig{}\n\t\t\taddr, err := ParseNetworkAddress(\"localhost:2019\")\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"Failed to parse address: %v\", err)\n\t\t\t}\n\n\t\t\t_ = admin.newAdminHandler(addr, false, Context{})\n\t\t\terr = admin.provisionAdminRouters(Context{})\n\n\t\t\tif test.wantErr {\n\t\t\t\tif err == nil {\n\t\t\t\t\tt.Error(\"Expected error but got nil\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Errorf(\"Expected no error but got: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif len(admin.routers) != test.routersAfter {\n\t\t\t\tt.Errorf(\"Expected %d routers after provisioning, got %d\", test.routersAfter, len(admin.routers))\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestAllowedOriginsUnixSocket(t *testing.T) {\n\t// see comment in allowedOrigins() as to why we do not fill out allowed origins for UDS\n\ttests := []struct {\n\t\tname          string\n\t\taddr          NetworkAddress\n\t\torigins       []string\n\t\texpectOrigins []string\n\t}{\n\t\t{\n\t\t\tname: \"unix socket with default origins\",\n\t\t\taddr: NetworkAddress{\n\t\t\t\tNetwork: \"unix\",\n\t\t\t\tHost:    \"/tmp/caddy.sock\",\n\t\t\t},\n\t\t\torigins:       nil, // default origins\n\t\t\texpectOrigins: 
[]string{},\n\t\t},\n\t\t{\n\t\t\tname: \"unix socket with custom origins\",\n\t\t\taddr: NetworkAddress{\n\t\t\t\tNetwork: \"unix\",\n\t\t\t\tHost:    \"/tmp/caddy.sock\",\n\t\t\t},\n\t\t\torigins: []string{\"example.com\"},\n\t\t\texpectOrigins: []string{\n\t\t\t\t\"example.com\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"tcp socket on localhost gets all loopback addresses\",\n\t\t\taddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 2019,\n\t\t\t\tEndPort:   2019,\n\t\t\t},\n\t\t\torigins: nil,\n\t\t\texpectOrigins: []string{\n\t\t\t\t\"localhost:2019\",\n\t\t\t\t\"[::1]:2019\",\n\t\t\t\t\"127.0.0.1:2019\",\n\t\t\t},\n\t\t},\n\t}\n\n\tfor i, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tadmin := AdminConfig{\n\t\t\t\tOrigins: test.origins,\n\t\t\t}\n\n\t\t\tgot := admin.allowedOrigins(test.addr)\n\n\t\t\tvar gotOrigins []string\n\t\t\tfor _, u := range got {\n\t\t\t\tgotOrigins = append(gotOrigins, u.Host)\n\t\t\t}\n\n\t\t\tif len(gotOrigins) != len(test.expectOrigins) {\n\t\t\t\tt.Errorf(\"%d: Expected %d origins but got %d\", i, len(test.expectOrigins), len(gotOrigins))\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\texpectMap := make(map[string]struct{})\n\t\t\tfor _, origin := range test.expectOrigins {\n\t\t\t\texpectMap[origin] = struct{}{}\n\t\t\t}\n\n\t\t\tgotMap := make(map[string]struct{})\n\t\t\tfor _, origin := range gotOrigins {\n\t\t\t\tgotMap[origin] = struct{}{}\n\t\t\t}\n\n\t\t\tif !reflect.DeepEqual(expectMap, gotMap) {\n\t\t\t\tt.Errorf(\"%d: Origins mismatch.\\nExpected: %v\\nGot: %v\", i, test.expectOrigins, gotOrigins)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestReplaceRemoteAdminServer(t *testing.T) {\n\tconst testCert = 
`MIIDCTCCAfGgAwIBAgIUXsqJ1mY8pKlHQtI3HJ23x2eZPqwwDQYJKoZIhvcNAQEL\nBQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTIzMDEwMTAwMDAwMFoXDTI0MDEw\nMTAwMDAwMFowFDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEF\nAAOCAQ8AMIIBCgKCAQEA4O4S6BSoYcoxvRqI+h7yPOjF6KjntjzVVm9M+uHK4lzX\nF1L3pSxJ2nDD4wZEV3FJ5yFOHVFqkG2vXG3BIczOlYG7UeNmKbQnKc5kZj3HGUrS\nVGEktA4OJbeZhhWP15gcXN5eDM2eH3g9BFXVX6AURxLiUXzhNBUEZuj/OEyH9yEF\n/qPCE+EjzVvWxvBXwgz/io4r4yok/Vq/bxJ6FlV6R7DX5oJSXyO0VEHZPi9DIyNU\nkK3F/r4U1sWiJGWOs8i3YQWZ2ejh1C0aLFZpPcCGGgMNpoF31gyYP6ZuPDUyCXsE\ng36UUw1JHNtIXYcLhnXuqj4A8TybTDpgXLqvwA9DBQIDAQABo1MwUTAdBgNVHQ4E\nFgQUc13z30pFC63rr/HGKOE7E82vjXwwHwYDVR0jBBgwFoAUc13z30pFC63rr/HG\nKOE7E82vjXwwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAHO3j\noeiUXXJ7xD4P8Wj5t9d+E8lE1Xv1Dk3Z+EdG5+dan+RcToE42JJp9zB7FIh5Qz8g\nW77LAjqh5oyqz3A2VJcyVgfE3uJP1R1mJM7JfGHf84QH4TZF2Q1RZY4SZs0VQ6+q\n5wSlIZ4NXDy4Q4XkIJBGS61wT8IzYFXYBpx4PCP1Qj0PIE4sevEGwjsBIgxK307o\nBxF8AWe6N6e4YZmQLGjQ+SeH0iwZb6vpkHyAY8Kj2hvK+cq2P7vU3VGi0t3r1F8L\nIvrXHCvO2BMNJ/1UK1M4YNX8LYJqQhg9hEsIROe1OE/m3VhxIYMJI+qZXk9yHfgJ\nvq+SH04xKhtFudVBAQ==`\n\n\ttests := []struct {\n\t\tname    string\n\t\tcfg     *Config\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:    \"nil config\",\n\t\t\tcfg:     nil,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"nil admin config\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: nil,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"nil remote config\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"invalid listen address\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{\n\t\t\t\t\tRemote: &RemoteAdmin{\n\t\t\t\t\t\tListen: \"invalid:address\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"valid config\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{\n\t\t\t\t\tIdentity: &IdentityConfig{},\n\t\t\t\t\tRemote: &RemoteAdmin{\n\t\t\t\t\t\tListen: \"localhost:2021\",\n\t\t\t\t\t\tAccessControl: 
[]*AdminAccess{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tPublicKeys:  []string{testCert},\n\t\t\t\t\t\t\t\tPermissions: []AdminPermissions{{Methods: []string{\"GET\"}, Paths: []string{\"/test\"}}},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"invalid certificate\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{\n\t\t\t\t\tIdentity: &IdentityConfig{},\n\t\t\t\t\tRemote: &RemoteAdmin{\n\t\t\t\t\t\tListen: \"localhost:2021\",\n\t\t\t\t\t\tAccessControl: []*AdminAccess{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tPublicKeys:  []string{\"invalid-cert-data\"},\n\t\t\t\t\t\t\t\tPermissions: []AdminPermissions{{Methods: []string{\"GET\"}, Paths: []string{\"/test\"}}},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tctx := Context{\n\t\t\t\tContext: context.Background(),\n\t\t\t\tcfg:     test.cfg,\n\t\t\t}\n\n\t\t\tif test.cfg != nil {\n\t\t\t\ttest.cfg.storage = &certmagic.FileStorage{Path: t.TempDir()}\n\t\t\t}\n\n\t\t\tif test.cfg != nil && test.cfg.Admin != nil && test.cfg.Admin.Identity != nil {\n\t\t\t\tidentityCertCache = certmagic.NewCache(certmagic.CacheOptions{\n\t\t\t\t\tGetConfigForCert: func(certmagic.Certificate) (*certmagic.Config, error) {\n\t\t\t\t\t\treturn &certmagic.Config{}, nil\n\t\t\t\t\t},\n\t\t\t\t})\n\t\t\t}\n\n\t\t\terr := replaceRemoteAdminServer(ctx, test.cfg)\n\n\t\t\tif test.wantErr {\n\t\t\t\tif err == nil {\n\t\t\t\t\tt.Error(\"Expected error but got nil\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Errorf(\"Expected no error but got: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Clean up\n\t\t\tif remoteAdminServer != nil {\n\t\t\t\t_ = stopAdminServer(remoteAdminServer)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype mockIssuer struct {\n\tconfigSet *certmagic.Config\n}\n\nfunc (m *mockIssuer) Issue(ctx context.Context, csr 
*x509.CertificateRequest) (*certmagic.IssuedCertificate, error) {\n\treturn &certmagic.IssuedCertificate{\n\t\tCertificate: []byte(csr.Raw),\n\t}, nil\n}\n\nfunc (m *mockIssuer) SetConfig(cfg *certmagic.Config) {\n\tm.configSet = cfg\n}\n\nfunc (m *mockIssuer) IssuerKey() string {\n\treturn \"mock\"\n}\n\ntype mockIssuerModule struct {\n\t*mockIssuer\n}\n\nfunc (m *mockIssuerModule) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID: \"tls.issuance.acme\",\n\t\tNew: func() Module {\n\t\t\treturn &mockIssuerModule{mockIssuer: new(mockIssuer)}\n\t\t},\n\t}\n}\n\nfunc TestManageIdentity(t *testing.T) {\n\toriginalModules := make(map[string]ModuleInfo)\n\tmaps.Copy(originalModules, modules)\n\tdefer func() {\n\t\tmodules = originalModules\n\t}()\n\n\tRegisterModule(&mockIssuerModule{})\n\n\tcertPEM := []byte(`-----BEGIN CERTIFICATE-----\nMIIDujCCAqKgAwIBAgIIE31FZVaPXTUwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UE\nBhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl\ncm5ldCBBdXRob3JpdHkgRzIwHhcNMTQwMTI5MTMyNzQzWhcNMTQwNTI5MDAwMDAw\nWjBpMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN\nTW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEYMBYGA1UEAwwPbWFp\nbC5nb29nbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE3lcub2pUwkjC\n5GJQA2ZZfJJi6d1QHhEmkX9VxKYGp6gagZuRqJWy9TXP6++1ZzQQxqZLD0TkuxZ9\n8i9Nz00000CCBjCCAQQwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMGgG\nCCsGAQUFBwEBBFwwWjArBggrBgEFBQcwAoYfaHR0cDovL3BraS5nb29nbGUuY29t\nL0dJQUcyLmNydDArBggrBgEFBQcwAYYfaHR0cDovL2NsaWVudHMxLmdvb2dsZS5j\nb20vb2NzcDAdBgNVHQ4EFgQUiJxtimAuTfwb+aUtBn5UYKreKvMwDAYDVR0TAQH/\nBAIwADAfBgNVHSMEGDAWgBRK3QYWG7z2aLV29YG2u2IaulqBLzAXBgNVHREEEDAO\nggxtYWlsLmdvb2dsZTANBgkqhkiG9w0BAQUFAAOCAQEAMP6IWgNGZE8wP9TjFjSZ\n3mmW3A1eIr0CuPwNZ2LJ5ZD1i70ojzcj4I9IdP5yPg9CAEV4hNASbM1LzfC7GmJE\ntPzW5tRmpKVWZGRgTgZI8Hp/xZXMwLh9ZmXV4kESFAGj5G5FNvJyUV7R5Eh+7OZX\n7G4jJ4ZGJh+5jzN9HdJJHQHGYNIYOzC7+HH9UMwCjX9vhQ4RjwFZJThS2Yb+y7pb\n9yxTJZoXC6J0H5JpnZb7kZEJ+Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n-----END 
CERTIFICATE-----`)\n\n\tkeyPEM := []byte(`-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDRS0LmTwUT0iwP\n...\n-----END PRIVATE KEY-----`)\n\n\ttmpDir, err := os.MkdirTemp(\"\", \"TestManageIdentity-\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\ttestStorage := certmagic.FileStorage{Path: tmpDir}\n\t// Clean up the temp dir after the test finishes. Ensure any background\n\t// certificate maintenance is stopped first to avoid RemoveAll races.\n\tt.Cleanup(func() {\n\t\tif identityCertCache != nil {\n\t\t\tidentityCertCache.Stop()\n\t\t\tidentityCertCache = nil\n\t\t}\n\t\t// Give goroutines a moment to exit and release file handles.\n\t\ttime.Sleep(50 * time.Millisecond)\n\t\t_ = os.RemoveAll(tmpDir)\n\t})\n\n\terr = testStorage.Store(context.Background(), \"localhost/localhost.crt\", certPEM)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\terr = testStorage.Store(context.Background(), \"localhost/localhost.key\", keyPEM)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\ttests := []struct {\n\t\tname       string\n\t\tcfg        *Config\n\t\twantErr    bool\n\t\tcheckState func(*testing.T, *Config)\n\t}{\n\t\t{\n\t\t\tname: \"nil config\",\n\t\t\tcfg:  nil,\n\t\t},\n\t\t{\n\t\t\tname: \"nil admin config\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"nil identity config\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"default issuer when none specified\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{\n\t\t\t\t\tIdentity: &IdentityConfig{\n\t\t\t\t\t\tIdentifiers: []string{\"localhost\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tstorage: &testStorage,\n\t\t\t},\n\t\t\tcheckState: func(t *testing.T, cfg *Config) {\n\t\t\t\tif len(cfg.Admin.Identity.issuers) == 0 {\n\t\t\t\t\tt.Error(\"Expected at least 1 issuer to be configured\")\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tif _, ok := cfg.Admin.Identity.issuers[0].(*mockIssuerModule); !ok 
{\n\t\t\t\t\tt.Error(\"Expected mock issuer to be configured\")\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"custom issuer\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{\n\t\t\t\t\tIdentity: &IdentityConfig{\n\t\t\t\t\t\tIdentifiers: []string{\"localhost\"},\n\t\t\t\t\t\tIssuersRaw: []json.RawMessage{\n\t\t\t\t\t\t\tjson.RawMessage(`{\"module\": \"acme\"}`),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tstorage: &testStorage,\n\t\t\t},\n\t\t\tcheckState: func(t *testing.T, cfg *Config) {\n\t\t\t\tif len(cfg.Admin.Identity.issuers) != 1 {\n\t\t\t\t\tt.Fatalf(\"Expected 1 issuer, got %d\", len(cfg.Admin.Identity.issuers))\n\t\t\t\t}\n\t\t\t\tmockIss, ok := cfg.Admin.Identity.issuers[0].(*mockIssuerModule)\n\t\t\t\tif !ok {\n\t\t\t\t\tt.Fatal(\"Expected mock issuer\")\n\t\t\t\t}\n\t\t\t\tif mockIss.configSet == nil {\n\t\t\t\t\tt.Error(\"Issuer config was not set\")\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"invalid issuer module\",\n\t\t\tcfg: &Config{\n\t\t\t\tAdmin: &AdminConfig{\n\t\t\t\t\tIdentity: &IdentityConfig{\n\t\t\t\t\t\tIdentifiers: []string{\"localhost\"},\n\t\t\t\t\t\tIssuersRaw: []json.RawMessage{\n\t\t\t\t\t\t\tjson.RawMessage(`{\"module\": \"doesnt_exist\"}`),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tif identityCertCache != nil {\n\t\t\t\t// Reset the cert cache before each test\n\t\t\t\tidentityCertCache.Stop()\n\t\t\t\tidentityCertCache = nil\n\t\t\t}\n\t\t\t// Ensure any cache started by manageIdentity is stopped at the end\n\t\t\tdefer func() {\n\t\t\t\tif identityCertCache != nil {\n\t\t\t\t\tidentityCertCache.Stop()\n\t\t\t\t\tidentityCertCache = nil\n\t\t\t\t}\n\t\t\t}()\n\n\t\t\tctx := Context{\n\t\t\t\tContext:         context.Background(),\n\t\t\t\tcfg:             test.cfg,\n\t\t\t\tmoduleInstances: make(map[string][]Module),\n\t\t\t}\n\n\t\t\t// If this test provided a FileStorage, set 
the package-level\n\t\t\t// testCertMagicStorageOverride so certmagicConfig will use it.\n\t\t\tif test.cfg != nil && test.cfg.storage != nil {\n\t\t\t\ttestCertMagicStorageOverride = test.cfg.storage\n\t\t\t\tdefer func() { testCertMagicStorageOverride = nil }()\n\t\t\t}\n\n\t\t\terr := manageIdentity(ctx, test.cfg)\n\n\t\t\tif test.wantErr {\n\t\t\t\tif err == nil {\n\t\t\t\t\tt.Error(\"Expected error but got nil\")\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"Expected no error but got: %v\", err)\n\t\t\t}\n\n\t\t\tif test.checkState != nil {\n\t\t\t\ttest.checkState(t, test.cfg)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "caddy.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"log\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"runtime/debug\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/google/uuid\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2/internal/filesystems\"\n\t\"github.com/caddyserver/caddy/v2/notify\"\n)\n\n// Config is the top (or beginning) of the Caddy configuration structure.\n// Caddy config is expressed natively as a JSON document. If you prefer\n// not to work with JSON directly, there are [many config adapters](/docs/config-adapters)\n// available that can convert various inputs into Caddy JSON.\n//\n// Many parts of this config are extensible through the use of Caddy modules.\n// Fields which have a json.RawMessage type and which appear as dots (•••) in\n// the online docs can be fulfilled by modules in a certain module\n// namespace. The docs show which modules can be used in a given place.\n//\n// Whenever a module is used, its name must be given either inline as part of\n// the module, or as the key to the module's value. 
The docs will make it clear\n// which to use.\n//\n// Generally, all config settings are optional, as it is Caddy convention to\n// have good, documented default values. If a parameter is required, the docs\n// should say so.\n//\n// Go programs which are directly building a Config struct value should take\n// care to populate the JSON-encodable fields of the struct (i.e. the fields\n// with `json` struct tags) if employing the module lifecycle (e.g. Provision\n// method calls).\ntype Config struct {\n\tAdmin   *AdminConfig `json:\"admin,omitempty\"`\n\tLogging *Logging     `json:\"logging,omitempty\"`\n\n\t// StorageRaw is a storage module that defines how/where Caddy\n\t// stores assets (such as TLS certificates). The default storage\n\t// module is `caddy.storage.file_system` (the local file system),\n\t// and the default path\n\t// [depends on the OS and environment](/docs/conventions#data-directory).\n\tStorageRaw json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n\n\t// AppsRaw are the apps that Caddy will load and run. 
The\n\t// app module name is the key, and the app's config is the\n\t// associated value.\n\tAppsRaw ModuleMap `json:\"apps,omitempty\" caddy:\"namespace=\"`\n\n\tapps map[string]App\n\n\t// failedApps is a map of apps that failed to provision with their underlying error.\n\tfailedApps   map[string]error\n\tstorage      certmagic.Storage\n\teventEmitter eventEmitter\n\n\tcancelFunc context.CancelCauseFunc\n\n\t// fileSystems is a dict of fileSystems that will later be loaded from and added to.\n\tfileSystems FileSystems\n}\n\n// App is a thing that Caddy runs.\ntype App interface {\n\tStart() error\n\tStop() error\n}\n\n// Run runs the given config, replacing any existing config.\nfunc Run(cfg *Config) error {\n\tcfgJSON, err := json.Marshal(cfg)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn Load(cfgJSON, true)\n}\n\n// Load loads the given config JSON and runs it only\n// if it is different from the current config or\n// forceReload is true.\nfunc Load(cfgJSON []byte, forceReload bool) error {\n\tif err := notify.Reloading(); err != nil {\n\t\tLog().Error(\"unable to notify service manager of reloading state\", zap.Error(err))\n\t}\n\n\t// after reload, notify system of success or, if\n\t// failure, update with status (error message)\n\tvar err error\n\tdefer func() {\n\t\tif err != nil {\n\t\t\tif notifyErr := notify.Error(err, 0); notifyErr != nil {\n\t\t\t\tLog().Error(\"unable to notify to service manager of reload error\",\n\t\t\t\t\tzap.Error(notifyErr),\n\t\t\t\t\tzap.String(\"reload_err\", err.Error()))\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t\tif err := notify.Ready(); err != nil {\n\t\t\tLog().Error(\"unable to notify to service manager of ready state\", zap.Error(err))\n\t\t}\n\t}()\n\n\terr = changeConfig(http.MethodPost, \"/\"+rawConfigKey, cfgJSON, \"\", forceReload)\n\tif errors.Is(err, errSameConfig) {\n\t\terr = nil // not really an error\n\t}\n\n\treturn err\n}\n\n// changeConfig changes the current config (rawCfg) according to the\n// method, 
traversed via the given path, and uses the given input as\n// the new value (if applicable; i.e. \"DELETE\" doesn't have an input).\n// If the resulting config is the same as the previous, no reload will\n// occur unless forceReload is true. If the config is unchanged and not\n// forcefully reloaded, then errSameConfig is returned. This function\n// is safe for concurrent use.\n// The ifMatchHeader can optionally be given a string of the format:\n//\n//\t\"<path> <hash>\"\n//\n// where <path> is the absolute path in the config and <hash> is the expected hash of\n// the config at that path. If the hash in the ifMatchHeader doesn't match\n// the hash of the config, then an APIError with status 412 will be returned.\nfunc changeConfig(method, path string, input []byte, ifMatchHeader string, forceReload bool) error {\n\tswitch method {\n\tcase http.MethodGet,\n\t\thttp.MethodHead,\n\t\thttp.MethodOptions,\n\t\thttp.MethodConnect,\n\t\thttp.MethodTrace:\n\t\treturn fmt.Errorf(\"method not allowed\")\n\t}\n\n\trawCfgMu.Lock()\n\tdefer rawCfgMu.Unlock()\n\n\tif ifMatchHeader != \"\" {\n\t\t// expect the first and last character to be quotes\n\t\tif len(ifMatchHeader) < 2 || ifMatchHeader[0] != '\"' || ifMatchHeader[len(ifMatchHeader)-1] != '\"' {\n\t\t\treturn APIError{\n\t\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\t\tErr:        fmt.Errorf(\"malformed If-Match header; expect quoted string\"),\n\t\t\t}\n\t\t}\n\n\t\t// read out the parts\n\t\tparts := strings.Fields(ifMatchHeader[1 : len(ifMatchHeader)-1])\n\t\tif len(parts) != 2 {\n\t\t\treturn APIError{\n\t\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\t\tErr:        fmt.Errorf(\"malformed If-Match header; expect format \\\"<path> <hash>\\\"\"),\n\t\t\t}\n\t\t}\n\n\t\t// get the current hash of the config\n\t\t// at the given path\n\t\thash := etagHasher()\n\t\terr := unsyncedConfigAccess(http.MethodGet, parts[0], nil, hash)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif 
hex.EncodeToString(hash.Sum(nil)) != parts[1] {\n\t\t\treturn APIError{\n\t\t\t\tHTTPStatus: http.StatusPreconditionFailed,\n\t\t\t\tErr:        fmt.Errorf(\"If-Match header did not match current config hash\"),\n\t\t\t}\n\t\t}\n\t}\n\n\terr := unsyncedConfigAccess(method, path, input, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// the mutation is complete, so encode the entire config as JSON\n\tnewCfg, err := json.Marshal(rawCfg[rawConfigKey])\n\tif err != nil {\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"encoding new config: %v\", err),\n\t\t}\n\t}\n\n\t// if nothing changed, no need to do a whole reload unless the client forces it\n\tif !forceReload && bytes.Equal(rawCfgJSON, newCfg) {\n\t\tLog().Info(\"config is unchanged\")\n\t\treturn errSameConfig\n\t}\n\n\t// find any IDs in this config and index them\n\tidx := make(map[string]string)\n\terr = indexConfigObjects(rawCfg[rawConfigKey], \"/\"+rawConfigKey, idx)\n\tif err != nil {\n\t\tif len(rawCfgJSON) > 0 {\n\t\t\tvar oldCfg any\n\t\t\terr2 := json.Unmarshal(rawCfgJSON, &oldCfg)\n\t\t\tif err2 != nil {\n\t\t\t\terr = fmt.Errorf(\"%v; additionally, restoring old config: %v\", err, err2)\n\t\t\t}\n\t\t\trawCfg[rawConfigKey] = oldCfg\n\t\t} else {\n\t\t\trawCfg[rawConfigKey] = nil\n\t\t}\n\t\treturn APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"indexing config: %v\", err),\n\t\t}\n\t}\n\n\t// load this new config; if it fails, we need to revert to\n\t// our old representation of caddy's actual config\n\terr = unsyncedDecodeAndRun(newCfg, true)\n\tif err != nil {\n\t\tif len(rawCfgJSON) > 0 {\n\t\t\t// restore old config state to keep it consistent\n\t\t\t// with what caddy is still running; we need to\n\t\t\t// unmarshal it again because it's likely that\n\t\t\t// pointers deep in our rawCfg map were modified\n\t\t\tvar oldCfg any\n\t\t\terr2 := json.Unmarshal(rawCfgJSON, &oldCfg)\n\t\t\tif err2 != nil {\n\t\t\t\terr 
= fmt.Errorf(\"%v; additionally, restoring old config: %v\", err, err2)\n\t\t\t}\n\t\t\trawCfg[rawConfigKey] = oldCfg\n\t\t} else {\n\t\t\trawCfg[rawConfigKey] = nil\n\t\t}\n\n\t\treturn fmt.Errorf(\"loading new config: %v\", err)\n\t}\n\n\t// success, so update our stored copy of the encoded\n\t// config to keep it consistent with what caddy is now\n\t// running (storing an encoded copy is not strictly\n\t// necessary, but avoids an extra json.Marshal for\n\t// each config change)\n\trawCfgJSON = newCfg\n\trawCfgIndex = idx\n\n\treturn nil\n}\n\n// readConfig traverses the current config to path\n// and writes its JSON encoding to out.\nfunc readConfig(path string, out io.Writer) error {\n\trawCfgMu.RLock()\n\tdefer rawCfgMu.RUnlock()\n\treturn unsyncedConfigAccess(http.MethodGet, path, nil, out)\n}\n\n// indexConfigObjects recursively searches ptr for object fields named\n// \"@id\" and maps that ID value to the full configPath in the index.\n// This function is NOT safe for concurrent access; obtain a write lock\n// on currentCtxMu.\nfunc indexConfigObjects(ptr any, configPath string, index map[string]string) error {\n\tswitch val := ptr.(type) {\n\tcase map[string]any:\n\t\tfor k, v := range val {\n\t\t\tif k == idKey {\n\t\t\t\tvar idStr string\n\t\t\t\tswitch idVal := v.(type) {\n\t\t\t\tcase string:\n\t\t\t\t\tidStr = idVal\n\t\t\t\tcase float64: // all JSON numbers decode as float64\n\t\t\t\t\tidStr = fmt.Sprintf(\"%v\", idVal)\n\t\t\t\tdefault:\n\t\t\t\t\treturn fmt.Errorf(\"%s: %s field must be a string or number\", configPath, idKey)\n\t\t\t\t}\n\t\t\t\tif existingPath, ok := index[idStr]; ok {\n\t\t\t\t\treturn fmt.Errorf(\"duplicate ID '%s' found at %s and %s\", idStr, existingPath, configPath)\n\t\t\t\t}\n\t\t\t\tindex[idStr] = configPath\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t// traverse this object property recursively\n\t\t\terr := indexConfigObjects(val[k], path.Join(configPath, k), index)\n\t\t\tif err != nil {\n\t\t\t\treturn 
err\n\t\t\t}\n\t\t}\n\tcase []any:\n\t\t// traverse each element of the array recursively\n\t\tfor i := range val {\n\t\t\terr := indexConfigObjects(val[i], path.Join(configPath, strconv.Itoa(i)), index)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// unsyncedDecodeAndRun removes any meta fields (like @id tags)\n// from cfgJSON, decodes the result into a *Config, and runs\n// it as the new config, replacing any other current config.\n// It does NOT update the raw config state, as this is a\n// lower-level function; most callers will want to use Load\n// instead. A write lock on rawCfgMu is required! If\n// allowPersist is false, it will not be persisted to disk,\n// even if it is configured to.\nfunc unsyncedDecodeAndRun(cfgJSON []byte, allowPersist bool) error {\n\t// remove any @id fields from the JSON, which would cause\n\t// loading to break since the field wouldn't be recognized\n\tstrippedCfgJSON := RemoveMetaFields(cfgJSON)\n\n\tvar newCfg *Config\n\terr := StrictUnmarshalJSON(strippedCfgJSON, &newCfg)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// prevent recursive config loads; that is a user error, and\n\t// although frequent config loads should be safe, we cannot\n\t// guarantee that in the presence of third party plugins, nor\n\t// do we want this error to go unnoticed (we assume it was a\n\t// pulled config if we're not allowed to persist it)\n\tif !allowPersist &&\n\t\tnewCfg != nil &&\n\t\tnewCfg.Admin != nil &&\n\t\tnewCfg.Admin.Config != nil &&\n\t\tnewCfg.Admin.Config.LoadRaw != nil &&\n\t\tnewCfg.Admin.Config.LoadDelay <= 0 {\n\t\treturn fmt.Errorf(\"recursive config loading detected: pulled configs cannot pull other configs without positive load_delay\")\n\t}\n\n\t// run the new config and start all its apps\n\tctx, err := run(newCfg, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// swap old context (including its config) with the new one\n\tcurrentCtxMu.Lock()\n\toldCtx := currentCtx\n\tcurrentCtx 
= ctx\n\tcurrentCtxMu.Unlock()\n\n\t// Stop, Cleanup each old app\n\tunsyncedStop(oldCtx)\n\n\t// autosave a non-nil config, if not disabled\n\tif allowPersist &&\n\t\tnewCfg != nil &&\n\t\t(newCfg.Admin == nil ||\n\t\t\tnewCfg.Admin.Config == nil ||\n\t\t\tnewCfg.Admin.Config.Persist == nil ||\n\t\t\t*newCfg.Admin.Config.Persist) {\n\t\tdir := filepath.Dir(ConfigAutosavePath)\n\t\terr := os.MkdirAll(dir, 0o700)\n\t\tif err != nil {\n\t\t\tLog().Error(\"unable to create folder for config autosave\",\n\t\t\t\tzap.String(\"dir\", dir),\n\t\t\t\tzap.Error(err))\n\t\t} else {\n\t\t\terr := os.WriteFile(ConfigAutosavePath, cfgJSON, 0o600)\n\t\t\tif err == nil {\n\t\t\t\tLog().Info(\"autosaved config (load with --resume flag)\", zap.String(\"file\", ConfigAutosavePath))\n\t\t\t} else {\n\t\t\t\tLog().Error(\"unable to autosave config\",\n\t\t\t\t\tzap.String(\"file\", ConfigAutosavePath),\n\t\t\t\t\tzap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// run runs newCfg and starts all its apps if\n// start is true. If any errors happen, cleanup\n// is performed if any modules were provisioned;\n// apps that were started already will be stopped,\n// so this function should not leak resources if\n// an error is returned. 
However, if no error is\n// returned and start == false, you should cancel\n// the config if you are not going to start it,\n// so that each provisioned module will be\n// cleaned up.\n//\n// This is a low-level function; most callers\n// will want to use Run instead, which also\n// updates the config's raw state.\nfunc run(newCfg *Config, start bool) (Context, error) {\n\tctx, err := provisionContext(newCfg, start)\n\tif err != nil {\n\t\tglobalMetrics.configSuccess.Set(0)\n\t\treturn ctx, err\n\t}\n\n\tif !start {\n\t\treturn ctx, nil\n\t}\n\n\tdefer func() {\n\t\t// if newCfg fails to start completely, clean up the already provisioned modules\n\t\t// partially copied from provisionContext\n\t\tif err != nil {\n\t\t\tglobalMetrics.configSuccess.Set(0)\n\t\t\tctx.cfg.cancelFunc(fmt.Errorf(\"configuration start error: %w\", err))\n\n\t\t\tif currentCtx.cfg != nil {\n\t\t\t\tcertmagic.Default.Storage = currentCtx.cfg.storage\n\t\t\t}\n\t\t}\n\t}()\n\n\t// Provision any admin routers which may need to access\n\t// some of the other apps at runtime\n\terr = ctx.cfg.Admin.provisionAdminRouters(ctx)\n\tif err != nil {\n\t\treturn ctx, err\n\t}\n\n\t// Start\n\terr = func() error {\n\t\tstarted := make([]string, 0, len(ctx.cfg.apps))\n\t\tfor name, a := range ctx.cfg.apps {\n\t\t\terr := a.Start()\n\t\t\tif err != nil {\n\t\t\t\t// an app failed to start, so we need to stop\n\t\t\t\t// all other apps that were already started\n\t\t\t\tfor _, otherAppName := range started {\n\t\t\t\t\terr2 := ctx.cfg.apps[otherAppName].Stop()\n\t\t\t\t\tif err2 != nil {\n\t\t\t\t\t\terr = fmt.Errorf(\"%v; additionally, aborting app %s: %v\",\n\t\t\t\t\t\t\terr, otherAppName, err2)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn fmt.Errorf(\"%s app module: start: %v\", name, err)\n\t\t\t}\n\t\t\tstarted = append(started, name)\n\t\t}\n\t\treturn nil\n\t}()\n\tif err != nil {\n\t\treturn ctx, 
err\n\t}\n\tglobalMetrics.configSuccess.Set(1)\n\tglobalMetrics.configSuccessTime.SetToCurrentTime()\n\n\t// TODO: This event is experimental and subject to change.\n\tctx.emitEvent(\"started\", nil)\n\n\t// now that the user's config is running, finish setting up anything else,\n\t// such as remote admin endpoint, config loader, etc.\n\terr = finishSettingUp(ctx, ctx.cfg)\n\treturn ctx, err\n}\n\n// provisionContext creates a new context from the given configuration and provisions\n// storage and apps.\n// If `newCfg` is nil a new empty configuration will be created.\n// If `replaceAdminServer` is true any currently active admin server will be replaced\n// with a new admin server based on the provided configuration.\nfunc provisionContext(newCfg *Config, replaceAdminServer bool) (Context, error) {\n\t// because we will need to roll back any state\n\t// modifications if this function errors, we\n\t// keep a single error value and scope all\n\t// sub-operations to their own functions to\n\t// ensure this error value does not get\n\t// overridden or missed when it should have\n\t// been set by a short assignment\n\tvar err error\n\n\tif newCfg == nil {\n\t\tnewCfg = new(Config)\n\t}\n\n\t// create a context within which to load\n\t// modules - essentially our new config's\n\t// execution environment; be sure that\n\t// cleanup occurs when we return if there\n\t// was an error; if no error, it will get\n\t// cleaned up on next config cycle\n\tctx, cancelCause := NewContextWithCause(Context{Context: context.Background(), cfg: newCfg})\n\tdefer func() {\n\t\tif err != nil {\n\t\t\tglobalMetrics.configSuccess.Set(0)\n\t\t\t// if there were any errors during startup,\n\t\t\t// we should cancel the new context we created\n\t\t\t// since the associated config won't be used;\n\t\t\t// this will cause all modules that were newly\n\t\t\t// provisioned to clean themselves up\n\t\t\tcancelCause(fmt.Errorf(\"configuration error: %w\", err))\n\n\t\t\t// also undo any other state 
changes we made\n\t\t\tif currentCtx.cfg != nil {\n\t\t\t\tcertmagic.Default.Storage = currentCtx.cfg.storage\n\t\t\t}\n\t\t}\n\t}()\n\tnewCfg.cancelFunc = cancelCause // clean up later\n\n\t// set up logging before anything bad happens\n\tif newCfg.Logging == nil {\n\t\tnewCfg.Logging = new(Logging)\n\t}\n\terr = newCfg.Logging.openLogs(ctx)\n\tif err != nil {\n\t\treturn ctx, err\n\t}\n\n\t// create the new filesystem map\n\tnewCfg.fileSystems = &filesystems.FileSystemMap{}\n\n\t// prepare the new config for use\n\tnewCfg.apps = make(map[string]App)\n\tnewCfg.failedApps = make(map[string]error)\n\n\t// set up global storage and make it CertMagic's default storage, too\n\terr = func() error {\n\t\tif newCfg.StorageRaw != nil {\n\t\t\tval, err := ctx.LoadModule(newCfg, \"StorageRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading storage module: %v\", err)\n\t\t\t}\n\t\t\tstor, err := val.(StorageConverter).CertMagicStorage()\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"creating storage value: %v\", err)\n\t\t\t}\n\t\t\tnewCfg.storage = stor\n\t\t}\n\n\t\tif newCfg.storage == nil {\n\t\t\tnewCfg.storage = DefaultStorage\n\t\t}\n\t\tcertmagic.Default.Storage = newCfg.storage\n\n\t\treturn nil\n\t}()\n\tif err != nil {\n\t\treturn ctx, err\n\t}\n\n\t// start the admin endpoint (and stop any prior one)\n\tif replaceAdminServer {\n\t\terr = replaceLocalAdminServer(newCfg, ctx)\n\t\tif err != nil {\n\t\t\treturn ctx, fmt.Errorf(\"starting caddy administration endpoint: %v\", err)\n\t\t}\n\t}\n\n\t// Load and Provision each app and their submodules\n\terr = func() error {\n\t\tfor appName := range newCfg.AppsRaw {\n\t\t\tif _, err := ctx.App(appName); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}()\n\treturn ctx, err\n}\n\n// ProvisionContext creates a new context from the configuration and provisions storage\n// and app modules.\n// The function is intended for testing and advanced use cases only, typically `Run` should 
be\n// used to ensure a fully functional caddy instance.\n// EXPERIMENTAL: While this is public, the interface and implementation details of this function may change.\nfunc ProvisionContext(newCfg *Config) (Context, error) {\n\treturn provisionContext(newCfg, false)\n}\n\n// finishSettingUp should be run after all apps have successfully started.\nfunc finishSettingUp(ctx Context, cfg *Config) error {\n\t// establish this server's identity (only after apps are loaded\n\t// so that cert management of this endpoint doesn't prevent user's\n\t// servers from starting which likely also use HTTP/HTTPS ports;\n\t// but before remote management which may depend on these creds)\n\terr := manageIdentity(ctx, cfg)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"provisioning remote admin endpoint: %v\", err)\n\t}\n\n\t// replace any remote admin endpoint\n\terr = replaceRemoteAdminServer(ctx, cfg)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"provisioning remote admin endpoint: %v\", err)\n\t}\n\n\t// if dynamic config is requested, set that up and run it\n\tif cfg != nil && cfg.Admin != nil && cfg.Admin.Config != nil && cfg.Admin.Config.LoadRaw != nil {\n\t\tval, err := ctx.LoadModule(cfg.Admin.Config, \"LoadRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading config loader module: %s\", err)\n\t\t}\n\n\t\tlogger := Log().Named(\"config_loader\").With(\n\t\t\tzap.String(\"module\", val.(Module).CaddyModule().ID.Name()),\n\t\t\tzap.Int(\"load_delay\", int(cfg.Admin.Config.LoadDelay)))\n\n\t\trunLoadedConfig := func(config []byte) error {\n\t\t\tlogger.Info(\"applying dynamically-loaded config\")\n\t\t\terr := changeConfig(http.MethodPost, \"/\"+rawConfigKey, config, \"\", false)\n\t\t\tif errors.Is(err, errSameConfig) {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\tlogger.Error(\"failed to run dynamically-loaded config\", zap.Error(err))\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tlogger.Info(\"successfully applied dynamically-loaded config\")\n\t\t\treturn 
nil\n\t\t}\n\n\t\tif cfg.Admin.Config.LoadDelay > 0 {\n\t\t\tgo func() {\n\t\t\t\t// the loop is here to iterate ONLY if there is an error, a no-op config load,\n\t\t\t\t// or an unchanged config; in which case we simply wait the delay and try again\n\t\t\t\tfor {\n\t\t\t\t\ttimer := time.NewTimer(time.Duration(cfg.Admin.Config.LoadDelay))\n\t\t\t\t\tselect {\n\t\t\t\t\tcase <-timer.C:\n\t\t\t\t\t\tloadedConfig, err := val.(ConfigLoader).LoadConfig(ctx)\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\tlogger.Error(\"failed loading dynamic config; will retry\", zap.Error(err))\n\t\t\t\t\t\t\tcontinue\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif loadedConfig == nil {\n\t\t\t\t\t\t\tlogger.Info(\"dynamically-loaded config was nil; will retry\")\n\t\t\t\t\t\t\tcontinue\n\t\t\t\t\t\t}\n\t\t\t\t\t\terr = runLoadedConfig(loadedConfig)\n\t\t\t\t\t\tif errors.Is(err, errSameConfig) {\n\t\t\t\t\t\t\tlogger.Info(\"dynamically-loaded config was unchanged; will retry\")\n\t\t\t\t\t\t\tcontinue\n\t\t\t\t\t\t}\n\t\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\t\tif !timer.Stop() {\n\t\t\t\t\t\t\t<-timer.C\n\t\t\t\t\t\t}\n\t\t\t\t\t\tlogger.Info(\"stopping dynamic config loading\")\n\t\t\t\t\t}\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}()\n\t\t} else {\n\t\t\t// if no LoadDelay is provided, will load config synchronously\n\t\t\tloadedConfig, err := val.(ConfigLoader).LoadConfig(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading dynamic config from %T: %v\", val, err)\n\t\t\t}\n\t\t\t// do this in a goroutine so current config can finish being loaded; otherwise deadlock\n\t\t\tgo func() { _ = runLoadedConfig(loadedConfig) }()\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// ConfigLoader is a type that can load a Caddy config. 
If\n// the return value is non-nil, it must be valid Caddy JSON;\n// if nil or with non-nil error, it is considered to be a\n// no-op load and may be retried later.\ntype ConfigLoader interface {\n\tLoadConfig(Context) ([]byte, error)\n}\n\n// Stop stops running the current configuration.\n// It is the antithesis of Run(). This function\n// will log any errors that occur during the\n// stopping of individual apps and continue to\n// stop the others. Stop should only be called\n// if not replacing with a new config.\nfunc Stop() error {\n\tcurrentCtxMu.RLock()\n\tctx := currentCtx\n\tcurrentCtxMu.RUnlock()\n\n\trawCfgMu.Lock()\n\tunsyncedStop(ctx)\n\n\tcurrentCtxMu.Lock()\n\tcurrentCtx = Context{}\n\tcurrentCtxMu.Unlock()\n\n\trawCfgJSON = nil\n\trawCfgIndex = nil\n\trawCfg[rawConfigKey] = nil\n\trawCfgMu.Unlock()\n\n\treturn nil\n}\n\n// unsyncedStop stops ctx from running, but has\n// no locking around ctx. It is a no-op if ctx has a\n// nil cfg. If any app returns an error when stopping,\n// it is logged and the function continues stopping\n// the next app. 
This function assumes all apps in\n// ctx were successfully started first.\n//\n// A lock on rawCfgMu is required, even though this\n// function does not access rawCfg; that lock\n// synchronizes the stop/start of apps.\nfunc unsyncedStop(ctx Context) {\n\tif ctx.cfg == nil {\n\t\treturn\n\t}\n\n\t// TODO: This event is experimental and subject to change.\n\tctx.emitEvent(\"stopping\", nil)\n\n\t// stop each app\n\tfor name, a := range ctx.cfg.apps {\n\t\terr := a.Stop()\n\t\tif err != nil {\n\t\t\tlog.Printf(\"[ERROR] stop %s: %v\", name, err)\n\t\t}\n\t}\n\n\t// clean up all modules\n\tctx.cfg.cancelFunc(fmt.Errorf(\"stopping apps\"))\n}\n\n// Validate loads, provisions, and validates\n// cfg, but does not start running it.\nfunc Validate(cfg *Config) error {\n\t_, err := run(cfg, false)\n\tif err == nil {\n\t\tcfg.cancelFunc(fmt.Errorf(\"validation complete\")) // call Cleanup on all modules\n\t}\n\treturn err\n}\n\n// exitProcess exits the process as gracefully as possible,\n// but it always exits, even if there are errors doing so.\n// It stops all apps, cleans up external locks, removes any\n// PID file, and shuts down admin endpoint(s) in a goroutine.\n// Errors are logged along the way, and an appropriate exit\n// code is emitted.\nfunc exitProcess(ctx context.Context, logger *zap.Logger) {\n\t// let the rest of the program know we're quitting; only do it once\n\tif !atomic.CompareAndSwapInt32(exiting, 0, 1) {\n\t\treturn\n\t}\n\n\t// give the OS or service/process manager our 2 weeks' notice: we quit\n\tif err := notify.Stopping(); err != nil {\n\t\tLog().Error(\"unable to notify service manager of stopping state\", zap.Error(err))\n\t}\n\n\tif logger == nil {\n\t\tlogger = Log()\n\t}\n\tlogger.Warn(\"exiting; byeee!! 
👋\")\n\n\texitCode := ExitCodeSuccess\n\tlastContext := ActiveContext()\n\n\t// stop all apps\n\tif err := Stop(); err != nil {\n\t\tlogger.Error(\"failed to stop apps\", zap.Error(err))\n\t\texitCode = ExitCodeFailedQuit\n\t}\n\n\t// clean up certmagic locks\n\tcertmagic.CleanUpOwnLocks(ctx, logger)\n\n\t// remove pidfile\n\tif pidfile != \"\" {\n\t\terr := os.Remove(pidfile)\n\t\tif err != nil {\n\t\t\tlogger.Error(\"cleaning up PID file:\",\n\t\t\t\tzap.String(\"pidfile\", pidfile),\n\t\t\t\tzap.Error(err))\n\t\t\texitCode = ExitCodeFailedQuit\n\t\t}\n\t}\n\n\t// execute any process-exit callbacks\n\tfor _, exitFunc := range lastContext.exitFuncs {\n\t\texitFunc(ctx)\n\t}\n\texitFuncsMu.Lock()\n\tfor _, exitFunc := range exitFuncs {\n\t\texitFunc(ctx)\n\t}\n\texitFuncsMu.Unlock()\n\n\t// shut down admin endpoint(s) in goroutines so that\n\t// if this function was called from an admin handler,\n\t// it has a chance to return gracefully\n\t// use goroutine so that we can finish responding to API request\n\tgo func() {\n\t\tdefer func() {\n\t\t\tlogger = logger.With(zap.Int(\"exit_code\", exitCode))\n\t\t\tif exitCode == ExitCodeSuccess {\n\t\t\t\tlogger.Info(\"shutdown complete\")\n\t\t\t} else {\n\t\t\t\tlogger.Error(\"unclean shutdown\")\n\t\t\t}\n\t\t\tos.Exit(exitCode)\n\t\t}()\n\n\t\tif remoteAdminServer != nil {\n\t\t\terr := stopAdminServer(remoteAdminServer)\n\t\t\tif err != nil {\n\t\t\t\texitCode = ExitCodeFailedQuit\n\t\t\t\tlogger.Error(\"failed to stop remote admin server gracefully\", zap.Error(err))\n\t\t\t}\n\t\t}\n\t\tif localAdminServer != nil {\n\t\t\terr := stopAdminServer(localAdminServer)\n\t\t\tif err != nil {\n\t\t\t\texitCode = ExitCodeFailedQuit\n\t\t\t\tlogger.Error(\"failed to stop local admin server gracefully\", zap.Error(err))\n\t\t\t}\n\t\t}\n\t}()\n}\n\nvar exiting = new(int32) // accessed atomically\n\n// Exiting returns true if the process is exiting.\n// EXPERIMENTAL API: subject to change or removal.\nfunc Exiting() bool { 
return atomic.LoadInt32(exiting) == 1 }\n\n// OnExit registers a callback to invoke during process exit.\n// This registration is PROCESS-GLOBAL, meaning that each\n// function should only be registered once forever, NOT once\n// per config load (etc).\n//\n// EXPERIMENTAL API: subject to change or removal.\nfunc OnExit(f func(context.Context)) {\n\texitFuncsMu.Lock()\n\texitFuncs = append(exitFuncs, f)\n\texitFuncsMu.Unlock()\n}\n\nvar (\n\texitFuncs   []func(context.Context)\n\texitFuncsMu sync.Mutex\n)\n\n// Duration can be an integer or a string. An integer is\n// interpreted as nanoseconds. If a string, it is a Go\n// time.Duration value such as `300ms`, `1.5h`, or `2h45m`;\n// valid units are `ns`, `us`/`µs`, `ms`, `s`, `m`, `h`, and `d`.\ntype Duration time.Duration\n\n// UnmarshalJSON satisfies json.Unmarshaler.\nfunc (d *Duration) UnmarshalJSON(b []byte) error {\n\tif len(b) == 0 {\n\t\treturn io.EOF\n\t}\n\tvar dur time.Duration\n\tvar err error\n\tif b[0] == byte('\"') && b[len(b)-1] == byte('\"') {\n\t\tdur, err = ParseDuration(strings.Trim(string(b), `\"`))\n\t} else {\n\t\terr = json.Unmarshal(b, &dur)\n\t}\n\t*d = Duration(dur)\n\treturn err\n}\n\n// ParseDuration parses a duration string, adding\n// support for the \"d\" unit meaning number of days,\n// where a day is assumed to be 24h. 
The maximum\n// input string length is 1024.\nfunc ParseDuration(s string) (time.Duration, error) {\n\tif len(s) > 1024 {\n\t\treturn 0, fmt.Errorf(\"parsing duration: input string too long\")\n\t}\n\tvar inNumber bool\n\tvar numStart int\n\tfor i := 0; i < len(s); i++ {\n\t\tch := s[i]\n\t\tif ch == 'd' {\n\t\t\tdaysStr := s[numStart:i]\n\t\t\tdays, err := strconv.ParseFloat(daysStr, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\thours := days * 24.0\n\t\t\thoursStr := strconv.FormatFloat(hours, 'f', -1, 64)\n\t\t\ts = s[:numStart] + hoursStr + \"h\" + s[i+1:]\n\t\t\ti--\n\t\t\tcontinue\n\t\t}\n\t\tif !inNumber {\n\t\t\tnumStart = i\n\t\t}\n\t\tinNumber = (ch >= '0' && ch <= '9') || ch == '.' || ch == '-' || ch == '+'\n\t}\n\treturn time.ParseDuration(s)\n}\n\n// InstanceID returns the UUID for this instance, and generates one if it\n// does not already exist. The UUID is stored in the local data directory,\n// regardless of storage configuration, since each instance is intended to\n// have its own unique ID.\nfunc InstanceID() (uuid.UUID, error) {\n\tappDataDir := AppDataDir()\n\tuuidFilePath := filepath.Join(appDataDir, \"instance.uuid\")\n\tuuidFileBytes, err := os.ReadFile(uuidFilePath)\n\tif errors.Is(err, fs.ErrNotExist) {\n\t\tuuid, err := uuid.NewRandom()\n\t\tif err != nil {\n\t\t\treturn uuid, err\n\t\t}\n\t\terr = os.MkdirAll(appDataDir, 0o700)\n\t\tif err != nil {\n\t\t\treturn uuid, err\n\t\t}\n\t\terr = os.WriteFile(uuidFilePath, []byte(uuid.String()), 0o600)\n\t\treturn uuid, err\n\t} else if err != nil {\n\t\treturn [16]byte{}, err\n\t}\n\treturn uuid.ParseBytes(uuidFileBytes)\n}\n\n// CustomVersion is an optional string that overrides Caddy's\n// reported version. It can be helpful when downstream packagers\n// need to manually set Caddy's version. 
If no other version\n// information is available, the short form version (see\n// Version()) will be set to CustomVersion, and the full version\n// will include CustomVersion at the beginning.\n//\n// Set this variable during `go build` with `-ldflags`:\n//\n//\t-ldflags '-X github.com/caddyserver/caddy/v2.CustomVersion=v2.6.2'\n//\n// for example.\nvar CustomVersion string\n\n// CustomBinaryName is an optional string that overrides the root\n// command name from the default of \"caddy\". This is useful for\n// downstream projects that embed Caddy but use a different binary\n// name. Shell completions and help text will use this name instead\n// of \"caddy\".\n//\n// Set this variable during `go build` with `-ldflags`:\n//\n//\t-ldflags '-X github.com/caddyserver/caddy/v2.CustomBinaryName=my_custom_caddy'\n//\n// for example.\nvar CustomBinaryName string\n\n// CustomLongDescription is an optional string that overrides the\n// long description of the root Cobra command. This is useful for\n// downstream projects that embed Caddy but want different help\n// output.\n//\n// Set this variable in an init() function of a package that is\n// imported by your main:\n//\n//\tfunc init() {\n//\t    caddy.CustomLongDescription = \"My custom server based on Caddy...\"\n//\t}\n//\n// for example.\nvar CustomLongDescription string\n\n// Version returns the Caddy version in a simple/short form, and\n// a full version string. The short form will not have spaces and\n// is intended for User-Agent strings and similar, but may be\n// omitting valuable information. Note that Caddy must be compiled\n// in a special way to properly embed complete version information.\n// First this function tries to get the version from the embedded\n// build info provided by go.mod dependencies; then it tries to\n// get info from embedded VCS information, which requires having\n// built Caddy from a git repository. 
If no version is available,\n// this function returns \"(devel)\" because Go uses that, but for\n// the simple form we change it to \"unknown\". If still no version\n// is available (e.g. no VCS repo), then it will use CustomVersion;\n// CustomVersion is always prepended to the full version string.\n//\n// See relevant Go issues: https://github.com/golang/go/issues/29228\n// and https://github.com/golang/go/issues/50603.\n//\n// This function is experimental and subject to change or removal.\nfunc Version() (simple, full string) {\n\t// the currently-recommended way to build Caddy involves\n\t// building it as a dependency so we can extract version\n\t// information from go.mod tooling; once the upstream\n\t// Go issues are fixed, we should just be able to use\n\t// bi.Main... hopefully.\n\tvar module *debug.Module\n\tbi, ok := debug.ReadBuildInfo()\n\tif !ok {\n\t\tif CustomVersion != \"\" {\n\t\t\tfull = CustomVersion\n\t\t\tsimple = CustomVersion\n\t\t\treturn simple, full\n\t\t}\n\t\tfull = \"unknown\"\n\t\tsimple = \"unknown\"\n\t\treturn simple, full\n\t}\n\t// find the Caddy module in the dependency list\n\tfor _, dep := range bi.Deps {\n\t\tif dep.Path == ImportPath {\n\t\t\tmodule = dep\n\t\t\tbreak\n\t\t}\n\t}\n\tif module != nil {\n\t\tsimple, full = module.Version, module.Version\n\t\tif module.Sum != \"\" {\n\t\t\tfull += \" \" + module.Sum\n\t\t}\n\t\tif module.Replace != nil {\n\t\t\tfull += \" => \" + module.Replace.Path\n\t\t\tif module.Replace.Version != \"\" {\n\t\t\t\tsimple = module.Replace.Version + \"_custom\"\n\t\t\t\tfull += \"@\" + module.Replace.Version\n\t\t\t}\n\t\t\tif module.Replace.Sum != \"\" {\n\t\t\t\tfull += \" \" + module.Replace.Sum\n\t\t\t}\n\t\t}\n\t}\n\n\tif full == \"\" {\n\t\tvar vcsRevision string\n\t\tvar vcsTime time.Time\n\t\tvar vcsModified bool\n\t\tfor _, setting := range bi.Settings {\n\t\t\tswitch setting.Key {\n\t\t\tcase \"vcs.revision\":\n\t\t\t\tvcsRevision = setting.Value\n\t\t\tcase 
\"vcs.time\":\n\t\t\t\tvcsTime, _ = time.Parse(time.RFC3339, setting.Value)\n\t\t\tcase \"vcs.modified\":\n\t\t\t\tvcsModified, _ = strconv.ParseBool(setting.Value)\n\t\t\t}\n\t\t}\n\n\t\tif vcsRevision != \"\" {\n\t\t\tvar modified string\n\t\t\tif vcsModified {\n\t\t\t\tmodified = \"+modified\"\n\t\t\t}\n\t\t\tfull = fmt.Sprintf(\"%s%s (%s)\", vcsRevision, modified, vcsTime.Format(time.RFC822))\n\t\t\tsimple = vcsRevision\n\n\t\t\t// use short checksum for simple, if hex-only\n\t\t\tif _, err := hex.DecodeString(simple); err == nil {\n\t\t\t\tsimple = simple[:8]\n\t\t\t}\n\n\t\t\t// append date to simple since it can be convenient\n\t\t\t// to know the commit date as part of the version\n\t\t\tif !vcsTime.IsZero() {\n\t\t\t\tsimple += \"-\" + vcsTime.Format(\"20060102\")\n\t\t\t}\n\t\t}\n\t}\n\n\tif full == \"\" {\n\t\tif CustomVersion != \"\" {\n\t\t\tfull = CustomVersion\n\t\t} else {\n\t\t\tfull = \"unknown\"\n\t\t}\n\t} else if CustomVersion != \"\" {\n\t\tfull = CustomVersion + \" \" + full\n\t}\n\n\tif simple == \"\" || simple == \"(devel)\" {\n\t\tif CustomVersion != \"\" {\n\t\t\tsimple = CustomVersion\n\t\t} else {\n\t\t\tsimple = \"unknown\"\n\t\t}\n\t}\n\n\treturn simple, full\n}\n\n// Event represents something that has happened or is happening.\n// An Event value is not synchronized, so it should be copied if\n// being used in goroutines.\n//\n// EXPERIMENTAL: Events are subject to change.\ntype Event struct {\n\t// If non-nil, the event has been aborted, meaning\n\t// propagation has stopped to other handlers and\n\t// the code should stop what it was doing. Emitters\n\t// may choose to use this as a signal to adjust their\n\t// code path appropriately.\n\tAborted error\n\n\t// The data associated with the event. 
Usually the\n\t// original emitter will be the only one to set or\n\t// change these values, but the field is exported\n\t// so handlers can have full access if needed.\n\t// However, this map is not synchronized, so\n\t// handlers must not use this map directly in new\n\t// goroutines; instead, copy the map to use it in a\n\t// goroutine. Data may be nil.\n\tData map[string]any\n\n\tid     uuid.UUID\n\tts     time.Time\n\tname   string\n\torigin Module\n}\n\n// NewEvent creates a new event, but does not emit the event. To emit an\n// event, call Emit() on the current instance of the caddyevents app instead.\n//\n// EXPERIMENTAL: Subject to change.\nfunc NewEvent(ctx Context, name string, data map[string]any) (Event, error) {\n\tid, err := uuid.NewRandom()\n\tif err != nil {\n\t\treturn Event{}, fmt.Errorf(\"generating new event ID: %v\", err)\n\t}\n\tname = strings.ToLower(name)\n\treturn Event{\n\t\tData:   data,\n\t\tid:     id,\n\t\tts:     time.Now(),\n\t\tname:   name,\n\t\torigin: ctx.Module(),\n\t}, nil\n}\n\nfunc (e Event) ID() uuid.UUID        { return e.id }\nfunc (e Event) Timestamp() time.Time { return e.ts }\nfunc (e Event) Name() string         { return e.name }\nfunc (e Event) Origin() Module       { return e.origin } // Returns the module that originated the event. 
May be nil, usually if caddy core emits the event.\n\n// CloudEvent exports event e as a structure that, when\n// serialized as JSON, is compatible with the\n// CloudEvents spec.\nfunc (e Event) CloudEvent() CloudEvent {\n\tdataJSON, _ := json.Marshal(e.Data)\n\tvar source string\n\tif e.Origin() == nil {\n\t\tsource = \"caddy\"\n\t} else {\n\t\tsource = string(e.Origin().CaddyModule().ID)\n\t}\n\treturn CloudEvent{\n\t\tID:              e.id.String(),\n\t\tSource:          source,\n\t\tSpecVersion:     \"1.0\",\n\t\tType:            e.name,\n\t\tTime:            e.ts,\n\t\tDataContentType: \"application/json\",\n\t\tData:            dataJSON,\n\t}\n}\n\n// CloudEvent is a JSON-serializable structure that\n// is compatible with the CloudEvents specification.\n// See https://cloudevents.io.\n// EXPERIMENTAL: Subject to change.\ntype CloudEvent struct {\n\tID              string          `json:\"id\"`\n\tSource          string          `json:\"source\"`\n\tSpecVersion     string          `json:\"specversion\"`\n\tType            string          `json:\"type\"`\n\tTime            time.Time       `json:\"time\"`\n\tDataContentType string          `json:\"datacontenttype,omitempty\"`\n\tData            json.RawMessage `json:\"data,omitempty\"`\n}\n\n// ErrEventAborted cancels an event.\nvar ErrEventAborted = errors.New(\"event aborted\")\n\n// ActiveContext returns the currently-active context.\n// This function is experimental and might be changed\n// or removed in the future.\nfunc ActiveContext() Context {\n\tcurrentCtxMu.RLock()\n\tdefer currentCtxMu.RUnlock()\n\treturn currentCtx\n}\n\n// CtxKey is a value type for use with context.WithValue.\ntype CtxKey string\n\n// This group of variables pertains to the current configuration.\nvar (\n\t// currentCtx is the root context for the currently-running\n\t// configuration, which can be accessed through this value.\n\t// If the Config contained in this value is not nil, then\n\t// a config is currently 
active/running.\n\tcurrentCtx   Context\n\tcurrentCtxMu sync.RWMutex\n\n\t// rawCfg is the current, generic-decoded configuration;\n\t// we initialize it as a map with one field (\"config\")\n\t// to maintain parity with the API endpoint and to avoid\n\t// the special case of having to access/mutate the variable\n\t// directly without traversing into it.\n\trawCfg = map[string]any{\n\t\trawConfigKey: nil,\n\t}\n\n\t// rawCfgJSON is the JSON-encoded form of rawCfg. Keeping\n\t// this around avoids an extra Marshal call during changes.\n\trawCfgJSON []byte\n\n\t// rawCfgIndex is the map of user-assigned ID to expanded\n\t// path, for converting /id/ paths to /config/ paths.\n\trawCfgIndex map[string]string\n\n\t// rawCfgMu protects all the rawCfg fields and also\n\t// essentially synchronizes config changes/reloads.\n\trawCfgMu sync.RWMutex\n)\n\n// lastConfigFile and lastConfigAdapter remember the source config\n// file and adapter used when Caddy was started via the CLI \"run\" command.\n// These are consulted by the SIGUSR1 handler to attempt reloading from\n// the same source. They are intentionally not set for other entrypoints\n// such as \"caddy start\" or subcommands like file-server.\nvar (\n\tlastConfigMu      sync.RWMutex\n\tlastConfigFile    string\n\tlastConfigAdapter string\n)\n\n// reloadFromSourceFunc is the type of stored callback\n// which is called when we receive a SIGUSR1 signal.\ntype reloadFromSourceFunc func(file, adapter string) error\n\n// reloadFromSourceCallback is the stored callback\n// which is called when we receive a SIGUSR1 signal.\nvar reloadFromSourceCallback reloadFromSourceFunc\n\n// errReloadFromSourceUnavailable is returned when no reload-from-source callback is set.\nvar errReloadFromSourceUnavailable = errors.New(\"reload from source unavailable in this process\") //nolint:unused\n\n// SetLastConfig records the given source file and adapter as the\n// last-known external configuration source. 
Intended to be called\n// only when starting via \"caddy run --config <file> --adapter <adapter>\".\nfunc SetLastConfig(file, adapter string, fn reloadFromSourceFunc) {\n\tlastConfigMu.Lock()\n\tlastConfigFile = file\n\tlastConfigAdapter = adapter\n\treloadFromSourceCallback = fn\n\tlastConfigMu.Unlock()\n}\n\n// ClearLastConfigIfDifferent clears the recorded last-config if the provided\n// source file/adapter do not match the recorded last-config. If both srcFile\n// and srcAdapter are empty, the last-config is cleared.\nfunc ClearLastConfigIfDifferent(srcFile, srcAdapter string) {\n\tif (srcFile != \"\" || srcAdapter != \"\") && lastConfigMatches(srcFile, srcAdapter) {\n\t\treturn\n\t}\n\tSetLastConfig(\"\", \"\", nil)\n}\n\n// getLastConfig returns the last-known config file and adapter.\nfunc getLastConfig() (file, adapter string, fn reloadFromSourceFunc) {\n\tlastConfigMu.RLock()\n\tf, a, cb := lastConfigFile, lastConfigAdapter, reloadFromSourceCallback\n\tlastConfigMu.RUnlock()\n\treturn f, a, cb\n}\n\n// lastConfigMatches returns true if the provided source file and/or adapter\n// matches the recorded last-config. Matching rules (in priority order):\n// 1. If srcAdapter is provided and differs from the recorded adapter, no match.\n// 2. If srcFile exactly equals the recorded file, match.\n// 3. If both sides can be made absolute and equal, match.\n// 4. 
If basenames are equal, match.\nfunc lastConfigMatches(srcFile, srcAdapter string) bool {\n\tlf, la, _ := getLastConfig()\n\n\t// If adapter is provided, it must match.\n\tif srcAdapter != \"\" && srcAdapter != la {\n\t\treturn false\n\t}\n\n\t// Quick equality check.\n\tif srcFile == lf {\n\t\treturn true\n\t}\n\n\t// Try absolute path comparison.\n\tsAbs, sErr := filepath.Abs(srcFile)\n\tlAbs, lErr := filepath.Abs(lf)\n\tif sErr == nil && lErr == nil && sAbs == lAbs {\n\t\treturn true\n\t}\n\n\t// Final fallback: basename equality.\n\tif filepath.Base(srcFile) == filepath.Base(lf) {\n\t\treturn true\n\t}\n\n\treturn false\n}\n\n// errSameConfig is returned if the new config is the same\n// as the old one. This isn't usually an actual, actionable\n// error; it's mostly a sentinel value.\nvar errSameConfig = errors.New(\"config is unchanged\")\n\n// ImportPath is the package import path for Caddy core.\n// This identifier may be removed in the future.\nconst ImportPath = \"github.com/caddyserver/caddy/v2\"\n"
  },
  {
    "path": "caddy_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestParseDuration(t *testing.T) {\n\tconst day = 24 * time.Hour\n\tfor i, tc := range []struct {\n\t\tinput  string\n\t\texpect time.Duration\n\t}{\n\t\t{\n\t\t\tinput:  \"3h\",\n\t\t\texpect: 3 * time.Hour,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1d\",\n\t\t\texpect: day,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1d30m\",\n\t\t\texpect: day + 30*time.Minute,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1m2d\",\n\t\t\texpect: time.Minute + day*2,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1m2d30s\",\n\t\t\texpect: time.Minute + day*2 + 30*time.Second,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1d2d\",\n\t\t\texpect: 3 * day,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1.5d\",\n\t\t\texpect: time.Duration(1.5 * float64(day)),\n\t\t},\n\t\t{\n\t\t\tinput:  \"4m1.25d\",\n\t\t\texpect: 4*time.Minute + time.Duration(1.25*float64(day)),\n\t\t},\n\t\t{\n\t\t\tinput:  \"-1.25d12h\",\n\t\t\texpect: time.Duration(-1.25*float64(day)) - 12*time.Hour,\n\t\t},\n\t} {\n\t\tactual, err := ParseDuration(tc.input)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d ('%s'): Got error: %v\", i, tc.input, err)\n\t\t\tcontinue\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d ('%s'): Expected=%s Actual=%s\", i, tc.input, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestEvent_CloudEvent_NilOrigin(t *testing.T) {\n\tctx, _ := 
NewContext(Context{Context: context.Background()}) // module will be nil by default\n\tevent, err := NewEvent(ctx, \"started\", nil)\n\tif err != nil {\n\t\tt.Fatalf(\"NewEvent() error = %v\", err)\n\t}\n\n\t// This should not panic\n\tce := event.CloudEvent()\n\n\tif ce.Source != \"caddy\" {\n\t\tt.Errorf(\"Expected CloudEvent Source to be 'caddy', got '%s'\", ce.Source)\n\t}\n\tif ce.Type != \"started\" {\n\t\tt.Errorf(\"Expected CloudEvent Type to be 'started', got '%s'\", ce.Type)\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/adapter.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n)\n\n// Adapter adapts Caddyfile to Caddy JSON.\ntype Adapter struct {\n\tServerType ServerType\n}\n\n// Adapt converts the Caddyfile config in body to Caddy JSON.\nfunc (a Adapter) Adapt(body []byte, options map[string]any) ([]byte, []caddyconfig.Warning, error) {\n\tif a.ServerType == nil {\n\t\treturn nil, nil, fmt.Errorf(\"no server type\")\n\t}\n\tif options == nil {\n\t\toptions = make(map[string]any)\n\t}\n\n\tfilename, _ := options[\"filename\"].(string)\n\tif filename == \"\" {\n\t\tfilename = \"Caddyfile\"\n\t}\n\n\tserverBlocks, err := Parse(filename, body)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tcfg, warnings, err := a.ServerType.Setup(serverBlocks, options)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\t// lint check: see if input was properly formatted; sometimes messy files parse\n\t// successfully but result in logical errors (the Caddyfile is a bad format, I'm sorry)\n\tif warning, different := FormattingDifference(filename, body); different {\n\t\twarnings = append(warnings, warning)\n\t}\n\n\tresult, err := json.Marshal(cfg)\n\n\treturn result, warnings, err\n}\n\n// FormattingDifference returns a warning and true if the 
formatted version\n// is any different from the input; empty warning and false otherwise.\n// TODO: also perform this check on imported files\nfunc FormattingDifference(filename string, body []byte) (caddyconfig.Warning, bool) {\n\t// replace windows-style newlines to normalize comparison\n\tnormalizedBody := bytes.ReplaceAll(body, []byte(\"\\r\\n\"), []byte(\"\\n\"))\n\n\tformatted := Format(normalizedBody)\n\tif bytes.Equal(formatted, normalizedBody) {\n\t\treturn caddyconfig.Warning{}, false\n\t}\n\n\t// find where the difference is\n\tline := 1\n\tfor i, ch := range normalizedBody {\n\t\tif i >= len(formatted) || ch != formatted[i] {\n\t\t\tbreak\n\t\t}\n\t\tif ch == '\\n' {\n\t\t\tline++\n\t\t}\n\t}\n\treturn caddyconfig.Warning{\n\t\tFile:    filename,\n\t\tLine:    line,\n\t\tMessage: \"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies\",\n\t}, true\n}\n\n// Unmarshaler is a type that can unmarshal Caddyfile tokens to\n// set itself up for a JSON encoding. The goal of an unmarshaler\n// is not to set itself up for actual use, but to set itself up for\n// being marshaled into JSON. Caddyfile-unmarshaled values will not\n// be used directly; they will be encoded as JSON and then used from\n// that. Implementations _may_ be able to support multiple segments\n// (instances of their directive or batch of tokens); typically this\n// means wrapping parsing logic in a loop: `for d.Next() { ... }`.\n// More commonly, only a single segment is supported, so a simple\n// `d.Next()` at the start should be used to consume the module\n// identifier token (directive name, etc).\ntype Unmarshaler interface {\n\tUnmarshalCaddyfile(d *Dispenser) error\n}\n\n// ServerType is a type that can evaluate a Caddyfile and set up a caddy config.\ntype ServerType interface {\n\t// Setup takes the server blocks which contain tokens,\n\t// as well as options (e.g. 
CLI flags) and creates a\n\t// Caddy config, along with any warnings or an error.\n\tSetup([]ServerBlock, map[string]any) (*caddy.Config, []caddyconfig.Warning, error)\n}\n\n// UnmarshalModule instantiates a module with the given ID and invokes\n// UnmarshalCaddyfile on the new value using the immediate next segment\n// of d as input. In other words, d's next token should be the first\n// token of the module's Caddyfile input.\n//\n// This function is used when the next segment of Caddyfile tokens\n// belongs to another Caddy module. The returned value is often\n// type-asserted to the module's associated type for practical use\n// when setting up a config.\nfunc UnmarshalModule(d *Dispenser, moduleID string) (Unmarshaler, error) {\n\tmod, err := caddy.GetModule(moduleID)\n\tif err != nil {\n\t\treturn nil, d.Errf(\"getting module named '%s': %v\", moduleID, err)\n\t}\n\tinst := mod.New()\n\tunm, ok := inst.(Unmarshaler)\n\tif !ok {\n\t\treturn nil, d.Errf(\"module %s is not a Caddyfile unmarshaler; is %T\", mod.ID, inst)\n\t}\n\terr = unm.UnmarshalCaddyfile(d.NewFromNextSegment())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn unm, nil\n}\n\n// Interface guard\nvar _ caddyconfig.Adapter = (*Adapter)(nil)\n"
  },
  {
    "path": "caddyconfig/caddyfile/dispenser.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// Dispenser is a type that dispenses tokens, similarly to a lexer,\n// except that it can do so with some notion of structure. An empty\n// Dispenser is invalid; call NewDispenser to make a proper instance.\ntype Dispenser struct {\n\ttokens  []Token\n\tcursor  int\n\tnesting int\n\n\t// A map of arbitrary context data that can be used\n\t// to pass through some information to unmarshalers.\n\tcontext map[string]any\n}\n\n// NewDispenser returns a Dispenser filled with the given tokens.\nfunc NewDispenser(tokens []Token) *Dispenser {\n\treturn &Dispenser{\n\t\ttokens: tokens,\n\t\tcursor: -1,\n\t}\n}\n\n// NewTestDispenser parses input into tokens and creates a new\n// Dispenser for test purposes only; any errors are fatal.\nfunc NewTestDispenser(input string) *Dispenser {\n\ttokens, err := allTokens(\"Testfile\", []byte(input))\n\tif err != nil && err != io.EOF {\n\t\tlog.Fatalf(\"getting all tokens from input: %v\", err)\n\t}\n\treturn NewDispenser(tokens)\n}\n\n// Next loads the next token. Returns true if a token\n// was loaded; false otherwise. 
If false, all tokens\n// have been consumed.\nfunc (d *Dispenser) Next() bool {\n\tif d.cursor < len(d.tokens)-1 {\n\t\td.cursor++\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Prev moves to the previous token. It does the inverse\n// of Next(), except this function may decrement the cursor\n// to -1 so that the next call to Next() points to the\n// first token; this allows dispensing to \"start over\". This\n// method returns true if the cursor ends up pointing to a\n// valid token.\nfunc (d *Dispenser) Prev() bool {\n\tif d.cursor > -1 {\n\t\td.cursor--\n\t\treturn d.cursor > -1\n\t}\n\treturn false\n}\n\n// NextArg loads the next token if it is on the same\n// line and if it is not a block opening (open curly\n// brace). Returns true if an argument token was\n// loaded; false otherwise. If false, all tokens on\n// the line have been consumed except for potentially\n// a block opening. It handles imported tokens\n// correctly.\nfunc (d *Dispenser) NextArg() bool {\n\tif !d.nextOnSameLine() {\n\t\treturn false\n\t}\n\tif d.Val() == \"{\" {\n\t\t// roll back; a block opening is not an argument\n\t\td.cursor--\n\t\treturn false\n\t}\n\treturn true\n}\n\n// nextOnSameLine advances the cursor if the next\n// token is on the same line of the same file.\nfunc (d *Dispenser) nextOnSameLine() bool {\n\tif d.cursor < 0 {\n\t\td.cursor++\n\t\treturn true\n\t}\n\tif d.cursor >= len(d.tokens)-1 {\n\t\treturn false\n\t}\n\tcurr := d.tokens[d.cursor]\n\tnext := d.tokens[d.cursor+1]\n\tif !isNextOnNewLine(curr, next) {\n\t\td.cursor++\n\t\treturn true\n\t}\n\treturn false\n}\n\n// NextLine loads the next token only if it is not on the same\n// line as the current token, and returns true if a token was\n// loaded; false otherwise. If false, there is not another token\n// or it is on the same line. 
It handles imported tokens correctly.\nfunc (d *Dispenser) NextLine() bool {\n\tif d.cursor < 0 {\n\t\td.cursor++\n\t\treturn true\n\t}\n\tif d.cursor >= len(d.tokens)-1 {\n\t\treturn false\n\t}\n\tcurr := d.tokens[d.cursor]\n\tnext := d.tokens[d.cursor+1]\n\tif isNextOnNewLine(curr, next) {\n\t\td.cursor++\n\t\treturn true\n\t}\n\treturn false\n}\n\n// NextBlock can be used as the condition of a for loop\n// to load the next token as long as it opens a block or\n// is already in a block nested more than initialNestingLevel.\n// In other words, a loop over NextBlock() will iterate\n// all tokens in the block assuming the next token is an\n// open curly brace, until the matching closing brace.\n// The open and closing brace tokens for the outer-most\n// block will be consumed internally and omitted from\n// the iteration.\n//\n// Proper use of this method looks like this:\n//\n//\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n//\t}\n//\n// However, in simple cases where it is known that the\n// Dispenser is new and has not already traversed state\n// by a loop over NextBlock(), this will do:\n//\n//\tfor d.NextBlock(0) {\n//\t}\n//\n// As with other token parsing logic, a loop over\n// NextBlock() should be contained within a loop over\n// Next(), as it is usually prudent to skip the initial\n// token.\nfunc (d *Dispenser) NextBlock(initialNestingLevel int) bool {\n\tif d.nesting > initialNestingLevel {\n\t\tif !d.Next() {\n\t\t\treturn false // should be EOF error\n\t\t}\n\t\tif d.Val() == \"}\" && !d.nextOnSameLine() {\n\t\t\td.nesting--\n\t\t} else if d.Val() == \"{\" && !d.nextOnSameLine() {\n\t\t\td.nesting++\n\t\t}\n\t\treturn d.nesting > initialNestingLevel\n\t}\n\tif !d.nextOnSameLine() { // block must open on same line\n\t\treturn false\n\t}\n\tif d.Val() != \"{\" {\n\t\td.cursor-- // roll back if not opening brace\n\t\treturn false\n\t}\n\td.Next() // consume open curly brace\n\tif d.Val() == \"}\" {\n\t\treturn false // open and then closed right 
away\n\t}\n\td.nesting++\n\treturn true\n}\n\n// Nesting returns the current nesting level. Necessary\n// if using NextBlock()\nfunc (d *Dispenser) Nesting() int {\n\treturn d.nesting\n}\n\n// Val gets the text of the current token. If there is no token\n// loaded, it returns empty string.\nfunc (d *Dispenser) Val() string {\n\tif d.cursor < 0 || d.cursor >= len(d.tokens) {\n\t\treturn \"\"\n\t}\n\treturn d.tokens[d.cursor].Text\n}\n\n// ValRaw gets the raw text of the current token (including quotes).\n// If the token was a heredoc, then the delimiter is not included,\n// because that is not relevant to any unmarshaling logic at this time.\n// If there is no token loaded, it returns empty string.\nfunc (d *Dispenser) ValRaw() string {\n\tif d.cursor < 0 || d.cursor >= len(d.tokens) {\n\t\treturn \"\"\n\t}\n\tquote := d.tokens[d.cursor].wasQuoted\n\tif quote > 0 && quote != '<' {\n\t\t// string literal\n\t\treturn string(quote) + d.tokens[d.cursor].Text + string(quote)\n\t}\n\treturn d.tokens[d.cursor].Text\n}\n\n// ScalarVal gets value of the current token, converted to the closest\n// scalar type. 
If there is no token loaded, it returns nil.\nfunc (d *Dispenser) ScalarVal() any {\n\tif d.cursor < 0 || d.cursor >= len(d.tokens) {\n\t\treturn nil\n\t}\n\tquote := d.tokens[d.cursor].wasQuoted\n\ttext := d.tokens[d.cursor].Text\n\n\tif quote > 0 {\n\t\treturn text // string literal\n\t}\n\tif num, err := strconv.Atoi(text); err == nil {\n\t\treturn num\n\t}\n\tif num, err := strconv.ParseFloat(text, 64); err == nil {\n\t\treturn num\n\t}\n\tif bool, err := strconv.ParseBool(text); err == nil {\n\t\treturn bool\n\t}\n\treturn text\n}\n\n// Line gets the line number of the current token.\n// If there is no token loaded, it returns 0.\nfunc (d *Dispenser) Line() int {\n\tif d.cursor < 0 || d.cursor >= len(d.tokens) {\n\t\treturn 0\n\t}\n\treturn d.tokens[d.cursor].Line\n}\n\n// File gets the filename where the current token originated.\nfunc (d *Dispenser) File() string {\n\tif d.cursor < 0 || d.cursor >= len(d.tokens) {\n\t\treturn \"\"\n\t}\n\treturn d.tokens[d.cursor].File\n}\n\n// Args is a convenience function that loads the next arguments\n// (tokens on the same line) into an arbitrary number of strings\n// pointed to in targets. If there are not enough argument tokens\n// available to fill targets, false is returned and the remaining\n// targets are left unchanged. If all the targets are filled,\n// then true is returned.\nfunc (d *Dispenser) Args(targets ...*string) bool {\n\tfor i := range targets {\n\t\tif !d.NextArg() {\n\t\t\treturn false\n\t\t}\n\t\t*targets[i] = d.Val()\n\t}\n\treturn true\n}\n\n// AllArgs is like Args, but if there are more argument tokens\n// available than there are targets, false is returned. The\n// number of available argument tokens must match the number of\n// targets exactly to return true.\nfunc (d *Dispenser) AllArgs(targets ...*string) bool {\n\tif !d.Args(targets...) 
{\n\t\treturn false\n\t}\n\tif d.NextArg() {\n\t\td.Prev()\n\t\treturn false\n\t}\n\treturn true\n}\n\n// CountRemainingArgs counts the amount of remaining arguments\n// (tokens on the same line) without consuming the tokens.\nfunc (d *Dispenser) CountRemainingArgs() int {\n\tcount := 0\n\tfor d.NextArg() {\n\t\tcount++\n\t}\n\tfor i := 0; i < count; i++ {\n\t\td.Prev()\n\t}\n\treturn count\n}\n\n// RemainingArgs loads any more arguments (tokens on the same line)\n// into a slice of strings and returns them. Open curly brace tokens\n// also indicate the end of arguments, and the curly brace is not\n// included in the return value nor is it loaded.\nfunc (d *Dispenser) RemainingArgs() []string {\n\tvar args []string\n\tfor d.NextArg() {\n\t\targs = append(args, d.Val())\n\t}\n\treturn args\n}\n\n// RemainingArgsRaw loads any more arguments (tokens on the same line,\n// retaining quotes) into a slice of strings and returns them.\n// Open curly brace tokens also indicate the end of arguments,\n// and the curly brace is not included in the return value nor is it loaded.\nfunc (d *Dispenser) RemainingArgsRaw() []string {\n\tvar args []string\n\tfor d.NextArg() {\n\t\targs = append(args, d.ValRaw())\n\t}\n\treturn args\n}\n\n// RemainingArgsAsTokens loads any more arguments (tokens on the same line)\n// into a slice of Token-structs and returns them. 
Open curly brace tokens\n// also indicate the end of arguments, and the curly brace is not included\n// in the return value nor is it loaded.\nfunc (d *Dispenser) RemainingArgsAsTokens() []Token {\n\tvar args []Token\n\tfor d.NextArg() {\n\t\targs = append(args, d.Token())\n\t}\n\treturn args\n}\n\n// NewFromNextSegment returns a new dispenser with a copy of\n// the tokens from the current token until the end of the\n// \"directive\" whether that be to the end of the line or\n// the end of a block that starts at the end of the line;\n// in other words, until the end of the segment.\nfunc (d *Dispenser) NewFromNextSegment() *Dispenser {\n\treturn NewDispenser(d.NextSegment())\n}\n\n// NextSegment returns a copy of the tokens from the current\n// token until the end of the line or block that starts at\n// the end of the line.\nfunc (d *Dispenser) NextSegment() Segment {\n\ttkns := Segment{d.Token()}\n\tfor d.NextArg() {\n\t\ttkns = append(tkns, d.Token())\n\t}\n\tvar openedBlock bool\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tif !openedBlock {\n\t\t\t// because NextBlock() consumes the initial open\n\t\t\t// curly brace, we rewind here to append it, since\n\t\t\t// our case is special in that we want the new\n\t\t\t// dispenser to have all the tokens including\n\t\t\t// surrounding curly braces\n\t\t\td.Prev()\n\t\t\ttkns = append(tkns, d.Token())\n\t\t\td.Next()\n\t\t\topenedBlock = true\n\t\t}\n\t\ttkns = append(tkns, d.Token())\n\t}\n\tif openedBlock {\n\t\t// include closing brace\n\t\ttkns = append(tkns, d.Token())\n\n\t\t// do not consume the closing curly brace; the\n\t\t// next iteration of the enclosing loop will\n\t\t// call Next() and consume it\n\t}\n\treturn tkns\n}\n\n// Token returns the current token.\nfunc (d *Dispenser) Token() Token {\n\tif d.cursor < 0 || d.cursor >= len(d.tokens) {\n\t\treturn Token{}\n\t}\n\treturn d.tokens[d.cursor]\n}\n\n// Reset sets d's cursor to the beginning, as\n// if this was a new and unused 
dispenser.\nfunc (d *Dispenser) Reset() {\n\td.cursor = -1\n\td.nesting = 0\n}\n\n// ArgErr returns an argument error, meaning that another\n// argument was expected but not found. In other words,\n// a line break or open curly brace was encountered instead of\n// an argument.\nfunc (d *Dispenser) ArgErr() error {\n\tif d.Val() == \"{\" {\n\t\treturn d.Err(\"unexpected token '{', expecting argument\")\n\t}\n\treturn d.Errf(\"wrong argument count or unexpected line ending after '%s'\", d.Val())\n}\n\n// SyntaxErr creates a generic syntax error which explains what was\n// found and what was expected.\nfunc (d *Dispenser) SyntaxErr(expected string) error {\n\tmsg := fmt.Sprintf(\"syntax error: unexpected token '%s', expecting '%s', at %s:%d import chain: ['%s']\", d.Val(), expected, d.File(), d.Line(), strings.Join(d.Token().imports, \"','\"))\n\treturn errors.New(msg)\n}\n\n// EOFErr returns an error indicating that the dispenser reached\n// the end of the input when searching for the next token.\nfunc (d *Dispenser) EOFErr() error {\n\treturn d.Errf(\"unexpected EOF\")\n}\n\n// Err generates a custom parse-time error with a message of msg.\nfunc (d *Dispenser) Err(msg string) error {\n\treturn d.WrapErr(errors.New(msg))\n}\n\n// Errf is like Err, but for formatted error messages\nfunc (d *Dispenser) Errf(format string, args ...any) error {\n\treturn d.WrapErr(fmt.Errorf(format, args...))\n}\n\n// WrapErr takes an existing error and adds the Caddyfile file and line number.\nfunc (d *Dispenser) WrapErr(err error) error {\n\tif len(d.Token().imports) > 0 {\n\t\treturn fmt.Errorf(\"%w, at %s:%d import chain ['%s']\", err, d.File(), d.Line(), strings.Join(d.Token().imports, \"','\"))\n\t}\n\treturn fmt.Errorf(\"%w, at %s:%d\", err, d.File(), d.Line())\n}\n\n// Delete deletes the current token and returns the updated slice\n// of tokens. 
The cursor is not advanced to the next token.\n// Because deletion modifies the underlying slice, this method\n// should only be called if you have access to the original slice\n// of tokens and/or are using the slice of tokens outside this\n// Dispenser instance. If you do not re-assign the slice with the\n// return value of this method, inconsistencies in the token\n// array will become apparent (or worse, hide from you like they\n// did me for 3 and a half freaking hours late one night).\nfunc (d *Dispenser) Delete() []Token {\n\tif d.cursor >= 0 && d.cursor <= len(d.tokens)-1 {\n\t\td.tokens = append(d.tokens[:d.cursor], d.tokens[d.cursor+1:]...)\n\t\td.cursor--\n\t}\n\treturn d.tokens\n}\n\n// DeleteN is the same as Delete, but can delete many tokens at once.\n// If there aren't N tokens available to delete, none are deleted.\nfunc (d *Dispenser) DeleteN(amount int) []Token {\n\tif amount > 0 && d.cursor >= (amount-1) && d.cursor <= len(d.tokens)-1 {\n\t\td.tokens = append(d.tokens[:d.cursor-(amount-1)], d.tokens[d.cursor+1:]...)\n\t\td.cursor -= amount\n\t}\n\treturn d.tokens\n}\n\n// SetContext sets a key-value pair in the context map.\nfunc (d *Dispenser) SetContext(key string, value any) {\n\tif d.context == nil {\n\t\td.context = make(map[string]any)\n\t}\n\td.context[key] = value\n}\n\n// GetContext gets the value of a key in the context map.\nfunc (d *Dispenser) GetContext(key string) any {\n\tif d.context == nil {\n\t\treturn nil\n\t}\n\treturn d.context[key]\n}\n\n// GetContextString gets the value of a key in the context map\n// as a string, or an empty string if the key does not exist.\nfunc (d *Dispenser) GetContextString(key string) string {\n\tif d.context == nil {\n\t\treturn \"\"\n\t}\n\tif val, ok := d.context[key].(string); ok {\n\t\treturn val\n\t}\n\treturn \"\"\n}\n\n// isNewLine determines whether the current token is on a different\n// line (higher line number) than the previous token. It handles imported\n// tokens correctly. 
If there isn't a previous token, it returns true.\nfunc (d *Dispenser) isNewLine() bool {\n\tif d.cursor < 1 {\n\t\treturn true\n\t}\n\tif d.cursor > len(d.tokens)-1 {\n\t\treturn false\n\t}\n\n\tprev := d.tokens[d.cursor-1]\n\tcurr := d.tokens[d.cursor]\n\treturn isNextOnNewLine(prev, curr)\n}\n\n// isNextOnNewLine determines whether the next token is on a different\n// line (higher line number) than the current token. It handles imported\n// tokens correctly. If there isn't a next token, it returns true.\nfunc (d *Dispenser) isNextOnNewLine() bool {\n\tif d.cursor < 0 {\n\t\treturn false\n\t}\n\tif d.cursor >= len(d.tokens)-1 {\n\t\treturn true\n\t}\n\n\tcurr := d.tokens[d.cursor]\n\tnext := d.tokens[d.cursor+1]\n\treturn isNextOnNewLine(curr, next)\n}\n\nconst MatcherNameCtxKey = \"matcher_name\"\n"
  },
  {
    "path": "caddyconfig/caddyfile/dispenser_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"errors\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestDispenser_Val_Next(t *testing.T) {\n\tinput := `host:port\n\t\t\t  dir1 arg1\n\t\t\t  dir2 arg2 arg3\n\t\t\t  dir3`\n\td := NewTestDispenser(input)\n\n\tif val := d.Val(); val != \"\" {\n\t\tt.Fatalf(\"Val(): Should return empty string when no token loaded; got '%s'\", val)\n\t}\n\n\tassertNext := func(shouldLoad bool, expectedCursor int, expectedVal string) {\n\t\tif loaded := d.Next(); loaded != shouldLoad {\n\t\t\tt.Errorf(\"Next(): Expected %v but got %v instead (val '%s')\", shouldLoad, loaded, d.Val())\n\t\t}\n\t\tif d.cursor != expectedCursor {\n\t\t\tt.Errorf(\"Expected cursor to be %d, but was %d\", expectedCursor, d.cursor)\n\t\t}\n\t\tif d.nesting != 0 {\n\t\t\tt.Errorf(\"Nesting should be 0, was %d instead\", d.nesting)\n\t\t}\n\t\tif val := d.Val(); val != expectedVal {\n\t\t\tt.Errorf(\"Val(): Expected '%s' but got '%s'\", expectedVal, val)\n\t\t}\n\t}\n\n\tassertNext(true, 0, \"host:port\")\n\tassertNext(true, 1, \"dir1\")\n\tassertNext(true, 2, \"arg1\")\n\tassertNext(true, 3, \"dir2\")\n\tassertNext(true, 4, \"arg2\")\n\tassertNext(true, 5, \"arg3\")\n\tassertNext(true, 6, \"dir3\")\n\t// Note: This next test simply asserts existing behavior.\n\t// If desired, we may wish to empty the token value after\n\t// 
reading past the EOF. Open an issue if you want this change.\n\tassertNext(false, 6, \"dir3\")\n}\n\nfunc TestDispenser_NextArg(t *testing.T) {\n\tinput := `dir1 arg1\n\t\t\t  dir2 arg2 arg3\n\t\t\t  dir3`\n\td := NewTestDispenser(input)\n\n\tassertNext := func(shouldLoad bool, expectedVal string, expectedCursor int) {\n\t\tif d.Next() != shouldLoad {\n\t\t\tt.Errorf(\"Next(): Should load token but got false instead (val: '%s')\", d.Val())\n\t\t}\n\t\tif d.cursor != expectedCursor {\n\t\t\tt.Errorf(\"Next(): Expected cursor to be at %d, but it was %d\", expectedCursor, d.cursor)\n\t\t}\n\t\tif val := d.Val(); val != expectedVal {\n\t\t\tt.Errorf(\"Val(): Expected '%s' but got '%s'\", expectedVal, val)\n\t\t}\n\t}\n\n\tassertNextArg := func(expectedVal string, loadAnother bool, expectedCursor int) {\n\t\tif !d.NextArg() {\n\t\t\tt.Error(\"NextArg(): Should load next argument but got false instead\")\n\t\t}\n\t\tif d.cursor != expectedCursor {\n\t\t\tt.Errorf(\"NextArg(): Expected cursor to be at %d, but it was %d\", expectedCursor, d.cursor)\n\t\t}\n\t\tif val := d.Val(); val != expectedVal {\n\t\t\tt.Errorf(\"Val(): Expected '%s' but got '%s'\", expectedVal, val)\n\t\t}\n\t\tif !loadAnother {\n\t\t\tif d.NextArg() {\n\t\t\t\tt.Fatalf(\"NextArg(): Should NOT load another argument, but got true instead (val: '%s')\", d.Val())\n\t\t\t}\n\t\t\tif d.cursor != expectedCursor {\n\t\t\t\tt.Errorf(\"NextArg(): Expected cursor to remain at %d, but it was %d\", expectedCursor, d.cursor)\n\t\t\t}\n\t\t}\n\t}\n\n\tassertNext(true, \"dir1\", 0)\n\tassertNextArg(\"arg1\", false, 1)\n\tassertNext(true, \"dir2\", 2)\n\tassertNextArg(\"arg2\", true, 3)\n\tassertNextArg(\"arg3\", false, 4)\n\tassertNext(true, \"dir3\", 5)\n\tassertNext(false, \"dir3\", 5)\n}\n\nfunc TestDispenser_NextLine(t *testing.T) {\n\tinput := `host:port\n\t\t\t  dir1 arg1\n\t\t\t  dir2 arg2 arg3`\n\td := NewTestDispenser(input)\n\n\tassertNextLine := func(shouldLoad bool, expectedVal string, expectedCursor 
int) {\n\t\tif d.NextLine() != shouldLoad {\n\t\t\tt.Errorf(\"NextLine(): Should load token but got false instead (val: '%s')\", d.Val())\n\t\t}\n\t\tif d.cursor != expectedCursor {\n\t\t\tt.Errorf(\"NextLine(): Expected cursor to be %d, instead was %d\", expectedCursor, d.cursor)\n\t\t}\n\t\tif val := d.Val(); val != expectedVal {\n\t\t\tt.Errorf(\"Val(): Expected '%s' but got '%s'\", expectedVal, val)\n\t\t}\n\t}\n\n\tassertNextLine(true, \"host:port\", 0)\n\tassertNextLine(true, \"dir1\", 1)\n\tassertNextLine(false, \"dir1\", 1)\n\td.Next() // arg1\n\tassertNextLine(true, \"dir2\", 3)\n\tassertNextLine(false, \"dir2\", 3)\n\td.Next() // arg2\n\tassertNextLine(false, \"arg2\", 4)\n\td.Next() // arg3\n\tassertNextLine(false, \"arg3\", 5)\n}\n\nfunc TestDispenser_NextBlock(t *testing.T) {\n\tinput := `foobar1 {\n\t\t\t  \tsub1 arg1\n\t\t\t  \tsub2\n\t\t\t  }\n\t\t\t  foobar2 {\n\t\t\t  }`\n\td := NewTestDispenser(input)\n\n\tassertNextBlock := func(shouldLoad bool, expectedCursor, expectedNesting int) {\n\t\tif loaded := d.NextBlock(0); loaded != shouldLoad {\n\t\t\tt.Errorf(\"NextBlock(): Should return %v but got %v\", shouldLoad, loaded)\n\t\t}\n\t\tif d.cursor != expectedCursor {\n\t\t\tt.Errorf(\"NextBlock(): Expected cursor to be %d, was %d\", expectedCursor, d.cursor)\n\t\t}\n\t\tif d.nesting != expectedNesting {\n\t\t\tt.Errorf(\"NextBlock(): Nesting should be %d, not %d\", expectedNesting, d.nesting)\n\t\t}\n\t}\n\n\tassertNextBlock(false, -1, 0)\n\td.Next() // foobar1\n\tassertNextBlock(true, 2, 1)\n\tassertNextBlock(true, 3, 1)\n\tassertNextBlock(true, 4, 1)\n\tassertNextBlock(false, 5, 0)\n\td.Next()                     // foobar2\n\tassertNextBlock(false, 8, 0) // empty block is as if it didn't exist\n}\n\nfunc TestDispenser_Args(t *testing.T) {\n\tvar s1, s2, s3 string\n\tinput := `dir1 arg1 arg2 arg3\n\t\t\t  dir2 arg4 arg5\n\t\t\t  dir3 arg6 arg7\n\t\t\t  dir4`\n\td := NewTestDispenser(input)\n\n\td.Next() // dir1\n\n\t// As many strings as 
arguments\n\tif all := d.Args(&s1, &s2, &s3); !all {\n\t\tt.Error(\"Args(): Expected true, got false\")\n\t}\n\tif s1 != \"arg1\" {\n\t\tt.Errorf(\"Args(): Expected s1 to be 'arg1', got '%s'\", s1)\n\t}\n\tif s2 != \"arg2\" {\n\t\tt.Errorf(\"Args(): Expected s2 to be 'arg2', got '%s'\", s2)\n\t}\n\tif s3 != \"arg3\" {\n\t\tt.Errorf(\"Args(): Expected s3 to be 'arg3', got '%s'\", s3)\n\t}\n\n\td.Next() // dir2\n\n\t// More strings than arguments\n\tif all := d.Args(&s1, &s2, &s3); all {\n\t\tt.Error(\"Args(): Expected false, got true\")\n\t}\n\tif s1 != \"arg4\" {\n\t\tt.Errorf(\"Args(): Expected s1 to be 'arg4', got '%s'\", s1)\n\t}\n\tif s2 != \"arg5\" {\n\t\tt.Errorf(\"Args(): Expected s2 to be 'arg5', got '%s'\", s2)\n\t}\n\tif s3 != \"arg3\" {\n\t\tt.Errorf(\"Args(): Expected s3 to be unchanged ('arg3'), instead got '%s'\", s3)\n\t}\n\n\t// (quick cursor check just for kicks and giggles)\n\tif d.cursor != 6 {\n\t\tt.Errorf(\"Cursor should be 6, but is %d\", d.cursor)\n\t}\n\n\td.Next() // dir3\n\n\t// More arguments than strings\n\tif all := d.Args(&s1); !all {\n\t\tt.Error(\"Args(): Expected true, got false\")\n\t}\n\tif s1 != \"arg6\" {\n\t\tt.Errorf(\"Args(): Expected s1 to be 'arg6', got '%s'\", s1)\n\t}\n\n\td.Next() // dir4\n\n\t// No arguments or strings\n\tif all := d.Args(); !all {\n\t\tt.Error(\"Args(): Expected true, got false\")\n\t}\n\n\t// No arguments but at least one string\n\tif all := d.Args(&s1); all {\n\t\tt.Error(\"Args(): Expected false, got true\")\n\t}\n}\n\nfunc TestDispenser_RemainingArgs(t *testing.T) {\n\tinput := `dir1 arg1 arg2 arg3\n\t\t\t  dir2 arg4 arg5\n\t\t\t  dir3 arg6 { arg7\n\t\t\t  dir4`\n\td := NewTestDispenser(input)\n\n\td.Next() // dir1\n\n\targs := d.RemainingArgs()\n\tif expected := []string{\"arg1\", \"arg2\", \"arg3\"}; !reflect.DeepEqual(args, expected) {\n\t\tt.Errorf(\"RemainingArgs(): Expected %v, got %v\", expected, args)\n\t}\n\n\td.Next() // dir2\n\n\targs = d.RemainingArgs()\n\tif expected := 
[]string{\"arg4\", \"arg5\"}; !reflect.DeepEqual(args, expected) {\n\t\tt.Errorf(\"RemainingArgs(): Expected %v, got %v\", expected, args)\n\t}\n\n\td.Next() // dir3\n\n\targs = d.RemainingArgs()\n\tif expected := []string{\"arg6\"}; !reflect.DeepEqual(args, expected) {\n\t\tt.Errorf(\"RemainingArgs(): Expected %v, got %v\", expected, args)\n\t}\n\n\td.Next() // {\n\td.Next() // arg7\n\td.Next() // dir4\n\n\targs = d.RemainingArgs()\n\tif len(args) != 0 {\n\t\tt.Errorf(\"RemainingArgs(): Expected %v, got %v\", []string{}, args)\n\t}\n}\n\nfunc TestDispenser_RemainingArgsAsTokens(t *testing.T) {\n\tinput := `dir1 arg1 arg2 arg3\n\t\t\t  dir2 arg4 arg5\n\t\t\t  dir3 arg6 { arg7\n\t\t\t  dir4`\n\td := NewTestDispenser(input)\n\n\td.Next() // dir1\n\n\targs := d.RemainingArgsAsTokens()\n\n\ttokenTexts := make([]string, 0, len(args))\n\tfor _, arg := range args {\n\t\ttokenTexts = append(tokenTexts, arg.Text)\n\t}\n\n\tif expected := []string{\"arg1\", \"arg2\", \"arg3\"}; !reflect.DeepEqual(tokenTexts, expected) {\n\t\tt.Errorf(\"RemainingArgsAsTokens(): Expected %v, got %v\", expected, tokenTexts)\n\t}\n\n\td.Next() // dir2\n\n\targs = d.RemainingArgsAsTokens()\n\n\ttokenTexts = tokenTexts[:0]\n\tfor _, arg := range args {\n\t\ttokenTexts = append(tokenTexts, arg.Text)\n\t}\n\n\tif expected := []string{\"arg4\", \"arg5\"}; !reflect.DeepEqual(tokenTexts, expected) {\n\t\tt.Errorf(\"RemainingArgsAsTokens(): Expected %v, got %v\", expected, tokenTexts)\n\t}\n\n\td.Next() // dir3\n\n\targs = d.RemainingArgsAsTokens()\n\ttokenTexts = tokenTexts[:0]\n\tfor _, arg := range args {\n\t\ttokenTexts = append(tokenTexts, arg.Text)\n\t}\n\n\tif expected := []string{\"arg6\"}; !reflect.DeepEqual(tokenTexts, expected) {\n\t\tt.Errorf(\"RemainingArgsAsTokens(): Expected %v, got %v\", expected, tokenTexts)\n\t}\n\n\td.Next() // {\n\td.Next() // arg7\n\td.Next() // dir4\n\n\targs = d.RemainingArgsAsTokens()\n\ttokenTexts = tokenTexts[:0]\n\tfor _, arg := range args {\n\t\ttokenTexts = 
append(tokenTexts, arg.Text)\n\t}\n\n\tif len(args) != 0 {\n\t\tt.Errorf(\"RemainingArgsAsTokens(): Expected %v, got %v\", []string{}, tokenTexts)\n\t}\n}\n\nfunc TestDispenser_ArgErr_Err(t *testing.T) {\n\tinput := `dir1 {\n\t\t\t  }\n\t\t\t  dir2 arg1 arg2`\n\td := NewTestDispenser(input)\n\n\td.cursor = 1 // {\n\n\tif err := d.ArgErr(); err == nil || !strings.Contains(err.Error(), \"{\") {\n\t\tt.Errorf(\"ArgErr(): Expected an error message with { in it, but got '%v'\", err)\n\t}\n\n\td.cursor = 5 // arg2\n\n\tif err := d.ArgErr(); err == nil || !strings.Contains(err.Error(), \"arg2\") {\n\t\tt.Errorf(\"ArgErr(): Expected an error message with 'arg2' in it; got '%v'\", err)\n\t}\n\n\terr := d.Err(\"foobar\")\n\tif err == nil {\n\t\tt.Fatalf(\"Err(): Expected an error, got nil\")\n\t}\n\n\tif !strings.Contains(err.Error(), \"Testfile:3\") {\n\t\tt.Errorf(\"Expected error message with filename:line in it; got '%v'\", err)\n\t}\n\n\tif !strings.Contains(err.Error(), \"foobar\") {\n\t\tt.Errorf(\"Expected error message with custom message in it ('foobar'); got '%v'\", err)\n\t}\n\n\tErrBarIsFull := errors.New(\"bar is full\")\n\tbookingError := d.Errf(\"unable to reserve: %w\", ErrBarIsFull)\n\tif !errors.Is(bookingError, ErrBarIsFull) {\n\t\tt.Errorf(\"Errf(): should be able to unwrap the error chain\")\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/formatter.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"slices\"\n\t\"strings\"\n\t\"unicode\"\n)\n\n// Format formats the input Caddyfile to a standard, nice-looking\n// appearance. It works by reading each rune of the input and taking\n// control over all the bracing and whitespace that is written; otherwise,\n// words, comments, placeholders, and escaped characters are all treated\n// literally and written as they appear in the input.\nfunc Format(input []byte) []byte {\n\tinput = bytes.TrimSpace(input)\n\n\tout := new(bytes.Buffer)\n\trdr := bytes.NewReader(input)\n\n\ttype heredocState int\n\n\tconst (\n\t\theredocClosed  heredocState = 0\n\t\theredocOpening heredocState = 1\n\t\theredocOpened  heredocState = 2\n\t)\n\n\tvar (\n\t\tlast rune // the last character that was written to the result\n\n\t\tspace           = true // whether current/previous character was whitespace (beginning of input counts as space)\n\t\tbeginningOfLine = true // whether we are at beginning of line\n\n\t\topenBrace        bool // whether current word/token is or started with open curly brace\n\t\topenBraceWritten bool // if openBrace, whether that brace was written or not\n\t\topenBraceSpace   bool // whether there was a non-newline space before open brace\n\n\t\tnewLines int // count of newlines consumed\n\n\t\tcomment bool   // whether we're in 
a comment\n\t\tquotes  string // encountered quotes ('', '`', '\"', '\"`', '`\"')\n\t\tescaped bool   // whether current char is escaped\n\n\t\theredoc              heredocState // whether we're in a heredoc\n\t\theredocEscaped       bool         // whether heredoc is escaped\n\t\theredocMarker        []rune\n\t\theredocClosingMarker []rune\n\n\t\tnesting int // indentation level\n\t)\n\n\twrite := func(ch rune) {\n\t\tout.WriteRune(ch)\n\t\tlast = ch\n\t}\n\n\tindent := func() {\n\t\tfor tabs := nesting; tabs > 0; tabs-- {\n\t\t\twrite('\\t')\n\t\t}\n\t}\n\n\tnextLine := func() {\n\t\twrite('\\n')\n\t\tbeginningOfLine = true\n\t}\n\n\tfor {\n\t\tch, _, err := rdr.ReadRune()\n\t\tif err != nil {\n\t\t\tif err == io.EOF {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tpanic(err)\n\t\t}\n\t\t// detect whether we have the start of a heredoc\n\t\tif quotes == \"\" && (heredoc == heredocClosed && !heredocEscaped) &&\n\t\t\tspace && last == '<' && ch == '<' {\n\t\t\twrite(ch)\n\t\t\theredoc = heredocOpening\n\t\t\tspace = false\n\t\t\tcontinue\n\t\t}\n\n\t\tif heredoc == heredocOpening {\n\t\t\tif ch == '\\n' {\n\t\t\t\tif len(heredocMarker) > 0 && heredocMarkerRegexp.MatchString(string(heredocMarker)) {\n\t\t\t\t\theredoc = heredocOpened\n\t\t\t\t} else {\n\t\t\t\t\theredocMarker = nil\n\t\t\t\t\theredoc = heredocClosed\n\t\t\t\t\tnextLine()\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\twrite(ch)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif unicode.IsSpace(ch) {\n\t\t\t\t// a space means it's just a regular token and not a heredoc\n\t\t\t\theredocMarker = nil\n\t\t\t\theredoc = heredocClosed\n\t\t\t} else {\n\t\t\t\theredocMarker = append(heredocMarker, ch)\n\t\t\t\twrite(ch)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\t// if we're in a heredoc, all characters are read&write as-is\n\t\tif heredoc == heredocOpened {\n\t\t\theredocClosingMarker = append(heredocClosingMarker, ch)\n\t\t\tif len(heredocClosingMarker) > len(heredocMarker)+1 { // We assert that the heredocClosingMarker is followed by a 
unicode.Space\n\t\t\t\theredocClosingMarker = heredocClosingMarker[1:]\n\t\t\t}\n\t\t\t// check if we're done\n\t\t\tif unicode.IsSpace(ch) && slices.Equal(heredocClosingMarker[:len(heredocClosingMarker)-1], heredocMarker) {\n\t\t\t\theredocMarker = nil\n\t\t\t\theredocClosingMarker = nil\n\t\t\t\theredoc = heredocClosed\n\t\t\t} else {\n\t\t\t\twrite(ch)\n\t\t\t\tif ch == '\\n' {\n\t\t\t\t\theredocClosingMarker = heredocClosingMarker[:0]\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif last == '<' && space {\n\t\t\tspace = false\n\t\t}\n\n\t\tif comment {\n\t\t\tif ch == '\\n' {\n\t\t\t\tcomment = false\n\t\t\t\tspace = true\n\t\t\t\tnextLine()\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\twrite(ch)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif !escaped && ch == '\\\\' {\n\t\t\tif space {\n\t\t\t\twrite(' ')\n\t\t\t\tspace = false\n\t\t\t}\n\t\t\twrite(ch)\n\t\t\tescaped = true\n\t\t\tcontinue\n\t\t}\n\n\t\tif escaped {\n\t\t\tif ch == '<' {\n\t\t\t\theredocEscaped = true\n\t\t\t}\n\t\t\twrite(ch)\n\t\t\tescaped = false\n\t\t\tcontinue\n\t\t}\n\n\t\tif ch == '`' {\n\t\t\tswitch quotes {\n\t\t\tcase \"\\\"`\":\n\t\t\t\tquotes = \"\\\"\"\n\t\t\tcase \"`\":\n\t\t\t\tquotes = \"\"\n\t\t\tcase \"\\\"\":\n\t\t\t\tquotes = \"\\\"`\"\n\t\t\tdefault:\n\t\t\t\tquotes = \"`\"\n\t\t\t}\n\t\t}\n\n\t\tif quotes == \"\\\"\" {\n\t\t\tif ch == '\"' {\n\t\t\t\tquotes = \"\"\n\t\t\t}\n\t\t\twrite(ch)\n\t\t\tcontinue\n\t\t}\n\n\t\tif ch == '\"' {\n\t\t\tswitch quotes {\n\t\t\tcase \"\":\n\t\t\t\tif space {\n\t\t\t\t\tquotes = \"\\\"\"\n\t\t\t\t}\n\t\t\tcase \"`\\\"\":\n\t\t\t\tquotes = \"`\"\n\t\t\tcase \"\\\"`\":\n\t\t\t\tquotes = \"\"\n\t\t\t}\n\t\t}\n\n\t\tif strings.Contains(quotes, \"`\") {\n\t\t\tif ch == '`' && space && !beginningOfLine {\n\t\t\t\twrite(' ')\n\t\t\t}\n\t\t\twrite(ch)\n\t\t\tspace = false\n\t\t\tcontinue\n\t\t}\n\n\t\tif unicode.IsSpace(ch) {\n\t\t\tspace = true\n\t\t\theredocEscaped = false\n\t\t\tif ch == '\\n' 
{\n\t\t\t\tnewLines++\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tspacePrior := space\n\t\tspace = false\n\n\t\t//////////////////////////////////////////////////////////\n\t\t// I find it helpful to think of the formatting loop in two\n\t\t// main sections; by the time we reach this point, we\n\t\t// know we are in a \"regular\" part of the file: we know\n\t\t// the character is not a space, not in a literal segment\n\t\t// like a comment or quoted, it's not escaped, etc.\n\t\t//////////////////////////////////////////////////////////\n\n\t\tif ch == '#' {\n\t\t\tcomment = true\n\t\t}\n\n\t\tif openBrace && spacePrior && !openBraceWritten {\n\t\t\tif nesting == 0 && last == '}' {\n\t\t\t\tnextLine()\n\t\t\t\tnextLine()\n\t\t\t}\n\n\t\t\topenBrace = false\n\t\t\tif beginningOfLine {\n\t\t\t\tindent()\n\t\t\t} else if !openBraceSpace || !unicode.IsSpace(last) {\n\t\t\t\twrite(' ')\n\t\t\t}\n\t\t\twrite('{')\n\t\t\topenBraceWritten = true\n\t\t\tnextLine()\n\t\t\tnewLines = 0\n\t\t\t// prevent infinite nesting from ridiculous inputs (issue #4169)\n\t\t\tif nesting < 10 {\n\t\t\t\tnesting++\n\t\t\t}\n\t\t}\n\n\t\tswitch {\n\t\tcase ch == '{':\n\t\t\topenBrace = true\n\t\t\topenBraceSpace = spacePrior && !beginningOfLine\n\t\t\tif openBraceSpace && newLines == 0 {\n\t\t\t\twrite(' ')\n\t\t\t}\n\t\t\topenBraceWritten = false\n\t\t\tif quotes == \"`\" {\n\t\t\t\twrite('{')\n\t\t\t\topenBraceWritten = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tcontinue\n\n\t\tcase ch == '}' && (spacePrior || !openBrace):\n\t\t\tif quotes == \"`\" {\n\t\t\t\twrite('}')\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif last != '\\n' {\n\t\t\t\tnextLine()\n\t\t\t}\n\t\t\tif nesting > 0 {\n\t\t\t\tnesting--\n\t\t\t}\n\t\t\tindent()\n\t\t\twrite('}')\n\t\t\tnewLines = 0\n\t\t\tcontinue\n\t\t}\n\n\t\tif newLines > 2 {\n\t\t\tnewLines = 2\n\t\t}\n\t\tfor i := 0; i < newLines; i++ {\n\t\t\tnextLine()\n\t\t}\n\t\tnewLines = 0\n\t\tif beginningOfLine {\n\t\t\tindent()\n\t\t}\n\t\tif nesting == 0 && last == '}' && 
beginningOfLine {\n\t\t\tnextLine()\n\t\t\tnextLine()\n\t\t}\n\n\t\tif !beginningOfLine && spacePrior {\n\t\t\twrite(' ')\n\t\t}\n\n\t\tif openBrace && !openBraceWritten {\n\t\t\twrite('{')\n\t\t\topenBraceWritten = true\n\t\t}\n\n\t\tif spacePrior && ch == '<' {\n\t\t\tspace = true\n\t\t}\n\n\t\twrite(ch)\n\n\t\tbeginningOfLine = false\n\t}\n\n\t// the Caddyfile does not need any leading or trailing spaces, but...\n\ttrimmedResult := bytes.TrimSpace(out.Bytes())\n\n\t// ...Caddyfiles should, however, end with a newline because\n\t// newlines are significant to the syntax of the file\n\treturn append(trimmedResult, '\\n')\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/formatter_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage caddyfile\n\nimport \"bytes\"\n\nfunc FuzzFormat(input []byte) int {\n\tformatted := Format(input)\n\tif bytes.Equal(formatted, Format(formatted)) {\n\t\treturn 1\n\t}\n\treturn 0\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/formatter_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestFormatter(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tdescription string\n\t\tinput       string\n\t\texpect      string\n\t}{\n\t\t{\n\t\t\tdescription: \"very simple\",\n\t\t\tinput: `abc   def\n\tg hi jkl\nmn`,\n\t\t\texpect: `abc def\ng hi jkl\nmn`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"basic indentation, line breaks, and nesting\",\n\t\t\tinput: `  a\nb\n\n\tc {\n\t\td\n}\n\ne { f\n}\n\n\n\ng {\nh {\ni\n}\n}\n\nj { k {\nl\n}\n}\n\nm {\n\tn { o\n\t}\n\tp { q r\ns }\n}\n\n\t{\n{ t\n\t\tu\n\n\tv\n\nw\n}\n}`,\n\t\t\texpect: `a\nb\n\nc {\n\td\n}\n\ne {\n\tf\n}\n\ng {\n\th {\n\t\ti\n\t}\n}\n\nj {\n\tk {\n\t\tl\n\t}\n}\n\nm {\n\tn {\n\t\to\n\t}\n\tp {\n\t\tq r\n\t\ts\n\t}\n}\n\n{\n\t{\n\t\tt\n\t\tu\n\n\t\tv\n\n\t\tw\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"block spacing\",\n\t\t\tinput: `a{\n\tb\n}\n\nc{ d\n}`,\n\t\t\texpect: `a {\n\tb\n}\n\nc {\n\td\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"advanced spacing\",\n\t\t\tinput: `abc {\n\tdef\n}ghi{\n\tjkl mno\npqr}`,\n\t\t\texpect: `abc {\n\tdef\n}\n\nghi {\n\tjkl mno\n\tpqr\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"env var placeholders\",\n\t\t\tinput: `{$A}\n\nb {\n{$C}\n}\n\nd { {$E}\n}\n\n{ {$F}\n}\n`,\n\t\t\texpect: `{$A}\n\nb {\n\t{$C}\n}\n\nd 
{\n\t{$E}\n}\n\n{\n\t{$F}\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"env var placeholders with port\",\n\t\t\tinput:       `:{$PORT}`,\n\t\t\texpect:      `:{$PORT}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"comments\",\n\t\t\tinput: `#a \"\\n\"\n\n #b {\n\tc\n}\n\nd {\ne#f\n# g\n}\n\nh { # i\n}`,\n\t\t\texpect: `#a \"\\n\"\n\n#b {\nc\n}\n\nd {\n\te#f\n\t# g\n}\n\nh {\n\t# i\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"quotes and escaping\",\n\t\t\tinput: `\"a \\\"b\\\" \"#c\n\td\n\ne {\n\"f\"\n}\n\ng { \"h\"\n}\n\ni {\n\t\"foo\nbar\"\n}\n\nj {\n\"\\\"k\\\" l m\"\n}`,\n\t\t\texpect: `\"a \\\"b\\\" \"#c\nd\n\ne {\n\t\"f\"\n}\n\ng {\n\t\"h\"\n}\n\ni {\n\t\"foo\nbar\"\n}\n\nj {\n\t\"\\\"k\\\" l m\"\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"bad nesting (too many open)\",\n\t\t\tinput: `a\n{\n\t{\n}`,\n\t\t\texpect: `a {\n\t{\n\t}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"bad nesting (too many close)\",\n\t\t\tinput: `a\n{\n\t{\n}}}`,\n\t\t\texpect: `a {\n\t{\n\t}\n}\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"json\",\n\t\t\tinput: `foo\nbar      \"{\\\"key\\\":34}\"\n`,\n\t\t\texpect: `foo\nbar \"{\\\"key\\\":34}\"`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"escaping after spaces\",\n\t\t\tinput:       `foo \\\"literal\\\"`,\n\t\t\texpect:      `foo \\\"literal\\\"`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"simple placeholders as standalone tokens\",\n\t\t\tinput:       `foo {bar}`,\n\t\t\texpect:      `foo {bar}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"simple placeholders within tokens\",\n\t\t\tinput:       `foo{bar} foo{bar}baz`,\n\t\t\texpect:      `foo{bar} foo{bar}baz`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"placeholders and malformed braces\",\n\t\t\tinput:       `foo{bar} foo{ bar}baz`,\n\t\t\texpect: `foo{bar} foo {\n\tbar\n}\n\nbaz`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"hash within string is not a comment\",\n\t\t\tinput:       `redir / /some/#/path`,\n\t\t\texpect:      `redir / /some/#/path`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"brace does not fold into comment 
above\",\n\t\t\tinput: `# comment\n{\n\tfoo\n}`,\n\t\t\texpect: `# comment\n{\n\tfoo\n}`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"matthewpi/vscode-caddyfile-support#13\",\n\t\t\tinput: `{\n\temail {$ACMEEMAIL}\n\t#debug\n}\n\nblock {\n}\n`,\n\t\t\texpect: `{\n\temail {$ACMEEMAIL}\n\t#debug\n}\n\nblock {\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"matthewpi/vscode-caddyfile-support#13 - bad formatting\",\n\t\t\tinput: `{\n\temail {$ACMEEMAIL}\n\t#debug\n\t}\n\n\tblock {\n\t}\n`,\n\t\t\texpect: `{\n\temail {$ACMEEMAIL}\n\t#debug\n}\n\nblock {\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"keep heredoc as-is\",\n\t\t\tinput: `block {\n\theredoc <<HEREDOC\n\tHere's more than one space       Here's more than one space\n\tHEREDOC\n}\n`,\n\t\t\texpect: `block {\n\theredoc <<HEREDOC\n\tHere's more than one space       Here's more than one space\n\tHEREDOC\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"Mixing heredoc with regular part\",\n\t\t\tinput: `block {\n\theredoc <<HEREDOC\n\tHere's more than one space       Here's more than one space\n\tHEREDOC\n\trespond \"More than one space will be eaten\"     200\n}\n\nblock2 {\n\theredoc <<HEREDOC\n\tHere's more than one space       Here's more than one space\n\tHEREDOC\n\trespond \"More than one space will be eaten\" 200\n}\n`,\n\t\t\texpect: `block {\n\theredoc <<HEREDOC\n\tHere's more than one space       Here's more than one space\n\tHEREDOC\n\trespond \"More than one space will be eaten\" 200\n}\n\nblock2 {\n\theredoc <<HEREDOC\n\tHere's more than one space       Here's more than one space\n\tHEREDOC\n\trespond \"More than one space will be eaten\" 200\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"Heredoc as regular token\",\n\t\t\tinput: `block {\n\theredoc <<HEREDOC                                 \"More than one space will be eaten\"\n}\n`,\n\t\t\texpect: `block {\n\theredoc <<HEREDOC \"More than one space will be eaten\"\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"Escape heredoc\",\n\t\t\tinput: `block {\n\theredoc 
\\<<HEREDOC\n\trespond \"More than one space will be eaten\"                           200\n}\n`,\n\t\t\texpect: `block {\n\theredoc \\<<HEREDOC\n\trespond \"More than one space will be eaten\" 200\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"Preserve braces wrapped by backquotes\",\n\t\t\tinput:       \"block {respond `All braces should remain: {{now | date \\\"2006\\\"}}`}\",\n\t\t\texpect:      \"block {respond `All braces should remain: {{now | date \\\"2006\\\"}}`}\",\n\t\t},\n\t\t{\n\t\t\tdescription: \"Preserve braces wrapped by quotes\",\n\t\t\tinput:       \"block {respond \\\"All braces should remain: {{now | date `2006`}}\\\"}\",\n\t\t\texpect:      \"block {respond \\\"All braces should remain: {{now | date `2006`}}\\\"}\",\n\t\t},\n\t\t{\n\t\t\tdescription: \"Preserve quoted backticks and backticked quotes\",\n\t\t\tinput:       \"block { respond \\\"`\\\" } block { respond `\\\"`}\",\n\t\t\texpect:      \"block {\\n\\trespond \\\"`\\\"\\n}\\n\\nblock {\\n\\trespond `\\\"`\\n}\",\n\t\t},\n\t\t{\n\t\t\tdescription: \"No trailing space on line before env variable\",\n\t\t\tinput: `{\n\ta\n\n\t{$ENV_VAR}\n}\n`,\n\t\t\texpect: `{\n\ta\n\n\t{$ENV_VAR}\n}\n`,\n\t\t},\n\t\t{\n\t\t\tdescription: \"issue #7425: multiline backticked string indentation\",\n\t\t\tinput: `https://localhost:8953 {\n    respond ` + \"`\" + `Here are some random numbers:\n\n{{randNumeric 16}}\n\nHope this helps.` + \"`\" + `\n}`,\n\t\t\texpect: \"https://localhost:8953 {\\n\\trespond `Here are some random numbers:\\n\\n{{randNumeric 16}}\\n\\nHope this helps.`\\n}\",\n\t\t},\n\t} {\n\t\t// the formatter should output a trailing newline,\n\t\t// even if the tests aren't written to expect that\n\t\tif !strings.HasSuffix(tc.expect, \"\\n\") {\n\t\t\ttc.expect += \"\\n\"\n\t\t}\n\n\t\tactual := Format([]byte(tc.input))\n\n\t\tif string(actual) != tc.expect {\n\t\t\tt.Errorf(\"\\n[TEST %d: %s]\\n====== EXPECTED ======\\n%s\\n====== ACTUAL ======\\n%s^^^^^^^^^^^^^^^^^^^^^\",\n\t\t\t\ti, 
tc.description, string(tc.expect), string(actual))\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/importargs.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// parseVariadic determines if the token is a variadic placeholder,\n// and if so, determines the index range (start/end) of args to use.\n// Returns a boolean signaling whether a variadic placeholder was found,\n// and the start and end indices.\nfunc parseVariadic(token Token, argCount int) (bool, int, int) {\n\tif !strings.HasPrefix(token.Text, \"{args[\") {\n\t\treturn false, 0, 0\n\t}\n\tif !strings.HasSuffix(token.Text, \"]}\") {\n\t\treturn false, 0, 0\n\t}\n\n\targRange := strings.TrimSuffix(strings.TrimPrefix(token.Text, \"{args[\"), \"]}\")\n\tif argRange == \"\" {\n\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\"Placeholder \"+token.Text+\" cannot have an empty index\",\n\t\t\tzap.String(\"file\", token.File+\":\"+strconv.Itoa(token.Line)), zap.Strings(\"import_chain\", token.imports))\n\t\treturn false, 0, 0\n\t}\n\n\tstart, end, found := strings.Cut(argRange, \":\")\n\n\t// If no \":\" delimiter is found, this is not a variadic.\n\t// The replacer will pick this up.\n\tif !found {\n\t\treturn false, 0, 0\n\t}\n\n\t// A valid token may contain several placeholders, and\n\t// they may be separated by \":\". 
It's not variadic.\n\t// https://github.com/caddyserver/caddy/issues/5716\n\tif strings.Contains(start, \"}\") || strings.Contains(end, \"{\") {\n\t\treturn false, 0, 0\n\t}\n\n\tvar (\n\t\tstartIndex = 0\n\t\tendIndex   = argCount\n\t\terr        error\n\t)\n\tif start != \"\" {\n\t\tstartIndex, err = strconv.Atoi(start)\n\t\tif err != nil {\n\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\"Variadic placeholder \"+token.Text+\" has an invalid start index\",\n\t\t\t\tzap.String(\"file\", token.File+\":\"+strconv.Itoa(token.Line)), zap.Strings(\"import_chain\", token.imports))\n\t\t\treturn false, 0, 0\n\t\t}\n\t}\n\tif end != \"\" {\n\t\tendIndex, err = strconv.Atoi(end)\n\t\tif err != nil {\n\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\"Variadic placeholder \"+token.Text+\" has an invalid end index\",\n\t\t\t\tzap.String(\"file\", token.File+\":\"+strconv.Itoa(token.Line)), zap.Strings(\"import_chain\", token.imports))\n\t\t\treturn false, 0, 0\n\t\t}\n\t}\n\n\t// bound check\n\tif startIndex < 0 || startIndex > endIndex || endIndex > argCount {\n\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\"Variadic placeholder \"+token.Text+\" indices are out of bounds, only \"+strconv.Itoa(argCount)+\" argument(s) exist\",\n\t\t\tzap.String(\"file\", token.File+\":\"+strconv.Itoa(token.Line)), zap.Strings(\"import_chain\", token.imports))\n\t\treturn false, 0, 0\n\t}\n\treturn true, startIndex, endIndex\n}\n\n// makeArgsReplacer prepares a Replacer which can replace\n// non-variadic args placeholders in imported tokens.\nfunc makeArgsReplacer(args []string) *caddy.Replacer {\n\trepl := caddy.NewEmptyReplacer()\n\trepl.Map(func(key string) (any, bool) {\n\t\t// TODO: Remove the deprecated {args.*} placeholder\n\t\t// support at some point in the future\n\t\tif matches := argsRegexpIndexDeprecated.FindStringSubmatch(key); len(matches) > 0 {\n\t\t\t// What's matched may be a substring of the key\n\t\t\tif matches[0] != key {\n\t\t\t\treturn nil, 
false\n\t\t\t}\n\n\t\t\tvalue, err := strconv.Atoi(matches[1])\n\t\t\tif err != nil {\n\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\t\"Placeholder {args.\" + matches[1] + \"} has an invalid index\")\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\tif value >= len(args) {\n\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\t\"Placeholder {args.\" + matches[1] + \"} index is out of bounds, only \" + strconv.Itoa(len(args)) + \" argument(s) exist\")\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\"Placeholder {args.\" + matches[1] + \"} deprecated, use {args[\" + matches[1] + \"]} instead\")\n\t\t\treturn args[value], true\n\t\t}\n\n\t\t// Handle args[*] form\n\t\tif matches := argsRegexpIndex.FindStringSubmatch(key); len(matches) > 0 {\n\t\t\t// What's matched may be a substring of the key\n\t\t\tif matches[0] != key {\n\t\t\t\treturn nil, false\n\t\t\t}\n\n\t\t\tif strings.Contains(matches[1], \":\") {\n\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\t\"Variadic placeholder {args[\" + matches[1] + \"]} must be a token on its own\")\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\tvalue, err := strconv.Atoi(matches[1])\n\t\t\tif err != nil {\n\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\t\"Placeholder {args[\" + matches[1] + \"]} has an invalid index\")\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\tif value >= len(args) {\n\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\n\t\t\t\t\t\"Placeholder {args[\" + matches[1] + \"]} index is out of bounds, only \" + strconv.Itoa(len(args)) + \" argument(s) exist\")\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\treturn args[value], true\n\t\t}\n\n\t\t// Not an args placeholder, ignore\n\t\treturn nil, false\n\t})\n\treturn repl\n}\n\nvar (\n\targsRegexpIndexDeprecated = regexp.MustCompile(`args\\.(.+)`)\n\targsRegexpIndex           = regexp.MustCompile(`args\\[(.+)]`)\n)\n"
  },
  {
    "path": "caddyconfig/caddyfile/importgraph.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"fmt\"\n\t\"slices\"\n)\n\ntype adjacency map[string][]string\n\ntype importGraph struct {\n\tnodes map[string]struct{}\n\tedges adjacency\n}\n\nfunc (i *importGraph) addNode(name string) {\n\tif i.nodes == nil {\n\t\ti.nodes = make(map[string]struct{})\n\t}\n\tif _, exists := i.nodes[name]; exists {\n\t\treturn\n\t}\n\ti.nodes[name] = struct{}{}\n}\n\nfunc (i *importGraph) addNodes(names []string) {\n\tfor _, name := range names {\n\t\ti.addNode(name)\n\t}\n}\n\nfunc (i *importGraph) removeNode(name string) {\n\tdelete(i.nodes, name)\n}\n\nfunc (i *importGraph) removeNodes(names []string) {\n\tfor _, name := range names {\n\t\ti.removeNode(name)\n\t}\n}\n\nfunc (i *importGraph) addEdge(from, to string) error {\n\tif !i.exists(from) || !i.exists(to) {\n\t\treturn fmt.Errorf(\"one of the nodes does not exist\")\n\t}\n\n\tif i.willCycle(to, from) {\n\t\treturn fmt.Errorf(\"a cycle of imports exists between %s and %s\", from, to)\n\t}\n\n\tif i.areConnected(from, to) {\n\t\t// if connected, there's nothing to do\n\t\treturn nil\n\t}\n\n\tif i.nodes == nil {\n\t\ti.nodes = make(map[string]struct{})\n\t}\n\tif i.edges == nil {\n\t\ti.edges = make(adjacency)\n\t}\n\n\ti.edges[from] = append(i.edges[from], to)\n\treturn nil\n}\n\nfunc (i *importGraph) addEdges(from string, tos []string) error {\n\tfor _, 
to := range tos {\n\t\terr := i.addEdge(from, to)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (i *importGraph) areConnected(from, to string) bool {\n\tal, ok := i.edges[from]\n\tif !ok {\n\t\treturn false\n\t}\n\treturn slices.Contains(al, to)\n}\n\nfunc (i *importGraph) willCycle(from, to string) bool {\n\tcollector := make(map[string]bool)\n\n\tvar visit func(string)\n\tvisit = func(start string) {\n\t\tif !collector[start] {\n\t\t\tcollector[start] = true\n\t\t\tfor _, v := range i.edges[start] {\n\t\t\t\tvisit(v)\n\t\t\t}\n\t\t}\n\t}\n\n\tfor _, v := range i.edges[from] {\n\t\tvisit(v)\n\t}\n\tfor k := range collector {\n\t\tif to == k {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\nfunc (i *importGraph) exists(key string) bool {\n\t_, exists := i.nodes[key]\n\treturn exists\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/lexer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"regexp\"\n\t\"strings\"\n\t\"unicode\"\n)\n\ntype (\n\t// lexer is a utility which can get values, token by\n\t// token, from a Reader. A token is a word, and tokens\n\t// are separated by whitespace. A word can be enclosed\n\t// in quotes if it contains whitespace.\n\tlexer struct {\n\t\treader       *bufio.Reader\n\t\ttoken        Token\n\t\tline         int\n\t\tskippedLines int\n\t}\n\n\t// Token represents a single parsable unit.\n\tToken struct {\n\t\tFile          string\n\t\timports       []string\n\t\tLine          int\n\t\tText          string\n\t\twasQuoted     rune // enclosing quote character, if any\n\t\theredocMarker string\n\t\tsnippetName   string\n\t}\n)\n\n// Tokenize takes bytes as input and lexes it into\n// a list of tokens that can be parsed as a Caddyfile.\n// Also takes a filename to fill the token's File as\n// the source of the tokens, which is important to\n// determine relative paths for `import` directives.\nfunc Tokenize(input []byte, filename string) ([]Token, error) {\n\tl := lexer{}\n\tif err := l.load(bytes.NewReader(input)); err != nil {\n\t\treturn nil, err\n\t}\n\tvar tokens []Token\n\tfor {\n\t\tfound, err := l.next()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif !found 
{\n\t\t\tbreak\n\t\t}\n\t\tl.token.File = filename\n\t\ttokens = append(tokens, l.token)\n\t}\n\treturn tokens, nil\n}\n\n// load prepares the lexer to scan an input for tokens.\n// It discards any leading byte order mark.\nfunc (l *lexer) load(input io.Reader) error {\n\tl.reader = bufio.NewReader(input)\n\tl.line = 1\n\n\t// discard byte order mark, if present\n\tfirstCh, _, err := l.reader.ReadRune()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif firstCh != 0xFEFF {\n\t\terr := l.reader.UnreadRune()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// next loads the next token into the lexer.\n// A token is delimited by whitespace, unless\n// the token starts with a quotes character (\")\n// in which case the token goes until the closing\n// quotes (the enclosing quotes are not included).\n// Inside quoted strings, quotes may be escaped\n// with a preceding \\ character. No other chars\n// may be escaped. The rest of the line is skipped\n// if a \"#\" character is read in. Returns true if\n// a token was loaded; false otherwise.\nfunc (l *lexer) next() (bool, error) {\n\tvar val []rune\n\tvar comment, quoted, btQuoted, inHeredoc, heredocEscaped, escaped bool\n\tvar heredocMarker string\n\n\tmakeToken := func(quoted rune) bool {\n\t\tl.token.Text = string(val)\n\t\tl.token.wasQuoted = quoted\n\t\tl.token.heredocMarker = heredocMarker\n\t\treturn true\n\t}\n\n\tfor {\n\t\t// Read a character in; if err then if we had\n\t\t// read some characters, make a token. 
If we\n\t\t// reached EOF, then no more tokens to read.\n\t\t// If no EOF, then we had a problem.\n\t\tch, _, err := l.reader.ReadRune()\n\t\tif err != nil {\n\t\t\tif len(val) > 0 {\n\t\t\t\tif inHeredoc {\n\t\t\t\t\treturn false, fmt.Errorf(\"incomplete heredoc <<%s on line #%d, expected ending marker %s\", heredocMarker, l.line+l.skippedLines, heredocMarker)\n\t\t\t\t}\n\n\t\t\t\treturn makeToken(0), nil\n\t\t\t}\n\t\t\tif err == io.EOF {\n\t\t\t\treturn false, nil\n\t\t\t}\n\t\t\treturn false, err\n\t\t}\n\n\t\t// detect whether we have the start of a heredoc\n\t\tif (!quoted && !btQuoted) && (!inHeredoc && !heredocEscaped) &&\n\t\t\tlen(val) > 1 && string(val[:2]) == \"<<\" {\n\t\t\t// a space means it's just a regular token and not a heredoc\n\t\t\tif ch == ' ' {\n\t\t\t\treturn makeToken(0), nil\n\t\t\t}\n\n\t\t\t// skip CR, we only care about LF\n\t\t\tif ch == '\\r' {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// after hitting a newline, we know that the heredoc marker\n\t\t\t// is the characters after the two << and the newline.\n\t\t\t// we reset the val because the heredoc is syntax we don't\n\t\t\t// want to keep.\n\t\t\tif ch == '\\n' {\n\t\t\t\tif len(val) == 2 {\n\t\t\t\t\treturn false, fmt.Errorf(\"missing opening heredoc marker on line #%d; must contain only alpha-numeric characters, dashes and underscores; got empty string\", l.line)\n\t\t\t\t}\n\n\t\t\t\t// check if there's too many <\n\t\t\t\tif string(val[:3]) == \"<<<\" {\n\t\t\t\t\treturn false, fmt.Errorf(\"too many '<' for heredoc on line #%d; only use two, for example <<END\", l.line)\n\t\t\t\t}\n\n\t\t\t\theredocMarker = string(val[2:])\n\t\t\t\tif !heredocMarkerRegexp.Match([]byte(heredocMarker)) {\n\t\t\t\t\treturn false, fmt.Errorf(\"heredoc marker on line #%d must contain only alpha-numeric characters, dashes and underscores; got '%s'\", l.line, heredocMarker)\n\t\t\t\t}\n\n\t\t\t\tinHeredoc = true\n\t\t\t\tl.skippedLines++\n\t\t\t\tval = nil\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tval = 
append(val, ch)\n\t\t\tcontinue\n\t\t}\n\n\t\t// if we're in a heredoc, all characters are read as-is\n\t\tif inHeredoc {\n\t\t\tval = append(val, ch)\n\n\t\t\tif ch == '\\n' {\n\t\t\t\tl.skippedLines++\n\t\t\t}\n\n\t\t\t// check if we're done, i.e. that the last few characters are the marker\n\t\t\tif len(val) >= len(heredocMarker) && heredocMarker == string(val[len(val)-len(heredocMarker):]) {\n\t\t\t\t// set the final value\n\t\t\t\tval, err = l.finalizeHeredoc(val, heredocMarker)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn false, err\n\t\t\t\t}\n\n\t\t\t\t// set the line counter, and make the token\n\t\t\t\tl.line += l.skippedLines\n\t\t\t\tl.skippedLines = 0\n\t\t\t\treturn makeToken('<'), nil\n\t\t\t}\n\n\t\t\t// stay in the heredoc until we find the ending marker\n\t\t\tcontinue\n\t\t}\n\n\t\t// track whether we found an escape '\\' for the next\n\t\t// iteration to be contextually aware\n\t\tif !escaped && !btQuoted && ch == '\\\\' {\n\t\t\tescaped = true\n\t\t\tcontinue\n\t\t}\n\n\t\tif quoted || btQuoted {\n\t\t\tif quoted && escaped {\n\t\t\t\t// all is literal in quoted area,\n\t\t\t\t// so only escape quotes\n\t\t\t\tif ch != '\"' {\n\t\t\t\t\tval = append(val, '\\\\')\n\t\t\t\t}\n\t\t\t\tescaped = false\n\t\t\t} else {\n\t\t\t\tif (quoted && ch == '\"') || (btQuoted && ch == '`') {\n\t\t\t\t\treturn makeToken(ch), nil\n\t\t\t\t}\n\t\t\t}\n\t\t\t// allow quoted text to continue on multiple lines\n\t\t\tif ch == '\\n' {\n\t\t\t\tl.line += 1 + l.skippedLines\n\t\t\t\tl.skippedLines = 0\n\t\t\t}\n\t\t\t// collect this character as part of the quoted token\n\t\t\tval = append(val, ch)\n\t\t\tcontinue\n\t\t}\n\n\t\tif unicode.IsSpace(ch) {\n\t\t\t// ignore CR altogether, we only actually care about LF (\\n)\n\t\t\tif ch == '\\r' {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t// end of the line\n\t\t\tif ch == '\\n' {\n\t\t\t\t// newlines can be escaped to chain arguments\n\t\t\t\t// onto multiple lines; else, increment the line count\n\t\t\t\tif escaped 
{\n\t\t\t\t\tl.skippedLines++\n\t\t\t\t\tescaped = false\n\t\t\t\t} else {\n\t\t\t\t\tl.line += 1 + l.skippedLines\n\t\t\t\t\tl.skippedLines = 0\n\t\t\t\t}\n\t\t\t\t// comments (#) are single-line only\n\t\t\t\tcomment = false\n\t\t\t}\n\t\t\t// any kind of space means we're at the end of this token\n\t\t\tif len(val) > 0 {\n\t\t\t\treturn makeToken(0), nil\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// comments must be at the start of a token,\n\t\t// in other words, preceded by space or newline\n\t\tif ch == '#' && len(val) == 0 {\n\t\t\tcomment = true\n\t\t}\n\t\tif comment {\n\t\t\tcontinue\n\t\t}\n\n\t\tif len(val) == 0 {\n\t\t\tl.token = Token{Line: l.line}\n\t\t\tif ch == '\"' {\n\t\t\t\tquoted = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif ch == '`' {\n\t\t\t\tbtQuoted = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif escaped {\n\t\t\t// allow escaping the first < to skip the heredoc syntax\n\t\t\tif ch == '<' {\n\t\t\t\theredocEscaped = true\n\t\t\t} else {\n\t\t\t\tval = append(val, '\\\\')\n\t\t\t}\n\t\t\tescaped = false\n\t\t}\n\n\t\tval = append(val, ch)\n\t}\n}\n\n// finalizeHeredoc takes the runes read as the heredoc text and the marker,\n// and processes the text to strip leading whitespace, returning the final\n// value without the leading whitespace.\nfunc (l *lexer) finalizeHeredoc(val []rune, marker string) ([]rune, error) {\n\tstringVal := string(val)\n\n\t// find the last newline of the heredoc, which is where the contents end\n\tlastNewline := strings.LastIndex(stringVal, \"\\n\")\n\n\t// collapse the content, then split into separate lines\n\tlines := strings.Split(stringVal[:lastNewline+1], \"\\n\")\n\n\t// figure out how much whitespace we need to strip from the front of every line\n\t// by getting the string that precedes the marker, on the last line\n\tpaddingToStrip := stringVal[lastNewline+1 : len(stringVal)-len(marker)]\n\n\t// iterate over each line and strip the whitespace from the front\n\tvar out string\n\tfor lineNum, lineText := range 
lines[:len(lines)-1] {\n\t\tif lineText == \"\" || lineText == \"\\r\" {\n\t\t\tout += \"\\n\"\n\t\t\tcontinue\n\t\t}\n\n\t\t// find an exact match for the padding\n\t\tindex := strings.Index(lineText, paddingToStrip)\n\n\t\t// if the padding doesn't match exactly at the start then we can't safely strip\n\t\tif index != 0 {\n\t\t\tcleanLineText := strings.TrimRight(lineText, \"\\r\\n\")\n\t\t\treturn nil, fmt.Errorf(\"mismatched leading whitespace in heredoc <<%s on line #%d [%s], expected whitespace [%s] to match the closing marker\", marker, l.line+lineNum+1, cleanLineText, paddingToStrip)\n\t\t}\n\n\t\t// strip, then append the line, with the newline, to the output.\n\t\t// also removes all \"\\r\" because Windows.\n\t\tout += strings.ReplaceAll(lineText[len(paddingToStrip):]+\"\\n\", \"\\r\", \"\")\n\t}\n\n\t// Remove the trailing newline from the loop\n\tif len(out) > 0 && out[len(out)-1] == '\\n' {\n\t\tout = out[:len(out)-1]\n\t}\n\n\t// return the final value\n\treturn []rune(out), nil\n}\n\n// Quoted returns true if the token was enclosed in quotes\n// (i.e. 
double quotes, backticks, or heredoc).\nfunc (t Token) Quoted() bool {\n\treturn t.wasQuoted > 0\n}\n\n// NumLineBreaks counts how many line breaks are in the token text.\nfunc (t Token) NumLineBreaks() int {\n\tlineBreaks := strings.Count(t.Text, \"\\n\")\n\tif t.wasQuoted == '<' {\n\t\t// heredocs have an extra linebreak because the opening\n\t\t// delimiter is on its own line and is not included in the\n\t\t// token Text itself, and the trailing newline is removed.\n\t\tlineBreaks += 2\n\t}\n\treturn lineBreaks\n}\n\n// Clone returns a deep copy of the token.\nfunc (t Token) Clone() Token {\n\treturn Token{\n\t\tFile:          t.File,\n\t\timports:       append([]string{}, t.imports...),\n\t\tLine:          t.Line,\n\t\tText:          t.Text,\n\t\twasQuoted:     t.wasQuoted,\n\t\theredocMarker: t.heredocMarker,\n\t\tsnippetName:   t.snippetName,\n\t}\n}\n\nvar heredocMarkerRegexp = regexp.MustCompile(\"^[A-Za-z0-9_-]+$\")\n\n// isNextOnNewLine tests whether t2 is on a different line from t1\nfunc isNextOnNewLine(t1, t2 Token) bool {\n\t// If the second token is from a different file,\n\t// we can assume it's from a different line\n\tif t1.File != t2.File {\n\t\treturn true\n\t}\n\n\t// If the second token is from a different import chain,\n\t// we can assume it's from a different line\n\tif len(t1.imports) != len(t2.imports) {\n\t\treturn true\n\t}\n\tfor i, im := range t1.imports {\n\t\tif im != t2.imports[i] {\n\t\t\treturn true\n\t\t}\n\t}\n\n\t// If the first token (incl line breaks) ends\n\t// on a line earlier than the next token,\n\t// then the second token is on a new line\n\treturn t1.Line+t1.NumLineBreaks() < t2.Line\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/lexer_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage caddyfile\n\nfunc FuzzTokenize(input []byte) int {\n\ttokens, err := Tokenize(input, \"Caddyfile\")\n\tif err != nil {\n\t\treturn 0\n\t}\n\tif len(tokens) == 0 {\n\t\treturn -1\n\t}\n\treturn 1\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/lexer_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"testing\"\n)\n\nfunc TestLexer(t *testing.T) {\n\ttestCases := []struct {\n\t\tinput        []byte\n\t\texpected     []Token\n\t\texpectErr    bool\n\t\terrorMessage string\n\t}{\n\t\t{\n\t\t\tinput: []byte(`host:123`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"host:123\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`host:123\n\n\t\t\t\t\tdirective`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"host:123\"},\n\t\t\t\t{Line: 3, Text: \"directive\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`host:123 {\n\t\t\t\t\t\tdirective\n\t\t\t\t\t}`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"host:123\"},\n\t\t\t\t{Line: 1, Text: \"{\"},\n\t\t\t\t{Line: 2, Text: \"directive\"},\n\t\t\t\t{Line: 3, Text: \"}\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`host:123 { directive }`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"host:123\"},\n\t\t\t\t{Line: 1, Text: \"{\"},\n\t\t\t\t{Line: 1, Text: \"directive\"},\n\t\t\t\t{Line: 1, Text: \"}\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`host:123 {\n\t\t\t\t\t\t#comment\n\t\t\t\t\t\tdirective\n\t\t\t\t\t\t# comment\n\t\t\t\t\t\tfoobar # another comment\n\t\t\t\t\t}`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"host:123\"},\n\t\t\t\t{Line: 1, Text: \"{\"},\n\t\t\t\t{Line: 3, Text: 
\"directive\"},\n\t\t\t\t{Line: 5, Text: \"foobar\"},\n\t\t\t\t{Line: 6, Text: \"}\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`host:123 {\n\t\t\t\t\t\t# hash inside string is not a comment\n\t\t\t\t\t\tredir / /some/#/path\n\t\t\t\t\t}`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"host:123\"},\n\t\t\t\t{Line: 1, Text: \"{\"},\n\t\t\t\t{Line: 3, Text: \"redir\"},\n\t\t\t\t{Line: 3, Text: \"/\"},\n\t\t\t\t{Line: 3, Text: \"/some/#/path\"},\n\t\t\t\t{Line: 4, Text: \"}\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"# comment at beginning of file\\n# comment at beginning of line\\nhost:123\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 3, Text: \"host:123\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`a \"quoted value\" b\n\t\t\t\t\tfoobar`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"a\"},\n\t\t\t\t{Line: 1, Text: \"quoted value\"},\n\t\t\t\t{Line: 1, Text: \"b\"},\n\t\t\t\t{Line: 2, Text: \"foobar\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`A \"quoted \\\"value\\\" inside\" B`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"A\"},\n\t\t\t\t{Line: 1, Text: `quoted \"value\" inside`},\n\t\t\t\t{Line: 1, Text: \"B\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"An escaped \\\"newline\\\\\\ninside\\\" quotes\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"An\"},\n\t\t\t\t{Line: 1, Text: \"escaped\"},\n\t\t\t\t{Line: 1, Text: \"newline\\\\\\ninside\"},\n\t\t\t\t{Line: 2, Text: \"quotes\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"An escaped newline\\\\\\noutside quotes\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"An\"},\n\t\t\t\t{Line: 1, Text: \"escaped\"},\n\t\t\t\t{Line: 1, Text: \"newline\"},\n\t\t\t\t{Line: 1, Text: \"outside\"},\n\t\t\t\t{Line: 1, Text: \"quotes\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"line1\\\\\\nescaped\\nline2\\nline3\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"line1\"},\n\t\t\t\t{Line: 1, Text: \"escaped\"},\n\t\t\t\t{Line: 3, Text: 
\"line2\"},\n\t\t\t\t{Line: 4, Text: \"line3\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"line1\\\\\\nescaped1\\\\\\nescaped2\\nline4\\nline5\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"line1\"},\n\t\t\t\t{Line: 1, Text: \"escaped1\"},\n\t\t\t\t{Line: 1, Text: \"escaped2\"},\n\t\t\t\t{Line: 4, Text: \"line4\"},\n\t\t\t\t{Line: 5, Text: \"line5\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`\"unescapable\\ in quotes\"`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `unescapable\\ in quotes`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`\"don't\\escape\"`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `don't\\escape`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`\"don't\\\\escape\"`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `don't\\\\escape`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`un\\escapable`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `un\\escapable`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`A \"quoted value with line\n\t\t\t\t\tbreak inside\" {\n\t\t\t\t\t\tfoobar\n\t\t\t\t\t}`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"A\"},\n\t\t\t\t{Line: 1, Text: \"quoted value with line\\n\\t\\t\\t\\t\\tbreak inside\"},\n\t\t\t\t{Line: 2, Text: \"{\"},\n\t\t\t\t{Line: 3, Text: \"foobar\"},\n\t\t\t\t{Line: 4, Text: \"}\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`\"C:\\php\\php-cgi.exe\"`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `C:\\php\\php-cgi.exe`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`empty \"\" string`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `empty`},\n\t\t\t\t{Line: 1, Text: ``},\n\t\t\t\t{Line: 1, Text: `string`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"skip those\\r\\nCR characters\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"skip\"},\n\t\t\t\t{Line: 1, Text: \"those\"},\n\t\t\t\t{Line: 2, Text: \"CR\"},\n\t\t\t\t{Line: 2, Text: \"characters\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"\\xEF\\xBB\\xBF:8080\"), 
// test with leading byte order mark\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \":8080\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"simple `backtick quoted` string\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `simple`},\n\t\t\t\t{Line: 1, Text: `backtick quoted`},\n\t\t\t\t{Line: 1, Text: `string`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"multiline `backtick\\nquoted\\n` string\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `multiline`},\n\t\t\t\t{Line: 1, Text: \"backtick\\nquoted\\n\"},\n\t\t\t\t{Line: 3, Text: `string`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"nested `\\\"quotes inside\\\" backticks` string\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `nested`},\n\t\t\t\t{Line: 1, Text: `\"quotes inside\" backticks`},\n\t\t\t\t{Line: 1, Text: `string`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"reverse-nested \\\"`backticks` inside\\\" quotes\"),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `reverse-nested`},\n\t\t\t\t{Line: 1, Text: \"`backticks` inside\"},\n\t\t\t\t{Line: 1, Text: `quotes`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\ncontent\nEOF same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `heredoc`},\n\t\t\t\t{Line: 1, Text: \"content\"},\n\t\t\t\t{Line: 3, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<VERY-LONG-MARKER\ncontent\nVERY-LONG-MARKER same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `heredoc`},\n\t\t\t\t{Line: 1, Text: \"content\"},\n\t\t\t\t{Line: 3, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\nextra-newline\n\nEOF same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `heredoc`},\n\t\t\t\t{Line: 1, Text: \"extra-newline\\n\"},\n\t\t\t\t{Line: 4, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\nEOF\n\tHERE same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: 
`heredoc`},\n\t\t\t\t{Line: 1, Text: ``},\n\t\t\t\t{Line: 3, Text: `HERE`},\n\t\t\t\t{Line: 3, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n\t\tEOF same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `heredoc`},\n\t\t\t\t{Line: 1, Text: \"\"},\n\t\t\t\t{Line: 2, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n\tcontent\n\tEOF same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `heredoc`},\n\t\t\t\t{Line: 1, Text: \"content\"},\n\t\t\t\t{Line: 3, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`prev-line\n\theredoc <<EOF\n\t\tmulti\n\t\tline\n\t\tcontent\n\tEOF same-line-arg\n\tnext-line\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `prev-line`},\n\t\t\t\t{Line: 2, Text: `heredoc`},\n\t\t\t\t{Line: 2, Text: \"\\tmulti\\n\\tline\\n\\tcontent\"},\n\t\t\t\t{Line: 6, Text: `same-line-arg`},\n\t\t\t\t{Line: 7, Text: `next-line`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`escaped-heredoc \\<< >>`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `escaped-heredoc`},\n\t\t\t\t{Line: 1, Text: `<<`},\n\t\t\t\t{Line: 1, Text: `>>`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`not-a-heredoc <EOF\n\tcontent\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `not-a-heredoc`},\n\t\t\t\t{Line: 1, Text: `<EOF`},\n\t\t\t\t{Line: 2, Text: `content`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`not-a-heredoc <<<EOF content`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `not-a-heredoc`},\n\t\t\t\t{Line: 1, Text: `<<<EOF`},\n\t\t\t\t{Line: 1, Text: `content`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`not-a-heredoc \"<<\" \">>\"`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `not-a-heredoc`},\n\t\t\t\t{Line: 1, Text: `<<`},\n\t\t\t\t{Line: 1, Text: `>>`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`not-a-heredoc << >>`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: 
`not-a-heredoc`},\n\t\t\t\t{Line: 1, Text: `<<`},\n\t\t\t\t{Line: 1, Text: `>>`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`not-a-heredoc <<HERE SAME LINE\n\tcontent\n\tHERE same-line-arg\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `not-a-heredoc`},\n\t\t\t\t{Line: 1, Text: `<<HERE`},\n\t\t\t\t{Line: 1, Text: `SAME`},\n\t\t\t\t{Line: 1, Text: `LINE`},\n\t\t\t\t{Line: 2, Text: `content`},\n\t\t\t\t{Line: 3, Text: `HERE`},\n\t\t\t\t{Line: 3, Text: `same-line-arg`},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<s\n\t\t\t�\n\t\t\ts\n\t`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: `heredoc`},\n\t\t\t\t{Line: 1, Text: \"�\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(\"\\u000Aheredoc \\u003C\\u003C\\u0073\\u0073\\u000A\\u00BF\\u0057\\u0001\\u0000\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u003D\\u001F\\u000A\\u0073\\u0073\\u000A\\u00BF\\u0057\\u0001\\u0000\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u003D\\u001F\\u000A\\u00BF\\u00BF\\u0057\\u0001\\u0000\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u003D\\u001F\"),\n\t\t\texpected: []Token{\n\t\t\t\t{\n\t\t\t\t\tLine: 2,\n\t\t\t\t\tText: \"heredoc\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tLine: 2,\n\t\t\t\t\tText: \"\\u00BF\\u0057\\u0001\\u0000\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u003D\\u001F\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tLine: 5,\n\t\t\t\t\tText: \"\\u00BF\\u0057\\u0001\\u0000\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u003D\\u001F\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tLine: 6,\n\t\t\t\t\tText: \"\\u00BF\\u00BF\\u0057\\u0001\\u0000\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u00FF\\u003D\\u001F\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:        []byte(\"not-a-heredoc <<\\n\"),\n\t\t\texpectErr:    true,\n\t\t\terrorMessage: \"missing opening heredoc marker on line #1; must contain only alpha-numeric characters, dashes and underscores; got empty string\",\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<<EOF\n\tcontent\n\tEOF 
same-line-arg\n\t`),\n\t\t\texpectErr:    true,\n\t\t\terrorMessage: \"too many '<' for heredoc on line #1; only use two, for example <<END\",\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n\tcontent\n\t`),\n\t\t\texpectErr:    true,\n\t\t\terrorMessage: \"incomplete heredoc <<EOF on line #3, expected ending marker EOF\",\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n\tcontent\n\t\tEOF\n\t`),\n\t\t\texpectErr:    true,\n\t\t\terrorMessage: \"mismatched leading whitespace in heredoc <<EOF on line #2 [\\tcontent], expected whitespace [\\t\\t] to match the closing marker\",\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n        content\n\t\tEOF\n\t`),\n\t\t\texpectErr:    true,\n\t\t\terrorMessage: \"mismatched leading whitespace in heredoc <<EOF on line #2 [        content], expected whitespace [\\t\\t] to match the closing marker\",\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\nThe next line is a blank line\n\nThe previous line is a blank line\nEOF`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"heredoc\"},\n\t\t\t\t{Line: 1, Text: \"The next line is a blank line\\n\\nThe previous line is a blank line\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n\tOne tab indented heredoc with blank next line\n\n\tOne tab indented heredoc with blank previous line\n\tEOF`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"heredoc\"},\n\t\t\t\t{Line: 1, Text: \"One tab indented heredoc with blank next line\\n\\nOne tab indented heredoc with blank previous line\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\nThe next line is a blank line with one tab\n\t\nThe previous line is a blank line with one tab\nEOF`),\n\t\t\texpected: []Token{\n\t\t\t\t{Line: 1, Text: \"heredoc\"},\n\t\t\t\t{Line: 1, Text: \"The next line is a blank line with one tab\\n\\t\\nThe previous line is a blank line with one tab\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: []byte(`heredoc <<EOF\n\t\tThe next line is a blank line with one tab less 
than the correct indentation\n\t\n\t\tThe previous line is a blank line with one tab less than the correct indentation\n\t\tEOF`),\n\t\t\texpectErr:    true,\n\t\t\terrorMessage: \"mismatched leading whitespace in heredoc <<EOF on line #3 [\\t], expected whitespace [\\t\\t] to match the closing marker\",\n\t\t},\n\t}\n\n\tfor i, testCase := range testCases {\n\t\tactual, err := Tokenize(testCase.input, \"\")\n\t\tif testCase.expectErr {\n\t\t\tif err == nil {\n\t\t\t\tt.Fatalf(\"expected error, got actual: %v\", actual)\n\t\t\t}\n\t\t\tif err.Error() != testCase.errorMessage {\n\t\t\t\tt.Fatalf(\"expected error '%v', got: %v\", testCase.errorMessage, err)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tlexerCompare(t, i, testCase.expected, actual)\n\t}\n}\n\nfunc lexerCompare(t *testing.T, n int, expected, actual []Token) {\n\tif len(expected) != len(actual) {\n\t\tt.Fatalf(\"Test case %d: expected %d token(s) but got %d\", n, len(expected), len(actual))\n\t}\n\n\tfor i := 0; i < len(actual) && i < len(expected); i++ {\n\t\tif actual[i].Line != expected[i].Line {\n\t\t\tt.Fatalf(\"Test case %d token %d ('%s'): expected line %d but was line %d\",\n\t\t\t\tn, i, expected[i].Text, expected[i].Line, actual[i].Line)\n\t\t}\n\t\tif actual[i].Text != expected[i].Text {\n\t\t\tt.Fatalf(\"Test case %d token %d: expected text '%s' but was '%s'\",\n\t\t\t\tn, i, expected[i].Text, actual[i].Text)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/parse.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// Parse parses the input just enough to group tokens, in\n// order, by server block. No further parsing is performed.\n// Server blocks are returned in the order in which they appear.\n// Directives that do not appear in validDirectives will cause\n// an error. 
If you do not want to check for valid directives,\n// pass in nil instead.\n//\n// Environment variables in {$ENVIRONMENT_VARIABLE} notation\n// will be replaced before parsing begins.\nfunc Parse(filename string, input []byte) ([]ServerBlock, error) {\n\t// unfortunately, we must copy the input because parsing must\n\t// remain a read-only operation, but we have to expand environment\n\t// variables before we parse, which changes the underlying array (#4422)\n\tinputCopy := make([]byte, len(input))\n\tcopy(inputCopy, input)\n\n\ttokens, err := allTokens(filename, inputCopy)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tp := parser{\n\t\tDispenser: NewDispenser(tokens),\n\t\timportGraph: importGraph{\n\t\t\tnodes: make(map[string]struct{}),\n\t\t\tedges: make(adjacency),\n\t\t},\n\t}\n\treturn p.parseAll()\n}\n\n// allTokens lexes the entire input, but does not parse it.\n// It returns all the tokens from the input, unstructured\n// and in order. It may mutate input as it expands env vars.\nfunc allTokens(filename string, input []byte) ([]Token, error) {\n\treturn Tokenize(replaceEnvVars(input), filename)\n}\n\n// replaceEnvVars replaces all occurrences of environment variables.\n// It mutates the underlying array and returns the updated slice.\nfunc replaceEnvVars(input []byte) []byte {\n\tvar offset int\n\tfor {\n\t\tbegin := bytes.Index(input[offset:], spanOpen)\n\t\tif begin < 0 {\n\t\t\tbreak\n\t\t}\n\t\tbegin += offset // make beginning relative to input, not offset\n\t\tend := bytes.Index(input[begin+len(spanOpen):], spanClose)\n\t\tif end < 0 {\n\t\t\tbreak\n\t\t}\n\t\tend += begin + len(spanOpen) // make end relative to input, not begin\n\n\t\t// get the name; if there is no name, skip it\n\t\tenvString := input[begin+len(spanOpen) : end]\n\t\tif len(envString) == 0 {\n\t\t\toffset = end + len(spanClose)\n\t\t\tcontinue\n\t\t}\n\n\t\t// split the string into a key and an optional default\n\t\tenvParts := strings.SplitN(string(envString), 
envVarDefaultDelimiter, 2)\n\n\t\t// do a lookup for the env var, replace with the default if not found\n\t\tenvVarValue, found := os.LookupEnv(envParts[0])\n\t\tif !found && len(envParts) == 2 {\n\t\t\tenvVarValue = envParts[1]\n\t\t}\n\n\t\t// get the value of the environment variable\n\t\t// note that this causes one-level deep chaining\n\t\tenvVarBytes := []byte(envVarValue)\n\n\t\t// splice in the value\n\t\tinput = append(input[:begin],\n\t\t\tappend(envVarBytes, input[end+len(spanClose):]...)...)\n\n\t\t// continue at the end of the replacement\n\t\toffset = begin + len(envVarBytes)\n\t}\n\treturn input\n}\n\ntype parser struct {\n\t*Dispenser\n\tblock           ServerBlock // current server block being parsed\n\teof             bool        // if we encounter a valid EOF in a hard place\n\tdefinedSnippets map[string][]Token\n\tnesting         int\n\timportGraph     importGraph\n}\n\nfunc (p *parser) parseAll() ([]ServerBlock, error) {\n\tvar blocks []ServerBlock\n\n\tfor p.Next() {\n\t\terr := p.parseOne()\n\t\tif err != nil {\n\t\t\treturn blocks, err\n\t\t}\n\t\tif len(p.block.Keys) > 0 || len(p.block.Segments) > 0 {\n\t\t\tblocks = append(blocks, p.block)\n\t\t}\n\t\tif p.nesting > 0 {\n\t\t\treturn blocks, p.EOFErr()\n\t\t}\n\t}\n\n\treturn blocks, nil\n}\n\nfunc (p *parser) parseOne() error {\n\tp.block = ServerBlock{}\n\treturn p.begin()\n}\n\nfunc (p *parser) begin() error {\n\tif len(p.tokens) == 0 {\n\t\treturn nil\n\t}\n\n\terr := p.addresses()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif p.eof {\n\t\t// this happens if the Caddyfile consists of only\n\t\t// a line of addresses and nothing else\n\t\treturn nil\n\t}\n\n\tif ok, name := p.isNamedRoute(); ok {\n\t\t// we just need a dummy leading token to ease parsing later\n\t\tnameToken := p.Token()\n\t\tnameToken.Text = name\n\n\t\t// named routes only have one key, the route name\n\t\tp.block.Keys = []Token{nameToken}\n\t\tp.block.IsNamedRoute = true\n\n\t\t// get all the tokens from the block, 
including the braces\n\t\ttokens, err := p.blockTokens(true)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ttokens = append([]Token{nameToken}, tokens...)\n\t\tp.block.Segments = []Segment{tokens}\n\t\treturn nil\n\t}\n\n\tif ok, name := p.isSnippet(); ok {\n\t\tif p.definedSnippets == nil {\n\t\t\tp.definedSnippets = map[string][]Token{}\n\t\t}\n\t\tif _, found := p.definedSnippets[name]; found {\n\t\t\treturn p.Errf(\"redeclaration of previously declared snippet %s\", name)\n\t\t}\n\t\t// consume all tokens til matched close brace\n\t\ttokens, err := p.blockTokens(false)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Just as we need to track which file the token comes from, we need to\n\t\t// keep track of which snippet the token comes from. This is helpful\n\t\t// in tracking import cycles across files/snippets by namespacing them.\n\t\t// Without this, we end up with false-positives in cycle-detection.\n\t\tfor k, v := range tokens {\n\t\t\tv.snippetName = name\n\t\t\ttokens[k] = v\n\t\t}\n\t\tp.definedSnippets[name] = tokens\n\t\t// empty block keys so we don't save this block as a real server.\n\t\tp.block.Keys = nil\n\t\treturn nil\n\t}\n\n\treturn p.blockContents()\n}\n\nfunc (p *parser) addresses() error {\n\tvar expectingAnother bool\n\n\tfor {\n\t\tvalue := p.Val()\n\t\ttoken := p.Token()\n\n\t\t// Reject request matchers if trying to define them globally\n\t\tif strings.HasPrefix(value, \"@\") {\n\t\t\treturn p.Errf(\"request matchers may not be defined globally, they must be in a site block; found %s\", value)\n\t\t}\n\n\t\t// Special case: import directive replaces tokens during parse-time\n\t\tif value == \"import\" && p.isNewLine() {\n\t\t\terr := p.doImport(0)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// Open brace definitely indicates end of addresses\n\t\tif value == \"{\" {\n\t\t\tif expectingAnother {\n\t\t\t\treturn p.Errf(\"Expected another address but had '%s' - check for extra comma\", 
value)\n\t\t\t}\n\t\t\t// Mark this server block as being defined with braces.\n\t\t\t// This is used to provide a better error message when\n\t\t\t// the user may have tried to define two server blocks\n\t\t\t// without having used braces, which are required in\n\t\t\t// that case.\n\t\t\tp.block.HasBraces = true\n\t\t\tbreak\n\t\t}\n\n\t\t// Users commonly forget to place a space between the address and the '{'\n\t\tif strings.HasSuffix(value, \"{\") {\n\t\t\treturn p.Errf(\"Site addresses cannot end with a curly brace: '%s' - put a space between the token and the brace\", value)\n\t\t}\n\n\t\tif value != \"\" { // empty token possible if user typed \"\"\n\t\t\t// Trailing comma indicates another address will follow, which\n\t\t\t// may possibly be on the next line\n\t\t\tif value[len(value)-1] == ',' {\n\t\t\t\tvalue = value[:len(value)-1]\n\t\t\t\texpectingAnother = true\n\t\t\t} else {\n\t\t\t\texpectingAnother = false // but we may still see another one on this line\n\t\t\t}\n\n\t\t\t// If there's a comma here, it's probably because they didn't use a space\n\t\t\t// between their two domains, e.g. 
\"foo.com,bar.com\", which would not be\n\t\t\t// parsed as two separate site addresses.\n\t\t\tif strings.Contains(value, \",\") {\n\t\t\t\treturn p.Errf(\"Site addresses cannot contain a comma ',': '%s' - put a space after the comma to separate site addresses\", value)\n\t\t\t}\n\n\t\t\t// After the above, a comma surrounded by spaces would result\n\t\t\t// in an empty token which we should ignore\n\t\t\tif value != \"\" {\n\t\t\t\t// Add the token as a site address\n\t\t\t\ttoken.Text = value\n\t\t\t\tp.block.Keys = append(p.block.Keys, token)\n\t\t\t}\n\t\t}\n\n\t\t// Advance token and possibly break out of loop or return error\n\t\thasNext := p.Next()\n\t\tif expectingAnother && !hasNext {\n\t\t\treturn p.EOFErr()\n\t\t}\n\t\tif !hasNext {\n\t\t\tp.eof = true\n\t\t\tbreak // EOF\n\t\t}\n\t\tif !expectingAnother && p.isNewLine() {\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (p *parser) blockContents() error {\n\terrOpenCurlyBrace := p.openCurlyBrace()\n\tif errOpenCurlyBrace != nil {\n\t\t// single-server configs don't need curly braces\n\t\tp.cursor--\n\t}\n\n\terr := p.directives()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// only look for close curly brace if there was an opening\n\tif errOpenCurlyBrace == nil {\n\t\terr = p.closeCurlyBrace()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// directives parses through all the lines for directives\n// and it expects the next token to be the first\n// directive. 
It goes until EOF or closing curly brace\n// which ends the server block.\nfunc (p *parser) directives() error {\n\tfor p.Next() {\n\t\t// end of server block\n\t\tif p.Val() == \"}\" {\n\t\t\t// p.nesting has already been decremented\n\t\t\tbreak\n\t\t}\n\n\t\t// special case: import directive replaces tokens during parse-time\n\t\tif p.Val() == \"import\" {\n\t\t\terr := p.doImport(1)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tp.cursor-- // cursor is advanced when we continue, so roll back one more\n\t\t\tcontinue\n\t\t}\n\n\t\t// normal case: parse a directive as a new segment\n\t\t// (a \"segment\" is a line which starts with a directive\n\t\t// and which ends at the end of the line or at the end of\n\t\t// the block that is opened at the end of the line)\n\t\tif err := p.directive(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// doImport swaps out the import directive and its argument\n// (a total of 2 tokens) with the tokens in the specified file\n// or globbing pattern. When the function returns, the cursor\n// is on the token before where the import directive was. 
In\n// other words, call Next() to access the first token that was\n// imported.\nfunc (p *parser) doImport(nesting int) error {\n\t// syntax checks\n\tif !p.NextArg() {\n\t\treturn p.ArgErr()\n\t}\n\timportPattern := p.Val()\n\tif importPattern == \"\" {\n\t\treturn p.Err(\"Import requires a non-empty filepath\")\n\t}\n\n\t// grab remaining args as placeholder replacements\n\targs := p.RemainingArgs()\n\n\t// set up a replacer for non-variadic args replacement\n\trepl := makeArgsReplacer(args)\n\n\t// grab all the tokens (if it exists) from within a block that follows the import\n\tvar blockTokens []Token\n\tfor currentNesting := p.Nesting(); p.NextBlock(currentNesting); {\n\t\tblockTokens = append(blockTokens, p.Token())\n\t}\n\t// initialize with size 1\n\tblockMapping := make(map[string][]Token, 1)\n\tif len(blockTokens) > 0 {\n\t\t// use such tokens to create a new dispenser, and then use it to parse each block\n\t\tbd := NewDispenser(blockTokens)\n\n\t\t// one iteration processes one sub-block inside the import\n\t\tfor bd.Next() {\n\t\t\tcurrentMappingKey := bd.Val()\n\n\t\t\tif currentMappingKey == \"{\" {\n\t\t\t\treturn p.Err(\"anonymous blocks are not supported\")\n\t\t\t}\n\n\t\t\t// load up all arguments (if there even are any)\n\t\t\tcurrentMappingTokens := bd.RemainingArgsAsTokens()\n\n\t\t\t// load up the entire block\n\t\t\tfor mappingNesting := bd.Nesting(); bd.NextBlock(mappingNesting); {\n\t\t\t\tcurrentMappingTokens = append(currentMappingTokens, bd.Token())\n\t\t\t}\n\n\t\t\tblockMapping[currentMappingKey] = currentMappingTokens\n\t\t}\n\t}\n\n\t// splice out the import directive and its arguments\n\t// (2 tokens, plus the length of args)\n\ttokensBefore := p.tokens[:p.cursor-1-len(args)-len(blockTokens)]\n\ttokensAfter := p.tokens[p.cursor+1:]\n\tvar importedTokens []Token\n\tvar nodes []string\n\n\t// first check snippets. 
That is a simple, non-recursive replacement\n\tif p.definedSnippets != nil && p.definedSnippets[importPattern] != nil {\n\t\timportedTokens = p.definedSnippets[importPattern]\n\t\tif len(importedTokens) > 0 {\n\t\t\t// just grab the first one\n\t\t\tnodes = append(nodes, fmt.Sprintf(\"%s:%s\", importedTokens[0].File, importedTokens[0].snippetName))\n\t\t}\n\t} else {\n\t\t// make path relative to the file of the _token_ being processed rather\n\t\t// than current working directory (issue #867) and then use glob to get\n\t\t// list of matching filenames\n\t\tabsFile, err := caddy.FastAbs(p.Dispenser.File())\n\t\tif err != nil {\n\t\t\treturn p.Errf(\"Failed to get absolute path of file: %s: %v\", p.Dispenser.File(), err)\n\t\t}\n\n\t\tvar matches []string\n\t\tvar globPattern string\n\t\tif !filepath.IsAbs(importPattern) {\n\t\t\tglobPattern = filepath.Join(filepath.Dir(absFile), importPattern)\n\t\t} else {\n\t\t\tglobPattern = importPattern\n\t\t}\n\t\tif strings.Count(globPattern, \"*\") > 1 || strings.Count(globPattern, \"?\") > 1 ||\n\t\t\t(strings.Contains(globPattern, \"[\") && strings.Contains(globPattern, \"]\")) {\n\t\t\t// See issue #2096 - a pattern with many glob expansions can hang for too long\n\t\t\treturn p.Errf(\"Glob pattern may only contain one wildcard (*), but has others: %s\", globPattern)\n\t\t}\n\t\tmatches, err = filepath.Glob(globPattern)\n\t\tif err != nil {\n\t\t\treturn p.Errf(\"Failed to use import pattern %s: %v\", importPattern, err)\n\t\t}\n\t\tif len(matches) == 0 {\n\t\t\tif strings.ContainsAny(globPattern, \"*?[]\") {\n\t\t\t\tcaddy.Log().Warn(\"No files matching import glob pattern\", zap.String(\"pattern\", importPattern))\n\t\t\t} else {\n\t\t\t\treturn p.Errf(\"File to import not found: %s\", importPattern)\n\t\t\t}\n\t\t} else {\n\t\t\t// See issue #5295 - should skip any files that start with a . 
when iterating over them.\n\t\t\tsep := string(filepath.Separator)\n\t\t\tsegGlobPattern := strings.Split(globPattern, sep)\n\t\t\tif strings.HasPrefix(segGlobPattern[len(segGlobPattern)-1], \"*\") {\n\t\t\t\tvar tmpMatches []string\n\t\t\t\tfor _, m := range matches {\n\t\t\t\t\tseg := strings.Split(m, sep)\n\t\t\t\t\tif !strings.HasPrefix(seg[len(seg)-1], \".\") {\n\t\t\t\t\t\ttmpMatches = append(tmpMatches, m)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tmatches = tmpMatches\n\t\t\t}\n\t\t}\n\n\t\t// collect all the imported tokens\n\t\tfor _, importFile := range matches {\n\t\t\tnewTokens, err := p.doSingleImport(importFile)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\timportedTokens = append(importedTokens, newTokens...)\n\t\t}\n\t\tnodes = matches\n\t}\n\n\tnodeName := p.File()\n\tif p.Token().snippetName != \"\" {\n\t\tnodeName += fmt.Sprintf(\":%s\", p.Token().snippetName)\n\t}\n\tp.importGraph.addNode(nodeName)\n\tp.importGraph.addNodes(nodes)\n\tif err := p.importGraph.addEdges(nodeName, nodes); err != nil {\n\t\tp.importGraph.removeNodes(nodes)\n\t\treturn err\n\t}\n\n\t// copy the tokens so we don't overwrite p.definedSnippets\n\ttokensCopy := make([]Token, 0, len(importedTokens))\n\n\tvar (\n\t\tmaybeSnippet   bool\n\t\tmaybeSnippetId bool\n\t\tindex          int\n\t)\n\n\t// run the argument replacer on the tokens\n\t// (ranging over a slice in Go yields a copy of each element,\n\t// and append also copies values, so the originals are untouched)\n\tfor i, token := range importedTokens {\n\t\t// update the token's imports to refer to the import directive's filename, line number, and snippet name (if there is one)\n\t\tif token.snippetName != \"\" {\n\t\t\ttoken.imports = append(token.imports, fmt.Sprintf(\"%s:%d (import %s)\", p.File(), p.Line(), token.snippetName))\n\t\t} else {\n\t\t\ttoken.imports = append(token.imports, fmt.Sprintf(\"%s:%d (import)\", p.File(), p.Line()))\n\t\t}\n\n\t\t// naive detection of snippets: a snippet definition can only follow the name + block\n\t\t// format; this won't 
check for nesting correctness or any other error, that's what parser does.\n\t\tif !maybeSnippet && nesting == 0 {\n\t\t\t// first of the line\n\t\t\tif i == 0 || isNextOnNewLine(tokensCopy[len(tokensCopy)-1], token) {\n\t\t\t\tindex = 0\n\t\t\t} else {\n\t\t\t\tindex++\n\t\t\t}\n\n\t\t\tif index == 0 && len(token.Text) >= 3 && strings.HasPrefix(token.Text, \"(\") && strings.HasSuffix(token.Text, \")\") {\n\t\t\t\tmaybeSnippetId = true\n\t\t\t}\n\t\t}\n\n\t\tswitch token.Text {\n\t\tcase \"{\":\n\t\t\tnesting++\n\t\t\tif index == 1 && maybeSnippetId && nesting == 1 {\n\t\t\t\tmaybeSnippet = true\n\t\t\t\tmaybeSnippetId = false\n\t\t\t}\n\t\tcase \"}\":\n\t\t\tnesting--\n\t\t\tif nesting == 0 && maybeSnippet {\n\t\t\t\tmaybeSnippet = false\n\t\t\t}\n\t\t}\n\t\t// if it is {block}, we substitute with all tokens in the block\n\t\t// if it is {blocks.*}, we substitute with the tokens in the mapping for the *\n\t\tvar tokensToAdd []Token\n\t\tfoundBlockDirective := false\n\t\tswitch {\n\t\tcase token.Text == \"{block}\":\n\t\t\tfoundBlockDirective = true\n\t\t\ttokensToAdd = blockTokens\n\t\tcase strings.HasPrefix(token.Text, \"{blocks.\") && strings.HasSuffix(token.Text, \"}\"):\n\t\t\tfoundBlockDirective = true\n\t\t\t// {blocks.foo.bar} will be extracted to key `foo.bar`\n\t\t\tblockKey := strings.TrimPrefix(strings.TrimSuffix(token.Text, \"}\"), \"{blocks.\")\n\t\t\tval, ok := blockMapping[blockKey]\n\t\t\tif ok {\n\t\t\t\ttokensToAdd = val\n\t\t\t}\n\t\t}\n\n\t\tif foundBlockDirective {\n\t\t\ttokensCopy = append(tokensCopy, tokensToAdd...)\n\t\t\tcontinue\n\t\t}\n\n\t\tif maybeSnippet {\n\t\t\ttokensCopy = append(tokensCopy, token)\n\t\t\tcontinue\n\t\t}\n\n\t\tfoundVariadic, startIndex, endIndex := parseVariadic(token, len(args))\n\t\tif foundVariadic {\n\t\t\tfor _, arg := range args[startIndex:endIndex] {\n\t\t\t\ttoken.Text = arg\n\t\t\t\ttokensCopy = append(tokensCopy, token)\n\t\t\t}\n\t\t} else {\n\t\t\ttoken.Text = repl.ReplaceKnown(token.Text, 
\"\")\n\t\t\ttokensCopy = append(tokensCopy, token)\n\t\t}\n\t}\n\n\t// splice the imported tokens in the place of the import statement\n\t// and rewind cursor so Next() will land on first imported token\n\tp.tokens = append(tokensBefore, append(tokensCopy, tokensAfter...)...)\n\tp.cursor -= len(args) + len(blockTokens) + 1\n\n\treturn nil\n}\n\n// doSingleImport lexes the individual file at importFile and returns\n// its tokens or an error, if any.\nfunc (p *parser) doSingleImport(importFile string) ([]Token, error) {\n\tfile, err := os.Open(importFile)\n\tif err != nil {\n\t\treturn nil, p.Errf(\"Could not import %s: %v\", importFile, err)\n\t}\n\tdefer file.Close()\n\n\tif info, err := file.Stat(); err != nil {\n\t\treturn nil, p.Errf(\"Could not import %s: %v\", importFile, err)\n\t} else if info.IsDir() {\n\t\treturn nil, p.Errf(\"Could not import %s: is a directory\", importFile)\n\t}\n\n\tinput, err := io.ReadAll(file)\n\tif err != nil {\n\t\treturn nil, p.Errf(\"Could not read imported file %s: %v\", importFile, err)\n\t}\n\n\t// only warning in case of empty files\n\tif len(input) == 0 || len(strings.TrimSpace(string(input))) == 0 {\n\t\tcaddy.Log().Warn(\"Import file is empty\", zap.String(\"file\", importFile))\n\t\treturn []Token{}, nil\n\t}\n\n\timportedTokens, err := allTokens(importFile, input)\n\tif err != nil {\n\t\treturn nil, p.Errf(\"Could not read tokens while importing %s: %v\", importFile, err)\n\t}\n\n\t// Tack the file path onto these tokens so errors show the imported file's name\n\t// (we use full, absolute path to avoid bugs: issue #1892)\n\tfilename, err := caddy.FastAbs(importFile)\n\tif err != nil {\n\t\treturn nil, p.Errf(\"Failed to get absolute path of file: %s: %v\", importFile, err)\n\t}\n\tfor i := range importedTokens {\n\t\timportedTokens[i].File = filename\n\t}\n\n\treturn importedTokens, nil\n}\n\n// directive collects tokens until the directive's scope\n// closes (either end of line or end of curly brace block).\n// It 
expects the currently-loaded token to be a directive\n// (or } that ends a server block). The collected tokens\n// are loaded into the current server block for later use\n// by directive setup functions.\nfunc (p *parser) directive() error {\n\t// a segment is a list of tokens associated with this directive\n\tvar segment Segment\n\n\t// the directive itself is appended as a relevant token\n\tsegment = append(segment, p.Token())\n\n\tfor p.Next() {\n\t\tif p.Val() == \"{\" {\n\t\t\tp.nesting++\n\t\t\tif !p.isNextOnNewLine() && p.Token().wasQuoted == 0 {\n\t\t\t\treturn p.Err(\"Unexpected next token after '{' on same line\")\n\t\t\t}\n\t\t\tif p.isNewLine() {\n\t\t\t\treturn p.Err(\"Unexpected '{' on a new line; did you mean to place the '{' on the previous line?\")\n\t\t\t}\n\t\t} else if p.Val() == \"{}\" {\n\t\t\tif p.isNextOnNewLine() && p.Token().wasQuoted == 0 {\n\t\t\t\treturn p.Err(\"Unexpected '{}' at end of line\")\n\t\t\t}\n\t\t} else if p.isNewLine() && p.nesting == 0 {\n\t\t\tp.cursor-- // read too far\n\t\t\tbreak\n\t\t} else if p.Val() == \"}\" && p.nesting > 0 {\n\t\t\tp.nesting--\n\t\t} else if p.Val() == \"}\" && p.nesting == 0 {\n\t\t\treturn p.Err(\"Unexpected '}' because no matching opening brace\")\n\t\t} else if p.Val() == \"import\" && p.isNewLine() {\n\t\t\tif err := p.doImport(1); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tp.cursor-- // cursor is advanced when we continue, so roll back one more\n\t\t\tcontinue\n\t\t}\n\n\t\tsegment = append(segment, p.Token())\n\t}\n\n\tp.block.Segments = append(p.block.Segments, segment)\n\n\tif p.nesting > 0 {\n\t\treturn p.EOFErr()\n\t}\n\n\treturn nil\n}\n\n// openCurlyBrace expects the current token to be an\n// opening curly brace. This acts like an assertion\n// because it returns an error if the token is not\n// an opening curly brace. 
It does NOT advance the token.\nfunc (p *parser) openCurlyBrace() error {\n\tif p.Val() != \"{\" {\n\t\treturn p.SyntaxErr(\"{\")\n\t}\n\treturn nil\n}\n\n// closeCurlyBrace expects the current token to be\n// a closing curly brace. This acts like an assertion\n// because it returns an error if the token is not\n// a closing curly brace. It does NOT advance the token.\nfunc (p *parser) closeCurlyBrace() error {\n\tif p.Val() != \"}\" {\n\t\treturn p.SyntaxErr(\"}\")\n\t}\n\treturn nil\n}\n\nfunc (p *parser) isNamedRoute() (bool, string) {\n\tkeys := p.block.Keys\n\t// A named route block is a single key with parens, prefixed with &.\n\tif len(keys) == 1 && strings.HasPrefix(keys[0].Text, \"&(\") && strings.HasSuffix(keys[0].Text, \")\") {\n\t\treturn true, strings.TrimSuffix(keys[0].Text[2:], \")\")\n\t}\n\treturn false, \"\"\n}\n\nfunc (p *parser) isSnippet() (bool, string) {\n\tkeys := p.block.Keys\n\t// A snippet block is a single key with parens. Nothing else qualifies.\n\tif len(keys) == 1 && strings.HasPrefix(keys[0].Text, \"(\") && strings.HasSuffix(keys[0].Text, \")\") {\n\t\treturn true, strings.TrimSuffix(keys[0].Text[1:], \")\")\n\t}\n\treturn false, \"\"\n}\n\n// read and store everything in a block for later replay.\nfunc (p *parser) blockTokens(retainCurlies bool) ([]Token, error) {\n\t// block must have curlies.\n\terr := p.openCurlyBrace()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnesting := 1 // count our own nesting\n\ttokens := []Token{}\n\tif retainCurlies {\n\t\ttokens = append(tokens, p.Token())\n\t}\n\tfor p.Next() {\n\t\tif p.Val() == \"}\" {\n\t\t\tnesting--\n\t\t\tif nesting == 0 {\n\t\t\t\tif retainCurlies {\n\t\t\t\t\ttokens = append(tokens, p.Token())\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif p.Val() == \"{\" {\n\t\t\tnesting++\n\t\t}\n\t\ttokens = append(tokens, p.tokens[p.cursor])\n\t}\n\t// make sure we're matched up\n\tif nesting != 0 {\n\t\treturn nil, p.SyntaxErr(\"}\")\n\t}\n\treturn tokens, nil\n}\n\n// ServerBlock 
associates any number of keys from the\n// head of the server block with tokens, which are\n// grouped by segments.\ntype ServerBlock struct {\n\tHasBraces    bool\n\tKeys         []Token\n\tSegments     []Segment\n\tIsNamedRoute bool\n}\n\n// GetKeysText returns the text of each key in the server block.\nfunc (sb ServerBlock) GetKeysText() []string {\n\tres := make([]string, 0, len(sb.Keys))\n\tfor _, k := range sb.Keys {\n\t\tres = append(res, k.Text)\n\t}\n\treturn res\n}\n\n// DispenseDirective returns a dispenser that contains the\n// tokens of every segment in the server block whose\n// directive is dir.\nfunc (sb ServerBlock) DispenseDirective(dir string) *Dispenser {\n\tvar tokens []Token\n\tfor _, seg := range sb.Segments {\n\t\tif len(seg) > 0 && seg[0].Text == dir {\n\t\t\ttokens = append(tokens, seg...)\n\t\t}\n\t}\n\treturn NewDispenser(tokens)\n}\n\n// Segment is a list of tokens which begins with a directive\n// and ends at the end of the directive (either at the end of\n// the line, or at the end of a block it opens).\ntype Segment []Token\n\n// Directive returns the directive name for the segment.\n// The directive name is the text of the first token.\nfunc (s Segment) Directive() string {\n\tif len(s) > 0 {\n\t\treturn s[0].Text\n\t}\n\treturn \"\"\n}\n\n// spanOpen and spanClose are used to bound spans that\n// contain the name of an environment variable.\nvar (\n\tspanOpen, spanClose    = []byte{'{', '$'}, []byte{'}'}\n\tenvVarDefaultDelimiter = \":\"\n)\n"
  },
  {
    "path": "caddyconfig/caddyfile/parse_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyfile\n\nimport (\n\t\"bytes\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestParseVariadic(t *testing.T) {\n\targs := make([]string, 10)\n\tfor i, tc := range []struct {\n\t\tinput  string\n\t\tresult bool\n\t}{\n\t\t{\n\t\t\tinput:  \"\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[1\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"1]}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[:]}aaaaa\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"aaaaa{args[:]}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args.}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args.1}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[]}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[:]}\",\n\t\t\tresult: true,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[:]}\",\n\t\t\tresult: true,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[0:]}\",\n\t\t\tresult: true,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[:0]}\",\n\t\t\tresult: true,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[-1:]}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[:11]}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[10:0]}\",\n\t\t\tresult: false,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{args[0:10]}\",\n\t\t\tresult: true,\n\t\t},\n\t\t{\n\t\t\tinput:  
\"{args[0]}:{args[1]}:{args[2]}\",\n\t\t\tresult: false,\n\t\t},\n\t} {\n\t\ttoken := Token{\n\t\t\tFile: \"test\",\n\t\t\tLine: 1,\n\t\t\tText: tc.input,\n\t\t}\n\t\tif v, _, _ := parseVariadic(token, len(args)); v != tc.result {\n\t\t\tt.Errorf(\"Test %d error expectation failed Expected: %t, got %t\", i, tc.result, v)\n\t\t}\n\t}\n}\n\nfunc TestAllTokens(t *testing.T) {\n\tinput := []byte(\"a b c\\nd e\")\n\texpected := []string{\"a\", \"b\", \"c\", \"d\", \"e\"}\n\ttokens, err := allTokens(\"TestAllTokens\", input)\n\tif err != nil {\n\t\tt.Fatalf(\"Expected no error, got %v\", err)\n\t}\n\tif len(tokens) != len(expected) {\n\t\tt.Fatalf(\"Expected %d tokens, got %d\", len(expected), len(tokens))\n\t}\n\n\tfor i, val := range expected {\n\t\tif tokens[i].Text != val {\n\t\t\tt.Errorf(\"Token %d should be '%s' but was '%s'\", i, val, tokens[i].Text)\n\t\t}\n\t}\n}\n\nfunc TestParseOneAndImport(t *testing.T) {\n\ttestParseOne := func(input string) (ServerBlock, error) {\n\t\tp := testParser(input)\n\t\tp.Next() // parseOne doesn't call Next() to start, so we must\n\t\terr := p.parseOne()\n\t\treturn p.block, err\n\t}\n\n\tfor i, test := range []struct {\n\t\tinput     string\n\t\tshouldErr bool\n\t\tkeys      []string\n\t\tnumTokens []int // number of tokens to expect in each segment\n\t}{\n\t\t{`localhost`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{}},\n\n\t\t{`localhost\n\t\t  dir1`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{1}},\n\n\t\t{\n\t\t\t`localhost:1234\n\t\t  dir1 foo bar`, false, []string{\n\t\t\t\t\"localhost:1234\",\n\t\t\t}, []int{3},\n\t\t},\n\n\t\t{`localhost {\n\t\t    dir1\n\t\t  }`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{1}},\n\n\t\t{`localhost:1234 {\n\t\t    dir1 foo bar\n\t\t    dir2\n\t\t  }`, false, []string{\n\t\t\t\"localhost:1234\",\n\t\t}, []int{3, 1}},\n\n\t\t{`http://localhost https://localhost\n\t\t  dir1 foo bar`, false, 
[]string{\n\t\t\t\"http://localhost\",\n\t\t\t\"https://localhost\",\n\t\t}, []int{3}},\n\n\t\t{`http://localhost https://localhost {\n\t\t    dir1 foo bar\n\t\t  }`, false, []string{\n\t\t\t\"http://localhost\",\n\t\t\t\"https://localhost\",\n\t\t}, []int{3}},\n\n\t\t{`http://localhost, https://localhost {\n\t\t    dir1 foo bar\n\t\t  }`, false, []string{\n\t\t\t\"http://localhost\",\n\t\t\t\"https://localhost\",\n\t\t}, []int{3}},\n\n\t\t{`http://localhost, {\n\t\t  }`, true, []string{\n\t\t\t\"http://localhost\",\n\t\t}, []int{}},\n\n\t\t{`host1:80, http://host2.com\n\t\t  dir1 foo bar\n\t\t  dir2 baz`, false, []string{\n\t\t\t\"host1:80\",\n\t\t\t\"http://host2.com\",\n\t\t}, []int{3, 2}},\n\n\t\t{`http://host1.com,\n\t\t  http://host2.com,\n\t\t  https://host3.com`, false, []string{\n\t\t\t\"http://host1.com\",\n\t\t\t\"http://host2.com\",\n\t\t\t\"https://host3.com\",\n\t\t}, []int{}},\n\n\t\t{`http://host1.com:1234, https://host2.com\n\t\t  dir1 foo {\n\t\t    bar baz\n\t\t  }\n\t\t  dir2`, false, []string{\n\t\t\t\"http://host1.com:1234\",\n\t\t\t\"https://host2.com\",\n\t\t}, []int{6, 1}},\n\n\t\t{`127.0.0.1\n\t\t  dir1 {\n\t\t    bar baz\n\t\t  }\n\t\t  dir2 {\n\t\t    foo bar\n\t\t  }`, false, []string{\n\t\t\t\"127.0.0.1\",\n\t\t}, []int{5, 5}},\n\n\t\t{`localhost\n\t\t  dir1 {\n\t\t    foo`, true, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{3}},\n\n\t\t{`localhost\n\t\t  dir1 {\n\t\t  }`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{3}},\n\n\t\t{`localhost\n\t\t  dir1 {\n\t\t  } }`, true, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{}},\n\n\t\t{`localhost{\n\t\t    dir1\n\t\t  }`, true, []string{}, []int{}},\n\n\t\t{`localhost\n\t\t  dir1 {\n\t\t    nested {\n\t\t      foo\n\t\t    }\n\t\t  }\n\t\t  dir2 foo bar`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{7, 3}},\n\n\t\t{``, false, []string{}, []int{}},\n\n\t\t{`localhost\n\t\t  dir1 arg1\n\t\t  import testdata/import_test1.txt`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, 
[]int{2, 3, 1}},\n\n\t\t{`import testdata/import_test2.txt`, false, []string{\n\t\t\t\"host1\",\n\t\t}, []int{1, 2}},\n\n\t\t{`import testdata/not_found.txt`, true, []string{}, []int{}},\n\n\t\t// empty file should just log a warning, and result in no tokens\n\t\t{`import testdata/empty.txt`, false, []string{}, []int{}},\n\n\t\t{`import testdata/only_white_space.txt`, false, []string{}, []int{}},\n\n\t\t// import path/to/dir/* should skip any files that start with a . when iterating over them.\n\t\t{`localhost\n\t\t  dir1 arg1\n\t\t  import testdata/glob/*`, false, []string{\n\t\t\t\"localhost\",\n\t\t}, []int{2, 3, 1}},\n\n\t\t// import path/to/dir/.* should continue to read all dotfiles in a dir.\n\t\t{`import testdata/glob/.*`, false, []string{\n\t\t\t\"host1\",\n\t\t}, []int{1, 2}},\n\n\t\t{`\"\"`, false, []string{}, []int{}},\n\n\t\t{``, false, []string{}, []int{}},\n\n\t\t// Unexpected next token after '{' on same line\n\t\t{`localhost\n\t\t  dir1 { a b }`, true, []string{\"localhost\"}, []int{}},\n\n\t\t// Unexpected '{' on a new line\n\t\t{`localhost\n\t\tdir1\n\t\t{\n\t\t\ta b\n\t\t}`, true, []string{\"localhost\"}, []int{}},\n\n\t\t// Workaround with quotes\n\t\t{`localhost\n\t\t  dir1 \"{\" a b \"}\"`, false, []string{\"localhost\"}, []int{5}},\n\n\t\t// Unexpected '{}' at end of line\n\t\t{`localhost\n\t\t  dir1 {}`, true, []string{\"localhost\"}, []int{}},\n\t\t// Workaround with quotes\n\t\t{`localhost\n\t\t  dir1 \"{}\"`, false, []string{\"localhost\"}, []int{2}},\n\n\t\t// import with args\n\t\t{`import testdata/import_args0.txt a`, false, []string{\"a\"}, []int{}},\n\t\t{`import testdata/import_args1.txt a b`, false, []string{\"a\", \"b\"}, []int{}},\n\t\t{`import testdata/import_args*.txt a b`, false, []string{\"a\"}, []int{2}},\n\n\t\t// test cases found by fuzzing!\n\t\t{`import }{$\"`, true, []string{}, []int{}},\n\t\t{`import /*/*.txt`, true, []string{}, []int{}},\n\t\t{`import /???/?*?o`, true, []string{}, []int{}},\n\t\t{`import /??`, true, 
[]string{}, []int{}},\n\t\t{`import /[a-z]`, true, []string{}, []int{}},\n\t\t{`import {$}`, true, []string{}, []int{}},\n\t\t{`import {%}`, true, []string{}, []int{}},\n\t\t{`import {$$}`, true, []string{}, []int{}},\n\t\t{`import {%%}`, true, []string{}, []int{}},\n\t} {\n\t\tresult, err := testParseOne(test.input)\n\n\t\tif test.shouldErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected an error, but didn't get one\", i)\n\t\t}\n\t\tif !test.shouldErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error, but got: %v\", i, err)\n\t\t}\n\n\t\t// t.Logf(\"%+v\\n\", result)\n\t\tif len(result.Keys) != len(test.keys) {\n\t\t\tt.Errorf(\"Test %d: Expected %d keys, got %d\",\n\t\t\t\ti, len(test.keys), len(result.Keys))\n\t\t\tcontinue\n\t\t}\n\t\tfor j, addr := range result.GetKeysText() {\n\t\t\tif addr != test.keys[j] {\n\t\t\t\tt.Errorf(\"Test %d, key %d: Expected '%s', but was '%s'\",\n\t\t\t\t\ti, j, test.keys[j], addr)\n\t\t\t}\n\t\t}\n\n\t\tif len(result.Segments) != len(test.numTokens) {\n\t\t\tt.Errorf(\"Test %d: Expected %d segments, had %d\",\n\t\t\t\ti, len(test.numTokens), len(result.Segments))\n\t\t\tcontinue\n\t\t}\n\n\t\tfor j, seg := range result.Segments {\n\t\t\tif len(seg) != test.numTokens[j] {\n\t\t\t\tt.Errorf(\"Test %d, segment %d: Expected %d tokens, counted %d\",\n\t\t\t\t\ti, j, test.numTokens[j], len(seg))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestRecursiveImport(t *testing.T) {\n\ttestParseOne := func(input string) (ServerBlock, error) {\n\t\tp := testParser(input)\n\t\tp.Next() // parseOne doesn't call Next() to start, so we must\n\t\terr := p.parseOne()\n\t\treturn p.block, err\n\t}\n\n\tisExpected := func(got ServerBlock) bool {\n\t\ttextKeys := got.GetKeysText()\n\t\tif len(textKeys) != 1 || textKeys[0] != \"localhost\" {\n\t\t\tt.Errorf(\"got keys unexpected: expect localhost, got %v\", textKeys)\n\t\t\treturn false\n\t\t}\n\t\tif len(got.Segments) != 2 {\n\t\t\tt.Errorf(\"got wrong number of segments: expect 2, 
got %d\", len(got.Segments))\n\t\t\treturn false\n\t\t}\n\t\tif len(got.Segments[0]) != 1 || len(got.Segments[1]) != 2 {\n\t\t\tt.Errorf(\"got unexpected tokens: %v\", got.Segments)\n\t\t\treturn false\n\t\t}\n\t\treturn true\n\t}\n\n\trecursiveFile1, err := filepath.Abs(\"testdata/recursive_import_test1\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\trecursiveFile2, err := filepath.Abs(\"testdata/recursive_import_test2\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// test relative recursive import\n\terr = os.WriteFile(recursiveFile1, []byte(\n\t\t`localhost\n\t\tdir1\n\t\timport recursive_import_test2`), 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.Remove(recursiveFile1)\n\n\terr = os.WriteFile(recursiveFile2, []byte(\"dir2 1\"), 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.Remove(recursiveFile2)\n\n\t// import absolute path\n\tresult, err := testParseOne(\"import \" + recursiveFile1)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif !isExpected(result) {\n\t\tt.Error(\"absolute+relative import failed\")\n\t}\n\n\t// import relative path\n\tresult, err = testParseOne(\"import testdata/recursive_import_test1\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif !isExpected(result) {\n\t\tt.Error(\"relative+relative import failed\")\n\t}\n\n\t// test absolute recursive import\n\terr = os.WriteFile(recursiveFile1, []byte(\n\t\t`localhost\n\t\tdir1\n\t\timport `+recursiveFile2), 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// import absolute path\n\tresult, err = testParseOne(\"import \" + recursiveFile1)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif !isExpected(result) {\n\t\tt.Error(\"absolute+absolute import failed\")\n\t}\n\n\t// import relative path\n\tresult, err = testParseOne(\"import testdata/recursive_import_test1\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif !isExpected(result) {\n\t\tt.Error(\"relative+absolute import failed\")\n\t}\n}\n\nfunc TestDirectiveImport(t *testing.T) {\n\ttestParseOne := func(input 
string) (ServerBlock, error) {\n\t\tp := testParser(input)\n\t\tp.Next() // parseOne doesn't call Next() to start, so we must\n\t\terr := p.parseOne()\n\t\treturn p.block, err\n\t}\n\n\tisExpected := func(got ServerBlock) bool {\n\t\ttextKeys := got.GetKeysText()\n\t\tif len(textKeys) != 1 || textKeys[0] != \"localhost\" {\n\t\t\tt.Errorf(\"got keys unexpected: expect localhost, got %v\", textKeys)\n\t\t\treturn false\n\t\t}\n\t\tif len(got.Segments) != 2 {\n\t\t\tt.Errorf(\"got wrong number of segments: expect 2, got %d\", len(got.Segments))\n\t\t\treturn false\n\t\t}\n\t\tif len(got.Segments[0]) != 1 || len(got.Segments[1]) != 8 {\n\t\t\tt.Errorf(\"got unexpected tokens: %v\", got.Segments)\n\t\t\treturn false\n\t\t}\n\t\treturn true\n\t}\n\n\tdirectiveFile, err := filepath.Abs(\"testdata/directive_import_test\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\terr = os.WriteFile(directiveFile, []byte(`prop1 1\n\tprop2 2`), 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.Remove(directiveFile)\n\n\t// import from existing file\n\tresult, err := testParseOne(`localhost\n\tdir1\n\tproxy {\n\t\timport testdata/directive_import_test\n\t\ttransparent\n\t}`)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif !isExpected(result) {\n\t\tt.Error(\"directive import failed\")\n\t}\n\n\t// import from nonexistent file\n\t_, err = testParseOne(`localhost\n\tdir1\n\tproxy {\n\t\timport testdata/nonexistent_file\n\t\ttransparent\n\t}`)\n\tif err == nil {\n\t\tt.Fatal(\"expected error when importing a nonexistent file\")\n\t}\n}\n\nfunc TestParseAll(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tinput     string\n\t\tshouldErr bool\n\t\tkeys      [][]string // keys per server block, in order\n\t}{\n\t\t{`localhost`, false, [][]string{\n\t\t\t{\"localhost\"},\n\t\t}},\n\n\t\t{`localhost:1234`, false, [][]string{\n\t\t\t{\"localhost:1234\"},\n\t\t}},\n\n\t\t{`localhost:1234 {\n\t\t  }\n\t\t  localhost:2015 {\n\t\t  }`, false, 
[][]string{\n\t\t\t{\"localhost:1234\"},\n\t\t\t{\"localhost:2015\"},\n\t\t}},\n\n\t\t{`localhost:1234, http://host2`, false, [][]string{\n\t\t\t{\"localhost:1234\", \"http://host2\"},\n\t\t}},\n\n\t\t{`foo.example.com   ,   example.com`, false, [][]string{\n\t\t\t{\"foo.example.com\", \"example.com\"},\n\t\t}},\n\n\t\t{`localhost:1234, http://host2,`, true, [][]string{}},\n\n\t\t{`http://host1.com, http://host2.com {\n\t\t  }\n\t\t  https://host3.com, https://host4.com {\n\t\t  }`, false, [][]string{\n\t\t\t{\"http://host1.com\", \"http://host2.com\"},\n\t\t\t{\"https://host3.com\", \"https://host4.com\"},\n\t\t}},\n\n\t\t{`import testdata/import_glob*.txt`, false, [][]string{\n\t\t\t{\"glob0.host0\"},\n\t\t\t{\"glob0.host1\"},\n\t\t\t{\"glob1.host0\"},\n\t\t\t{\"glob2.host0\"},\n\t\t}},\n\n\t\t{`import notfound/*`, false, [][]string{}},        // glob needn't error with no matches\n\t\t{`import notfound/file.conf`, true, [][]string{}}, // but a specific file should\n\n\t\t// recursive self-import\n\t\t{`import testdata/import_recursive0.txt`, true, [][]string{}},\n\t\t{`import testdata/import_recursive3.txt\n\t\timport testdata/import_recursive1.txt`, true, [][]string{}},\n\n\t\t// cyclic imports\n\t\t{`(A) {\n\t\t\timport A\n\t\t}\n\t\t:80\n\t\timport A\n\t\t`, true, [][]string{}},\n\t\t{`(A) {\n\t\t\timport B\n\t\t}\n\t\t(B) {\n\t\t\timport A\n\t\t}\n\t\t:80\n\t\timport A\n\t\t`, true, [][]string{}},\n\t} {\n\t\tp := testParser(test.input)\n\t\tblocks, err := p.parseAll()\n\n\t\tif test.shouldErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected an error, but didn't get one\", i)\n\t\t}\n\t\tif !test.shouldErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error, but got: %v\", i, err)\n\t\t}\n\n\t\tif len(blocks) != len(test.keys) {\n\t\t\tt.Errorf(\"Test %d: Expected %d server blocks, got %d\",\n\t\t\t\ti, len(test.keys), len(blocks))\n\t\t\tcontinue\n\t\t}\n\t\tfor j, block := range blocks {\n\t\t\tif len(block.Keys) != len(test.keys[j]) 
{\n\t\t\t\tt.Errorf(\"Test %d: Expected %d keys in block %d, got %d: %v\",\n\t\t\t\t\ti, len(test.keys[j]), j, len(block.Keys), block.Keys)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfor k, addr := range block.GetKeysText() {\n\t\t\t\tif addr != test.keys[j][k] {\n\t\t\t\t\tt.Errorf(\"Test %d, block %d, key %d: Expected '%s', but got '%s'\",\n\t\t\t\t\t\ti, j, k, test.keys[j][k], addr)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestEnvironmentReplacement(t *testing.T) {\n\tos.Setenv(\"FOOBAR\", \"foobar\")\n\tos.Setenv(\"CHAINED\", \"$FOOBAR\")\n\n\tfor i, test := range []struct {\n\t\tinput  string\n\t\texpect string\n\t}{\n\t\t{\n\t\t\tinput:  \"\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo\",\n\t\t\texpect: \"foo\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$NOT_SET}\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{$NOT_SET}bar\",\n\t\t\texpect: \"foobar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$FOOBAR}\",\n\t\t\texpect: \"foobar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo {$FOOBAR} bar\",\n\t\t\texpect: \"foo foobar bar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{$FOOBAR}bar\",\n\t\t\texpect: \"foofoobarbar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo\\n{$FOOBAR}\\nbar\",\n\t\t\texpect: \"foo\\nfoobar\\nbar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$FOOBAR} {$FOOBAR}\",\n\t\t\texpect: \"foobar foobar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$FOOBAR}{$FOOBAR}\",\n\t\t\texpect: \"foobarfoobar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$CHAINED}\",\n\t\t\texpect: \"$FOOBAR\", // should not chain env expands\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$FOO:default}\",\n\t\t\texpect: \"default\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{$BAR:bar}baz\",\n\t\t\texpect: \"foobarbaz\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{$BAR:$FOOBAR}baz\",\n\t\t\texpect: \"foo$FOOBARbaz\", // should not chain env expands\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$FOOBAR\",\n\t\t\texpect: \"{$FOOBAR\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$LONGER_NAME $FOOBAR}\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$}\",\n\t\t\texpect: 
\"{$}\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$$}\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"{$\",\n\t\t\texpect: \"{$\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"}{$\",\n\t\t\texpect: \"}{$\",\n\t\t},\n\t} {\n\t\tactual := replaceEnvVars([]byte(test.input))\n\t\tif !bytes.Equal(actual, []byte(test.expect)) {\n\t\t\tt.Errorf(\"Test %d: Expected: '%s' but got '%s'\", i, test.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestImportReplacementInJSONWithBrace(t *testing.T) {\n\tfor i, test := range []struct {\n\t\targs   []string\n\t\tinput  string\n\t\texpect string\n\t}{\n\t\t{\n\t\t\targs:   []string{\"123\"},\n\t\t\tinput:  \"{args[0]}\",\n\t\t\texpect: \"123\",\n\t\t},\n\t\t{\n\t\t\targs:   []string{\"123\"},\n\t\t\tinput:  `{\"key\":\"{args[0]}\"}`,\n\t\t\texpect: `{\"key\":\"123\"}`,\n\t\t},\n\t\t{\n\t\t\targs:   []string{\"123\", \"123\"},\n\t\t\tinput:  `{\"key\":[{args[0]},{args[1]}]}`,\n\t\t\texpect: `{\"key\":[123,123]}`,\n\t\t},\n\t} {\n\t\trepl := makeArgsReplacer(test.args)\n\t\tactual := repl.ReplaceKnown(test.input, \"\")\n\t\tif actual != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected: '%s' but got '%s'\", i, test.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestSnippets(t *testing.T) {\n\tp := testParser(`\n\t\t(common) {\n\t\t\tgzip foo\n\t\t\terrors stderr\n\t\t}\n\t\thttp://example.com {\n\t\t\timport common\n\t\t}\n\t`)\n\tblocks, err := p.parseAll()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif len(blocks) != 1 {\n\t\tt.Fatalf(\"Expect exactly one server block. 
Got %d.\", len(blocks))\n\t}\n\tif actual, expected := blocks[0].GetKeysText()[0], \"http://example.com\"; expected != actual {\n\t\tt.Errorf(\"Expected server name to be '%s' but was '%s'\", expected, actual)\n\t}\n\tif len(blocks[0].Segments) != 2 {\n\t\tt.Fatalf(\"Server block should have tokens from import, got: %+v\", blocks[0])\n\t}\n\tif actual, expected := blocks[0].Segments[0][0].Text, \"gzip\"; expected != actual {\n\t\tt.Errorf(\"Expected argument to be '%s' but was '%s'\", expected, actual)\n\t}\n\tif actual, expected := blocks[0].Segments[1][1].Text, \"stderr\"; expected != actual {\n\t\tt.Errorf(\"Expected argument to be '%s' but was '%s'\", expected, actual)\n\t}\n}\n\nfunc writeStringToTempFileOrDie(t *testing.T, str string) (pathToFile string) {\n\tfile, err := os.CreateTemp(\"\", t.Name())\n\tif err != nil {\n\t\tpanic(err) // get a stack trace so we know where this was called from.\n\t}\n\tif _, err := file.WriteString(str); err != nil {\n\t\tpanic(err)\n\t}\n\tif err := file.Close(); err != nil {\n\t\tpanic(err)\n\t}\n\treturn file.Name()\n}\n\nfunc TestImportedFilesIgnoreNonDirectiveImportTokens(t *testing.T) {\n\tfileName := writeStringToTempFileOrDie(t, `\n\t\thttp://example.com {\n\t\t\t# This isn't an import directive, it's just an arg with value 'import'\n\t\t\tbasic_auth / import password\n\t\t}\n\t`)\n\t// Parse the root file that imports the other one.\n\tp := testParser(`import ` + fileName)\n\tblocks, err := p.parseAll()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tauth := blocks[0].Segments[0]\n\tline := auth[0].Text + \" \" + auth[1].Text + \" \" + auth[2].Text + \" \" + auth[3].Text\n\tif line != \"basic_auth / import password\" {\n\t\t// Previously, it would be changed to:\n\t\t//   basic_auth / import /path/to/test/dir/password\n\t\t// referencing a file that (probably) doesn't exist and changing the\n\t\t// password!\n\t\tt.Errorf(\"Expected basic_auth tokens to be 'basic_auth / import password' but got %#q\", 
line)\n\t}\n}\n\nfunc TestSnippetAcrossMultipleFiles(t *testing.T) {\n\t// Make the derived Caddyfile that expects (common) to be defined.\n\tfileName := writeStringToTempFileOrDie(t, `\n\t\thttp://example.com {\n\t\t\timport common\n\t\t}\n\t`)\n\n\t// Parse the root file that defines (common) and then imports the other one.\n\tp := testParser(`\n\t\t(common) {\n\t\t\tgzip foo\n\t\t}\n\t\timport ` + fileName + `\n\t`)\n\n\tblocks, err := p.parseAll()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif len(blocks) != 1 {\n\t\tt.Fatalf(\"Expect exactly one server block. Got %d.\", len(blocks))\n\t}\n\tif actual, expected := blocks[0].GetKeysText()[0], \"http://example.com\"; expected != actual {\n\t\tt.Errorf(\"Expected server name to be '%s' but was '%s'\", expected, actual)\n\t}\n\tif len(blocks[0].Segments) != 1 {\n\t\tt.Fatalf(\"Server block should have tokens from import\")\n\t}\n\tif actual, expected := blocks[0].Segments[0][0].Text, \"gzip\"; expected != actual {\n\t\tt.Errorf(\"Expected argument to be '%s' but was '%s'\", expected, actual)\n\t}\n}\n\nfunc TestRejectsGlobalMatcher(t *testing.T) {\n\tp := testParser(`\n\t\t@rejected path /foo\n\n\t\t(common) {\n\t\t\tgzip foo\n\t\t\terrors stderr\n\t\t}\n\n\t\thttp://example.com {\n\t\t\timport common\n\t\t}\n\t`)\n\t_, err := p.parseAll()\n\tif err == nil {\n\t\tt.Fatal(\"Expected an error, but got nil\")\n\t}\n\texpected := \"request matchers may not be defined globally, they must be in a site block; found @rejected, at Testfile:2\"\n\tif err.Error() != expected {\n\t\tt.Errorf(\"Expected error to be '%s' but got '%v'\", expected, err)\n\t}\n}\n\nfunc TestRejectAnonymousImportBlock(t *testing.T) {\n\tp := testParser(`\n\t\t(site) {\n\t\t\thttp://{args[0]} https://{args[0]} {\n\t\t\t\t{block}\n\t\t\t}\n\t\t}\n\n\t\timport site test.domain {\n\t\t\t{ \n\t\t\t\theader_up Host {host}\n\t\t\t\theader_up X-Real-IP {remote_host}\n\t\t\t}\n\t\t}\n\t`)\n\t_, err := p.parseAll()\n\tif err == nil {\n\t\tt.Fatal(\"Expected 
an error, but got nil\")\n\t}\n\texpected := \"anonymous blocks are not supported\"\n\tif !strings.HasPrefix(err.Error(), \"anonymous blocks are not supported\") {\n\t\tt.Errorf(\"Expected error to start with '%s' but got '%v'\", expected, err)\n\t}\n}\n\nfunc TestAcceptSiteImportWithBraces(t *testing.T) {\n\tp := testParser(`\n\t\t(site) {\n\t\t\thttp://{args[0]} https://{args[0]} {\n\t\t\t\t{block}\n\t\t\t}\n\t\t}\n\n\t\timport site test.domain {\n\t\t\treverse_proxy http://192.168.1.1:8080 { \n\t\t\t\theader_up Host {host}\n\t\t\t}\n\t\t}\n\t`)\n\t_, err := p.parseAll()\n\tif err != nil {\n\t\tt.Errorf(\"Expected error to be nil but got '%v'\", err)\n\t}\n}\n\nfunc testParser(input string) parser {\n\treturn parser{Dispenser: NewTestDispenser(input)}\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/empty.txt",
    "content": ""
  },
  {
    "path": "caddyconfig/caddyfile/testdata/glob/.dotfile.txt",
    "content": "host1 {\n\tdir1\n\tdir2 arg1\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/glob/import_test1.txt",
    "content": "dir2 arg1 arg2\ndir3"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_args0.txt",
    "content": "{args[0]}"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_args1.txt",
    "content": "{args[0]} {args[1]}"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_glob0.txt",
    "content": "glob0.host0 {\n\tdir2 arg1\n}\n\nglob0.host1 {\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_glob1.txt",
    "content": "glob1.host0 {\n\tdir1\n\tdir2 arg1\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_glob2.txt",
    "content": "glob2.host0 {\n\tdir2 arg1\n}\n"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_recursive0.txt",
    "content": "import import_recursive0.txt"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_recursive1.txt",
    "content": "import import_recursive2.txt"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_recursive2.txt",
    "content": "import import_recursive3.txt"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_recursive3.txt",
    "content": "import import_recursive1.txt"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_test1.txt",
    "content": "dir2 arg1 arg2\ndir3"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/import_test2.txt",
    "content": "host1 {\n\tdir1\n\tdir2 arg1\n}"
  },
  {
    "path": "caddyconfig/caddyfile/testdata/only_white_space.txt",
    "content": "\n\n \n  \n　\n\n\n"
  },
  {
    "path": "caddyconfig/configadapters.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyconfig\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// Adapter is a type which can adapt a configuration to Caddy JSON.\n// It returns the results and any warnings, or an error.\ntype Adapter interface {\n\tAdapt(body []byte, options map[string]any) ([]byte, []Warning, error)\n}\n\n// Warning represents a warning or notice related to conversion.\ntype Warning struct {\n\tFile      string `json:\"file,omitempty\"`\n\tLine      int    `json:\"line,omitempty\"`\n\tDirective string `json:\"directive,omitempty\"`\n\tMessage   string `json:\"message,omitempty\"`\n}\n\nfunc (w Warning) String() string {\n\tvar directive string\n\tif w.Directive != \"\" {\n\t\tdirective = fmt.Sprintf(\" (%s)\", w.Directive)\n\t}\n\treturn fmt.Sprintf(\"%s:%d%s: %s\", w.File, w.Line, directive, w.Message)\n}\n\n// JSON encodes val as JSON, returning it as a json.RawMessage. Any\n// marshaling errors (which are highly unlikely with correct code)\n// are converted to warnings. 
This is convenient when filling config\n// structs that require a json.RawMessage, without having to worry\n// about errors.\nfunc JSON(val any, warnings *[]Warning) json.RawMessage {\n\tb, err := json.Marshal(val)\n\tif err != nil {\n\t\tif warnings != nil {\n\t\t\t*warnings = append(*warnings, Warning{Message: err.Error()})\n\t\t}\n\t\treturn nil\n\t}\n\treturn b\n}\n\n// JSONModuleObject is like JSON(), except it marshals val into a JSON object\n// with an added key named fieldName with the value fieldVal. This is useful\n// for encoding module values where the module name has to be described within\n// the object by a certain key; for example, `\"handler\": \"file_server\"` for a\n// file server HTTP handler (fieldName=\"handler\" and fieldVal=\"file_server\").\n// The val parameter must encode into a map[string]any (i.e. it must be\n// a struct or map). Any errors are converted into warnings.\nfunc JSONModuleObject(val any, fieldName, fieldVal string, warnings *[]Warning) json.RawMessage {\n\t// encode to a JSON object first\n\tenc, err := json.Marshal(val)\n\tif err != nil {\n\t\tif warnings != nil {\n\t\t\t*warnings = append(*warnings, Warning{Message: err.Error()})\n\t\t}\n\t\treturn nil\n\t}\n\n\t// then decode the object\n\tvar tmp map[string]any\n\terr = json.Unmarshal(enc, &tmp)\n\tif err != nil {\n\t\tif warnings != nil {\n\t\t\tmessage := err.Error()\n\t\t\tif jsonErr, ok := err.(*json.SyntaxError); ok {\n\t\t\t\tmessage = fmt.Sprintf(\"%v, at offset %d\", jsonErr.Error(), jsonErr.Offset)\n\t\t\t}\n\t\t\t*warnings = append(*warnings, Warning{Message: message})\n\t\t}\n\t\treturn nil\n\t}\n\n\t// so we can easily add the module's field with its appointed value\n\ttmp[fieldName] = fieldVal\n\n\t// then re-marshal as JSON\n\tresult, err := json.Marshal(tmp)\n\tif err != nil {\n\t\tif warnings != nil {\n\t\t\t*warnings = append(*warnings, Warning{Message: err.Error()})\n\t\t}\n\t\treturn nil\n\t}\n\n\treturn result\n}\n\n// RegisterAdapter registers a 
config adapter with the given name.\n// This should usually be done at init-time. It panics if the\n// adapter cannot be registered successfully.\nfunc RegisterAdapter(name string, adapter Adapter) {\n\tif _, ok := configAdapters[name]; ok {\n\t\tpanic(fmt.Errorf(\"%s: already registered\", name))\n\t}\n\tconfigAdapters[name] = adapter\n\tcaddy.RegisterModule(adapterModule{name, adapter})\n}\n\n// GetAdapter returns the adapter with the given name,\n// or nil if one with that name is not registered.\nfunc GetAdapter(name string) Adapter {\n\treturn configAdapters[name]\n}\n\n// adapterModule is a wrapper type that can turn any config\n// adapter into a Caddy module, which has the benefit of being\n// counted with other modules, even though they do not\n// technically extend the Caddy configuration structure.\n// See caddyserver/caddy#3132.\ntype adapterModule struct {\n\tname string\n\tAdapter\n}\n\nfunc (am adapterModule) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  caddy.ModuleID(\"caddy.adapters.\" + am.name),\n\t\tNew: func() caddy.Module { return am },\n\t}\n}\n\nvar configAdapters = make(map[string]Adapter)\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/addresses.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"net/netip\"\n\t\"reflect\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"unicode\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// mapAddressToProtocolToServerBlocks returns a map of listener address to list of server\n// blocks that will be served on that address. To do this, each server block is\n// expanded so that each one is considered individually, although keys of a\n// server block that share the same address stay grouped together so the config\n// isn't repeated unnecessarily. For example, this Caddyfile:\n//\n//\texample.com {\n//\t\tbind 127.0.0.1\n//\t}\n//\twww.example.com, example.net/path, localhost:9999 {\n//\t\tbind 127.0.0.1 1.2.3.4\n//\t}\n//\n// has two server blocks to start with. But expressed in this Caddyfile are\n// actually 4 listener addresses: 127.0.0.1:443, 1.2.3.4:443, 127.0.0.1:9999,\n// and 127.0.0.1:9999. This is because the bind directive is applied to each\n// key of its server block (specifying the host part), and each key may have\n// a different port. 
And we definitely need to be sure that a site which is\n// bound to be served on a specific interface is not served on others just\n// because that is more convenient: it would be a potential security risk\n// if the difference between interfaces means private vs. public.\n//\n// So what this function does for the example above is iterate each server\n// block, and for each server block, iterate its keys. For the first, it\n// finds one key (example.com) and determines its listener address\n// (127.0.0.1:443 - because of 'bind' and automatic HTTPS). It then adds\n// the listener address to the map value returned by this function, with\n// the first server block as one of its associations.\n//\n// It then iterates each key on the second server block and associates them\n// with one or more listener addresses. Indeed, each key in this block has\n// two listener addresses because of the 'bind' directive. Once we know\n// which addresses serve which keys, we can create a new server block for\n// each address containing the contents of the server block and only those\n// specific keys of the server block which use that address.\n//\n// It is possible and even likely that some keys in the returned map have\n// the exact same list of server blocks (i.e. they are identical). This\n// happens when multiple hosts are declared with a 'bind' directive and\n// the resulting listener addresses are not shared by any other server\n// block (or the other server blocks are exactly identical in their token\n// contents). This happens with our example above because 1.2.3.4:443\n// and 1.2.3.4:9999 are used exclusively with the second server block. 
This\n// repetition may be undesirable, so call consolidateAddrMappings() to map\n// multiple addresses to the same lists of server blocks (a many:many mapping).\n// (Doing this is essentially a map-reduce technique.)\nfunc (st *ServerType) mapAddressToProtocolToServerBlocks(originalServerBlocks []serverBlock,\n\toptions map[string]any,\n) (map[string]map[string][]serverBlock, error) {\n\taddrToProtocolToServerBlocks := map[string]map[string][]serverBlock{}\n\n\ttype keyWithParsedKey struct {\n\t\tkey       caddyfile.Token\n\t\tparsedKey Address\n\t}\n\n\tfor i, sblock := range originalServerBlocks {\n\t\t// within a server block, we need to map all the listener addresses\n\t\t// implied by the server block to the keys of the server block which\n\t\t// will be served by them; this has the effect of treating each\n\t\t// key of a server block as its own, but without having to repeat its\n\t\t// contents in cases where multiple keys really can be served together\n\t\taddrToProtocolToKeyWithParsedKeys := map[string]map[string][]keyWithParsedKey{}\n\t\tfor j, key := range sblock.block.Keys {\n\t\t\tparsedKey, err := ParseAddress(key.Text)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"parsing key: %v\", err)\n\t\t\t}\n\t\t\tparsedKey = parsedKey.Normalize()\n\n\t\t\t// a key can have multiple listener addresses if there are multiple\n\t\t\t// arguments to the 'bind' directive (although they will all have\n\t\t\t// the same port, since the port is defined by the key or is implicit\n\t\t\t// through automatic HTTPS)\n\t\t\tlisteners, err := st.listenersForServerBlockAddress(sblock, parsedKey, options)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"server block %d, key %d (%s): determining listener address: %v\", i, j, key.Text, err)\n\t\t\t}\n\n\t\t\t// associate this key with its protocols and each listener address served with them\n\t\t\tkwpk := keyWithParsedKey{key, parsedKey}\n\t\t\tfor addr, protocols := range listeners 
{\n\t\t\t\tprotocolToKeyWithParsedKeys, ok := addrToProtocolToKeyWithParsedKeys[addr]\n\t\t\t\tif !ok {\n\t\t\t\t\tprotocolToKeyWithParsedKeys = map[string][]keyWithParsedKey{}\n\t\t\t\t\taddrToProtocolToKeyWithParsedKeys[addr] = protocolToKeyWithParsedKeys\n\t\t\t\t}\n\n\t\t\t\t// an empty protocol indicates the default, a nil or empty value in the ListenProtocols array\n\t\t\t\tif len(protocols) == 0 {\n\t\t\t\t\tprotocols[\"\"] = struct{}{}\n\t\t\t\t}\n\t\t\t\tfor prot := range protocols {\n\t\t\t\t\tprotocolToKeyWithParsedKeys[prot] = append(\n\t\t\t\t\t\tprotocolToKeyWithParsedKeys[prot],\n\t\t\t\t\t\tkwpk)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// make a slice of the map keys so we can iterate in sorted order\n\t\taddrs := make([]string, 0, len(addrToProtocolToKeyWithParsedKeys))\n\t\tfor addr := range addrToProtocolToKeyWithParsedKeys {\n\t\t\taddrs = append(addrs, addr)\n\t\t}\n\t\tsort.Strings(addrs)\n\n\t\t// now that we know which addresses serve which keys of this\n\t\t// server block, we iterate that mapping and create a list of\n\t\t// new server blocks for each address where the keys of the\n\t\t// server block are only the ones which use the address; but\n\t\t// the contents (tokens) are of course the same\n\t\tfor _, addr := range addrs {\n\t\t\tprotocolToKeyWithParsedKeys := addrToProtocolToKeyWithParsedKeys[addr]\n\n\t\t\tprots := make([]string, 0, len(protocolToKeyWithParsedKeys))\n\t\t\tfor prot := range protocolToKeyWithParsedKeys {\n\t\t\t\tprots = append(prots, prot)\n\t\t\t}\n\t\t\tsort.Strings(prots)\n\n\t\t\tprotocolToServerBlocks, ok := addrToProtocolToServerBlocks[addr]\n\t\t\tif !ok {\n\t\t\t\tprotocolToServerBlocks = map[string][]serverBlock{}\n\t\t\t\taddrToProtocolToServerBlocks[addr] = protocolToServerBlocks\n\t\t\t}\n\n\t\t\tfor _, prot := range prots {\n\t\t\t\tkeyWithParsedKeys := protocolToKeyWithParsedKeys[prot]\n\n\t\t\t\tkeys := make([]caddyfile.Token, len(keyWithParsedKeys))\n\t\t\t\tparsedKeys := make([]Address, 
len(keyWithParsedKeys))\n\n\t\t\t\tfor k, keyWithParsedKey := range keyWithParsedKeys {\n\t\t\t\t\tkeys[k] = keyWithParsedKey.key\n\t\t\t\t\tparsedKeys[k] = keyWithParsedKey.parsedKey\n\t\t\t\t}\n\n\t\t\t\tprotocolToServerBlocks[prot] = append(protocolToServerBlocks[prot], serverBlock{\n\t\t\t\t\tblock: caddyfile.ServerBlock{\n\t\t\t\t\t\tKeys:     keys,\n\t\t\t\t\t\tSegments: sblock.block.Segments,\n\t\t\t\t\t},\n\t\t\t\t\tpile:       sblock.pile,\n\t\t\t\t\tparsedKeys: parsedKeys,\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n\n\treturn addrToProtocolToServerBlocks, nil\n}\n\n// consolidateAddrMappings eliminates repetition of identical server blocks in a mapping of\n// single listener addresses to protocols to lists of server blocks. Since multiple addresses\n// may serve multiple protocols to identical sites (server block contents), this function turns\n// a 1:many mapping into a many:many mapping. Server block contents (tokens) must be\n// exactly identical so that reflect.DeepEqual returns true in order for the addresses to be combined.\n// Identical entries are deleted from the addrToServerBlocks map. Essentially, each pairing (each\n// association from multiple addresses to multiple server blocks; i.e. 
each element of\n// the returned slice) becomes a server definition in the output JSON.\nfunc (st *ServerType) consolidateAddrMappings(addrToProtocolToServerBlocks map[string]map[string][]serverBlock) []sbAddrAssociation {\n\tsbaddrs := make([]sbAddrAssociation, 0, len(addrToProtocolToServerBlocks))\n\n\taddrs := make([]string, 0, len(addrToProtocolToServerBlocks))\n\tfor addr := range addrToProtocolToServerBlocks {\n\t\taddrs = append(addrs, addr)\n\t}\n\tsort.Strings(addrs)\n\n\tfor _, addr := range addrs {\n\t\tprotocolToServerBlocks := addrToProtocolToServerBlocks[addr]\n\n\t\tprots := make([]string, 0, len(protocolToServerBlocks))\n\t\tfor prot := range protocolToServerBlocks {\n\t\t\tprots = append(prots, prot)\n\t\t}\n\t\tsort.Strings(prots)\n\n\t\tfor _, prot := range prots {\n\t\t\tserverBlocks := protocolToServerBlocks[prot]\n\n\t\t\t// now find other addresses that map to identical\n\t\t\t// server blocks and add them to our map of listener\n\t\t\t// addresses and protocols, while removing them from\n\t\t\t// the original map\n\t\t\tlisteners := map[string]map[string]struct{}{}\n\n\t\t\tfor otherAddr, otherProtocolToServerBlocks := range addrToProtocolToServerBlocks {\n\t\t\t\tfor otherProt, otherServerBlocks := range otherProtocolToServerBlocks {\n\t\t\t\t\tif addr == otherAddr && prot == otherProt || reflect.DeepEqual(serverBlocks, otherServerBlocks) {\n\t\t\t\t\t\tlistener, ok := listeners[otherAddr]\n\t\t\t\t\t\tif !ok {\n\t\t\t\t\t\t\tlistener = map[string]struct{}{}\n\t\t\t\t\t\t\tlisteners[otherAddr] = listener\n\t\t\t\t\t\t}\n\t\t\t\t\t\tlistener[otherProt] = struct{}{}\n\t\t\t\t\t\tdelete(otherProtocolToServerBlocks, otherProt)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\taddresses := make([]string, 0, len(listeners))\n\t\t\tfor lnAddr := range listeners {\n\t\t\t\taddresses = append(addresses, lnAddr)\n\t\t\t}\n\t\t\tsort.Strings(addresses)\n\n\t\t\taddressesWithProtocols := make([]addressWithProtocols, 0, len(listeners))\n\n\t\t\tfor _, lnAddr := 
range addresses {\n\t\t\t\tlnProts := listeners[lnAddr]\n\t\t\t\tprots := make([]string, 0, len(lnProts))\n\t\t\t\tfor prot := range lnProts {\n\t\t\t\t\tprots = append(prots, prot)\n\t\t\t\t}\n\t\t\t\tsort.Strings(prots)\n\n\t\t\t\taddressesWithProtocols = append(addressesWithProtocols, addressWithProtocols{\n\t\t\t\t\taddress:   lnAddr,\n\t\t\t\t\tprotocols: prots,\n\t\t\t\t})\n\t\t\t}\n\n\t\t\tsbaddrs = append(sbaddrs, sbAddrAssociation{\n\t\t\t\taddressesWithProtocols: addressesWithProtocols,\n\t\t\t\tserverBlocks:           serverBlocks,\n\t\t\t})\n\t\t}\n\t}\n\n\treturn sbaddrs\n}\n\n// listenersForServerBlockAddress essentially converts the Caddyfile site addresses to a map from\n// Caddy listener addresses and the protocols to serve them with to the parsed address for each server block.\nfunc (st *ServerType) listenersForServerBlockAddress(sblock serverBlock, addr Address,\n\toptions map[string]any,\n) (map[string]map[string]struct{}, error) {\n\tswitch addr.Scheme {\n\tcase \"wss\":\n\t\treturn nil, fmt.Errorf(\"the scheme wss:// is only supported in browsers; use https:// instead\")\n\tcase \"ws\":\n\t\treturn nil, fmt.Errorf(\"the scheme ws:// is only supported in browsers; use http:// instead\")\n\tcase \"https\", \"http\", \"\":\n\t\t// Do nothing or handle the valid schemes\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unsupported URL scheme %s://\", addr.Scheme)\n\t}\n\n\t// figure out the HTTP and HTTPS ports; either\n\t// use defaults, or override with user config\n\thttpPort, httpsPort := strconv.Itoa(caddyhttp.DefaultHTTPPort), strconv.Itoa(caddyhttp.DefaultHTTPSPort)\n\tif hport, ok := options[\"http_port\"]; ok {\n\t\thttpPort = strconv.Itoa(hport.(int))\n\t}\n\tif hsport, ok := options[\"https_port\"]; ok {\n\t\thttpsPort = strconv.Itoa(hsport.(int))\n\t}\n\n\t// default port is the HTTPS port\n\tlnPort := httpsPort\n\tif addr.Port != \"\" {\n\t\t// port explicitly defined\n\t\tlnPort = addr.Port\n\t} else if addr.Scheme == \"http\" {\n\t\t// port 
inferred from scheme\n\t\tlnPort = httpPort\n\t}\n\n\t// error if scheme and port combination violate convention\n\tif (addr.Scheme == \"http\" && lnPort == httpsPort) || (addr.Scheme == \"https\" && lnPort == httpPort) {\n\t\treturn nil, fmt.Errorf(\"[%s] scheme and port violate convention\", addr.String())\n\t}\n\n\t// the bind directive specifies hosts (and potentially network), and the protocols to serve them with, but is optional\n\tlnCfgVals := make([]addressesWithProtocols, 0, len(sblock.pile[\"bind\"]))\n\tfor _, cfgVal := range sblock.pile[\"bind\"] {\n\t\tif val, ok := cfgVal.Value.(addressesWithProtocols); ok {\n\t\t\tlnCfgVals = append(lnCfgVals, val)\n\t\t}\n\t}\n\tif len(lnCfgVals) == 0 {\n\t\tif defaultBindValues, ok := options[\"default_bind\"].([]ConfigValue); ok {\n\t\t\tfor _, defaultBindValue := range defaultBindValues {\n\t\t\t\tlnCfgVals = append(lnCfgVals, defaultBindValue.Value.(addressesWithProtocols))\n\t\t\t}\n\t\t} else {\n\t\t\tlnCfgVals = []addressesWithProtocols{{\n\t\t\t\taddresses: []string{\"\"},\n\t\t\t\tprotocols: nil,\n\t\t\t}}\n\t\t}\n\t}\n\n\t// use a map to prevent duplication\n\tlisteners := map[string]map[string]struct{}{}\n\tfor _, lnCfgVal := range lnCfgVals {\n\t\tfor _, lnAddr := range lnCfgVal.addresses {\n\t\t\tlnNetw, lnHost, _, err := caddy.SplitNetworkAddress(lnAddr)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"splitting listener address: %v\", err)\n\t\t\t}\n\t\t\tnetworkAddr, err := caddy.ParseNetworkAddress(caddy.JoinNetworkAddress(lnNetw, lnHost, lnPort))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"parsing network address: %v\", err)\n\t\t\t}\n\t\t\tif _, ok := listeners[networkAddr.String()]; !ok {\n\t\t\t\tlisteners[networkAddr.String()] = map[string]struct{}{}\n\t\t\t}\n\t\t\tfor _, protocol := range lnCfgVal.protocols {\n\t\t\t\tlisteners[networkAddr.String()][protocol] = struct{}{}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn listeners, nil\n}\n\n// addressesWithProtocols associates a list of listen 
addresses\n// with a list of protocols to serve them with\ntype addressesWithProtocols struct {\n\taddresses []string\n\tprotocols []string\n}\n\n// Address represents a site address. It contains\n// the original input value, and the component\n// parts of an address. The component parts may be\n// updated to the correct values as setup proceeds,\n// but the original value should never be changed.\n//\n// The Host field must be in a normalized form.\ntype Address struct {\n\tOriginal, Scheme, Host, Port, Path string\n}\n\n// ParseAddress parses an address string into a structured format with separate\n// scheme, host, port, and path portions, as well as the original input string.\nfunc ParseAddress(str string) (Address, error) {\n\tconst maxLen = 4096\n\tif len(str) > maxLen {\n\t\tstr = str[:maxLen]\n\t}\n\tremaining := strings.TrimSpace(str)\n\ta := Address{Original: remaining}\n\n\t// extract scheme\n\tsplitScheme := strings.SplitN(remaining, \"://\", 2)\n\tswitch len(splitScheme) {\n\tcase 0:\n\t\treturn a, nil\n\tcase 1:\n\t\tremaining = splitScheme[0]\n\tcase 2:\n\t\ta.Scheme = splitScheme[0]\n\t\tremaining = splitScheme[1]\n\t}\n\n\t// extract host and port\n\thostSplit := strings.SplitN(remaining, \"/\", 2)\n\tif len(hostSplit) > 0 {\n\t\thost, port, err := net.SplitHostPort(hostSplit[0])\n\t\tif err != nil {\n\t\t\thost, port, err = net.SplitHostPort(hostSplit[0] + \":\")\n\t\t\tif err != nil {\n\t\t\t\thost = hostSplit[0]\n\t\t\t}\n\t\t}\n\t\ta.Host = host\n\t\ta.Port = port\n\t}\n\tif len(hostSplit) == 2 {\n\t\t// all that remains is the path\n\t\ta.Path = \"/\" + hostSplit[1]\n\t}\n\n\t// make sure port is valid\n\tif a.Port != \"\" {\n\t\tif portNum, err := strconv.Atoi(a.Port); err != nil {\n\t\t\treturn Address{}, fmt.Errorf(\"invalid port '%s': %v\", a.Port, err)\n\t\t} else if portNum < 0 || portNum > 65535 {\n\t\t\treturn Address{}, fmt.Errorf(\"port %d is out of range\", portNum)\n\t\t}\n\t}\n\n\treturn a, nil\n}\n\n// String returns a 
human-readable form of a. It will\n// be a cleaned-up and filled-out URL string.\nfunc (a Address) String() string {\n\tif a.Host == \"\" && a.Port == \"\" {\n\t\treturn \"\"\n\t}\n\tscheme := a.Scheme\n\tif scheme == \"\" {\n\t\tif a.Port == strconv.Itoa(certmagic.HTTPSPort) {\n\t\t\tscheme = \"https\"\n\t\t} else {\n\t\t\tscheme = \"http\"\n\t\t}\n\t}\n\ts := scheme\n\tif s != \"\" {\n\t\ts += \"://\"\n\t}\n\tif a.Port != \"\" &&\n\t\t((scheme == \"https\" && a.Port != strconv.Itoa(caddyhttp.DefaultHTTPSPort)) ||\n\t\t\t(scheme == \"http\" && a.Port != strconv.Itoa(caddyhttp.DefaultHTTPPort))) {\n\t\ts += net.JoinHostPort(a.Host, a.Port)\n\t} else {\n\t\ts += a.Host\n\t}\n\tif a.Path != \"\" {\n\t\ts += a.Path\n\t}\n\treturn s\n}\n\n// Normalize returns a normalized version of a.\nfunc (a Address) Normalize() Address {\n\tpath := a.Path\n\n\t// ensure host is normalized if it's an IP address\n\thost := strings.TrimSpace(a.Host)\n\tif ip, err := netip.ParseAddr(host); err == nil {\n\t\tif ip.Is6() && !ip.Is4() && !ip.Is4In6() {\n\t\t\thost = ip.String()\n\t\t}\n\t}\n\n\treturn Address{\n\t\tOriginal: a.Original,\n\t\tScheme:   lowerExceptPlaceholders(a.Scheme),\n\t\tHost:     lowerExceptPlaceholders(host),\n\t\tPort:     a.Port,\n\t\tPath:     path,\n\t}\n}\n\n// lowerExceptPlaceholders lowercases s except within\n// placeholders (substrings in non-escaped '{ }' spans).\n// See https://github.com/caddyserver/caddy/issues/3264\nfunc lowerExceptPlaceholders(s string) string {\n\tvar sb strings.Builder\n\tvar escaped, inPlaceholder bool\n\tfor _, ch := range s {\n\t\tif ch == '\\\\' && !escaped {\n\t\t\tescaped = true\n\t\t\tsb.WriteRune(ch)\n\t\t\tcontinue\n\t\t}\n\t\tif ch == '{' && !escaped {\n\t\t\tinPlaceholder = true\n\t\t}\n\t\tif ch == '}' && inPlaceholder && !escaped {\n\t\t\tinPlaceholder = false\n\t\t}\n\t\tif inPlaceholder {\n\t\t\tsb.WriteRune(ch)\n\t\t} else {\n\t\t\tsb.WriteRune(unicode.ToLower(ch))\n\t\t}\n\t\tescaped = false\n\t}\n\treturn 
sb.String()\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/addresses_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage httpcaddyfile\n\nfunc FuzzParseAddress(data []byte) int {\n\taddr, err := ParseAddress(string(data))\n\tif err != nil {\n\t\tif addr == (Address{}) {\n\t\t\treturn 1\n\t\t}\n\t\treturn 0\n\t}\n\treturn 1\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/addresses_test.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"testing\"\n)\n\nfunc TestParseAddress(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tinput                    string\n\t\tscheme, host, port, path string\n\t\tshouldErr                bool\n\t}{\n\t\t{``, \"\", \"\", \"\", \"\", false},\n\t\t{`localhost`, \"\", \"localhost\", \"\", \"\", false},\n\t\t{`localhost:1234`, \"\", \"localhost\", \"1234\", \"\", false},\n\t\t{`localhost:`, \"\", \"localhost\", \"\", \"\", false},\n\t\t{`0.0.0.0`, \"\", \"0.0.0.0\", \"\", \"\", false},\n\t\t{`127.0.0.1:1234`, \"\", \"127.0.0.1\", \"1234\", \"\", false},\n\t\t{`:1234`, \"\", \"\", \"1234\", \"\", false},\n\t\t{`[::1]`, \"\", \"::1\", \"\", \"\", false},\n\t\t{`[::1]:1234`, \"\", \"::1\", \"1234\", \"\", false},\n\t\t{`:`, \"\", \"\", \"\", \"\", false},\n\t\t{`:http`, \"\", \"\", \"\", \"\", true},\n\t\t{`:https`, \"\", \"\", \"\", \"\", true},\n\t\t{`localhost:http`, \"\", \"\", \"\", \"\", true}, // using service name in port is verboten, as of Go 1.12.8\n\t\t{`localhost:https`, \"\", \"\", \"\", \"\", true},\n\t\t{`http://localhost:https`, \"\", \"\", \"\", \"\", true}, // conflict\n\t\t{`http://localhost:http`, \"\", \"\", \"\", \"\", true},  // repeated scheme\n\t\t{`host:https/path`, \"\", \"\", \"\", \"\", true},\n\t\t{`http://localhost:443`, \"http\", \"localhost\", \"443\", \"\", false}, // NOTE: not conventional\n\t\t{`https://localhost:80`, \"https\", \"localhost\", \"80\", \"\", false}, // NOTE: not conventional\n\t\t{`http://localhost`, \"http\", \"localhost\", \"\", \"\", false},\n\t\t{`https://localhost`, \"https\", \"localhost\", \"\", \"\", false},\n\t\t{`http://{env.APP_DOMAIN}`, \"http\", \"{env.APP_DOMAIN}\", \"\", \"\", false},\n\t\t{`{env.APP_DOMAIN}:80`, \"\", \"{env.APP_DOMAIN}\", \"80\", \"\", false},\n\t\t{`{env.APP_DOMAIN}/path`, \"\", \"{env.APP_DOMAIN}\", \"\", \"/path\", false},\n\t\t{`example.com/{env.APP_PATH}`, \"\", \"example.com\", \"\", \"/{env.APP_PATH}\", 
false},\n\t\t{`http://127.0.0.1`, \"http\", \"127.0.0.1\", \"\", \"\", false},\n\t\t{`https://127.0.0.1`, \"https\", \"127.0.0.1\", \"\", \"\", false},\n\t\t{`http://[::1]`, \"http\", \"::1\", \"\", \"\", false},\n\t\t{`http://localhost:1234`, \"http\", \"localhost\", \"1234\", \"\", false},\n\t\t{`https://127.0.0.1:1234`, \"https\", \"127.0.0.1\", \"1234\", \"\", false},\n\t\t{`http://[::1]:1234`, \"http\", \"::1\", \"1234\", \"\", false},\n\t\t{``, \"\", \"\", \"\", \"\", false},\n\t\t{`::1`, \"\", \"::1\", \"\", \"\", false},\n\t\t{`localhost::`, \"\", \"localhost::\", \"\", \"\", false},\n\t\t{`#$%@`, \"\", \"#$%@\", \"\", \"\", false}, // don't want to presume what the hostname could be\n\t\t{`host/path`, \"\", \"host\", \"\", \"/path\", false},\n\t\t{`http://host/`, \"http\", \"host\", \"\", \"/\", false},\n\t\t{`//asdf`, \"\", \"\", \"\", \"//asdf\", false},\n\t\t{`:1234/asdf`, \"\", \"\", \"1234\", \"/asdf\", false},\n\t\t{`http://host/path`, \"http\", \"host\", \"\", \"/path\", false},\n\t\t{`https://host:443/path/foo`, \"https\", \"host\", \"443\", \"/path/foo\", false},\n\t\t{`host:80/path`, \"\", \"host\", \"80\", \"/path\", false},\n\t\t{`/path`, \"\", \"\", \"\", \"/path\", false},\n\t} {\n\t\tactual, err := ParseAddress(test.input)\n\n\t\tif err != nil && !test.shouldErr {\n\t\t\tt.Errorf(\"Test %d (%s): Expected no error, but had error: %v\", i, test.input, err)\n\t\t}\n\t\tif err == nil && test.shouldErr {\n\t\t\tt.Errorf(\"Test %d (%s): Expected error, but had none (%#v)\", i, test.input, actual)\n\t\t}\n\n\t\tif !test.shouldErr && actual.Original != test.input {\n\t\t\tt.Errorf(\"Test %d (%s): Expected original '%s', got '%s'\", i, test.input, test.input, actual.Original)\n\t\t}\n\t\tif actual.Scheme != test.scheme {\n\t\t\tt.Errorf(\"Test %d (%s): Expected scheme '%s', got '%s'\", i, test.input, test.scheme, actual.Scheme)\n\t\t}\n\t\tif actual.Host != test.host {\n\t\t\tt.Errorf(\"Test %d (%s): Expected host '%s', got '%s'\", i, test.input, 
test.host, actual.Host)\n\t\t}\n\t\tif actual.Port != test.port {\n\t\t\tt.Errorf(\"Test %d (%s): Expected port '%s', got '%s'\", i, test.input, test.port, actual.Port)\n\t\t}\n\t\tif actual.Path != test.path {\n\t\t\tt.Errorf(\"Test %d (%s): Expected path '%s', got '%s'\", i, test.input, test.path, actual.Path)\n\t\t}\n\t}\n}\n\nfunc TestAddressString(t *testing.T) {\n\tfor i, test := range []struct {\n\t\taddr     Address\n\t\texpected string\n\t}{\n\t\t{Address{Scheme: \"http\", Host: \"host\", Port: \"1234\", Path: \"/path\"}, \"http://host:1234/path\"},\n\t\t{Address{Scheme: \"\", Host: \"host\", Port: \"\", Path: \"\"}, \"http://host\"},\n\t\t{Address{Scheme: \"\", Host: \"host\", Port: \"80\", Path: \"\"}, \"http://host\"},\n\t\t{Address{Scheme: \"\", Host: \"host\", Port: \"443\", Path: \"\"}, \"https://host\"},\n\t\t{Address{Scheme: \"https\", Host: \"host\", Port: \"443\", Path: \"\"}, \"https://host\"},\n\t\t{Address{Scheme: \"https\", Host: \"host\", Port: \"\", Path: \"\"}, \"https://host\"},\n\t\t{Address{Scheme: \"\", Host: \"host\", Port: \"80\", Path: \"/path\"}, \"http://host/path\"},\n\t\t{Address{Scheme: \"http\", Host: \"\", Port: \"1234\", Path: \"\"}, \"http://:1234\"},\n\t\t{Address{Scheme: \"\", Host: \"\", Port: \"\", Path: \"\"}, \"\"},\n\t} {\n\t\tactual := test.addr.String()\n\t\tif actual != test.expected {\n\t\t\tt.Errorf(\"Test %d: expected '%s' but got '%s'\", i, test.expected, actual)\n\t\t}\n\t}\n}\n\nfunc TestKeyNormalization(t *testing.T) {\n\ttestCases := []struct {\n\t\tinput  string\n\t\texpect Address\n\t}{\n\t\t{\n\t\t\tinput: \"example.com\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"example.com\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"http://host:1234/path\",\n\t\t\texpect: Address{\n\t\t\t\tScheme: \"http\",\n\t\t\t\tHost:   \"host\",\n\t\t\t\tPort:   \"1234\",\n\t\t\t\tPath:   \"/path\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"HTTP://A/ABCDEF\",\n\t\t\texpect: Address{\n\t\t\t\tScheme: \"http\",\n\t\t\t\tHost:   
\"a\",\n\t\t\t\tPath:   \"/ABCDEF\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"A/ABCDEF\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"a\",\n\t\t\t\tPath: \"/ABCDEF\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"A:2015/Path\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"a\",\n\t\t\t\tPort: \"2015\",\n\t\t\t\tPath: \"/Path\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"sub.{env.MY_DOMAIN}\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"sub.{env.MY_DOMAIN}\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"sub.ExAmPle\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"sub.example\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"sub.\\\\{env.MY_DOMAIN\\\\}\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"sub.\\\\{env.my_domain\\\\}\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"sub.{env.MY_DOMAIN}.com\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"sub.{env.MY_DOMAIN}.com\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \":80\",\n\t\t\texpect: Address{\n\t\t\t\tPort: \"80\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \":443\",\n\t\t\texpect: Address{\n\t\t\t\tPort: \"443\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \":1234\",\n\t\t\texpect: Address{\n\t\t\t\tPort: \"1234\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:  \"\",\n\t\t\texpect: Address{},\n\t\t},\n\t\t{\n\t\t\tinput:  \":\",\n\t\t\texpect: Address{},\n\t\t},\n\t\t{\n\t\t\tinput: \"[::]\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"::\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"127.0.0.1\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"127.0.0.1\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"[2001:db8:85a3:8d3:1319:8a2e:370:7348]:1234\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"2001:db8:85a3:8d3:1319:8a2e:370:7348\",\n\t\t\t\tPort: \"1234\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t// IPv4 address in IPv6 form (#4381)\n\t\t\tinput: \"[::ffff:cff4:e77d]:1234\",\n\t\t\texpect: Address{\n\t\t\t\tHost: \"::ffff:cff4:e77d\",\n\t\t\t\tPort: \"1234\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"::ffff:cff4:e77d\",\n\t\t\texpect: Address{\n\t\t\t\tHost: 
\"::ffff:cff4:e77d\",\n\t\t\t},\n\t\t},\n\t}\n\tfor i, tc := range testCases {\n\t\taddr, err := ParseAddress(tc.input)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d: Parsing address '%s': %v\", i, tc.input, err)\n\t\t\tcontinue\n\t\t}\n\t\tactual := addr.Normalize()\n\t\tif actual.Scheme != tc.expect.Scheme {\n\t\t\tt.Errorf(\"Test %d: Input '%s': Expected Scheme='%s' but got Scheme='%s'\", i, tc.input, tc.expect.Scheme, actual.Scheme)\n\t\t}\n\t\tif actual.Host != tc.expect.Host {\n\t\t\tt.Errorf(\"Test %d: Input '%s': Expected Host='%s' but got Host='%s'\", i, tc.input, tc.expect.Host, actual.Host)\n\t\t}\n\t\tif actual.Port != tc.expect.Port {\n\t\t\tt.Errorf(\"Test %d: Input '%s': Expected Port='%s' but got Port='%s'\", i, tc.input, tc.expect.Port, actual.Port)\n\t\t}\n\t\tif actual.Path != tc.expect.Path {\n\t\t\tt.Errorf(\"Test %d: Input '%s': Expected Path='%s' but got Path='%s'\", i, tc.input, tc.expect.Path, actual.Path)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/builtins.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"html\"\n\t\"net/http\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/mholt/acmez/v3/acme\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tRegisterDirective(\"bind\", parseBind)\n\tRegisterDirective(\"tls\", parseTLS)\n\tRegisterHandlerDirective(\"fs\", parseFilesystem)\n\tRegisterDirective(\"root\", parseRoot)\n\tRegisterHandlerDirective(\"vars\", parseVars)\n\tRegisterHandlerDirective(\"redir\", parseRedir)\n\tRegisterHandlerDirective(\"respond\", parseRespond)\n\tRegisterHandlerDirective(\"abort\", parseAbort)\n\tRegisterHandlerDirective(\"error\", parseError)\n\tRegisterHandlerDirective(\"route\", parseRoute)\n\tRegisterHandlerDirective(\"handle\", parseHandle)\n\tRegisterDirective(\"handle_errors\", parseHandleErrors)\n\tRegisterHandlerDirective(\"invoke\", parseInvoke)\n\tRegisterDirective(\"log\", parseLog)\n\tRegisterHandlerDirective(\"skip_log\", parseLogSkip)\n\tRegisterHandlerDirective(\"log_skip\", 
parseLogSkip)\n\tRegisterHandlerDirective(\"log_name\", parseLogName)\n}\n\n// parseBind parses the bind directive. Syntax:\n//\n//\tbind <addresses...> [{\n//\t    protocols [h1|h2|h2c|h3] [...]\n//\t}]\nfunc parseBind(h Helper) ([]ConfigValue, error) {\n\th.Next() // consume directive name\n\tvar addresses, protocols []string\n\taddresses = h.RemainingArgs()\n\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"protocols\":\n\t\t\tprotocols = h.RemainingArgs()\n\t\t\tif len(protocols) == 0 {\n\t\t\t\treturn nil, h.Errf(\"protocols requires one or more arguments\")\n\t\t\t}\n\t\tdefault:\n\t\t\treturn nil, h.Errf(\"unknown subdirective: %s\", h.Val())\n\t\t}\n\t}\n\n\treturn []ConfigValue{{Class: \"bind\", Value: addressesWithProtocols{\n\t\taddresses: addresses,\n\t\tprotocols: protocols,\n\t}}}, nil\n}\n\n// parseTLS parses the tls directive. Syntax:\n//\n//\ttls [<email>|internal|force_automate]|[<cert_file> <key_file>] {\n//\t    protocols <min> [<max>]\n//\t    ciphers   <cipher_suites...>\n//\t    curves    <curves...>\n//\t    client_auth {\n//\t        mode                   [request|require|verify_if_given|require_and_verify]\n//\t        trust_pool             <module_name> [...]\n//\t        trusted_leaf_cert      <base64_der>\n//\t        trusted_leaf_cert_file <filename>\n//\t    }\n//\t    alpn                          <values...>\n//\t    load                          <paths...>\n//\t    ca                            <acme_ca_endpoint>\n//\t    ca_root                       <pem_file>\n//\t    key_type                      [ed25519|p256|p384|rsa2048|rsa4096]\n//\t    dns                           [<provider_name> [...]]    (required if DNS is not configured as a global option)\n//\t    propagation_delay             <duration>\n//\t    propagation_timeout           <duration>\n//\t    resolvers                     <dns_servers...>\n//\t    dns_ttl                       <duration>\n//\t    dns_challenge_override_domain <domain>\n//\t    
on_demand\n//\t    reuse_private_keys\n//\t    force_automate\n//\t    eab                           <key_id> <mac_key>\n//\t    issuer                        <module_name> [...]\n//\t    get_certificate               <module_name> [...]\n//\t    insecure_secrets_log          <log_file>\n//\t    renewal_window_ratio          <ratio>\n//\t}\nfunc parseTLS(h Helper) ([]ConfigValue, error) {\n\th.Next() // consume directive name\n\n\tcp := new(caddytls.ConnectionPolicy)\n\tvar fileLoader caddytls.FileLoader\n\tvar folderLoader caddytls.FolderLoader\n\tvar certSelector caddytls.CustomCertSelectionPolicy\n\tvar acmeIssuer *caddytls.ACMEIssuer\n\tvar keyType string\n\tvar internalIssuer *caddytls.InternalIssuer\n\tvar issuers []certmagic.Issuer\n\tvar certManagers []certmagic.Manager\n\tvar onDemand bool\n\tvar reusePrivateKeys bool\n\tvar forceAutomate bool\n\tvar renewalWindowRatio float64\n\n\t// Track which DNS challenge options are set\n\tvar dnsOptionsSet []string\n\n\tfirstLine := h.RemainingArgs()\n\tswitch len(firstLine) {\n\tcase 0:\n\tcase 1:\n\t\tif firstLine[0] == \"internal\" {\n\t\t\tinternalIssuer = new(caddytls.InternalIssuer)\n\t\t} else if firstLine[0] == \"force_automate\" {\n\t\t\tforceAutomate = true\n\t\t} else if !strings.Contains(firstLine[0], \"@\") {\n\t\t\treturn nil, h.Err(\"single argument must either be 'internal', 'force_automate', or an email address\")\n\t\t} else {\n\t\t\tacmeIssuer = &caddytls.ACMEIssuer{\n\t\t\t\tEmail: firstLine[0],\n\t\t\t}\n\t\t}\n\n\tcase 2:\n\t\t// file certificate loader\n\t\tcertFilename := firstLine[0]\n\t\tkeyFilename := firstLine[1]\n\n\t\t// tag this certificate so if multiple certs match, specifically\n\t\t// this one that the user has provided will be used, see #2588:\n\t\t// https://github.com/caddyserver/caddy/issues/2588 ... 
but we\n\t\t// must be careful about how we do this; being careless will\n\t\t// lead to failed handshakes\n\t\t//\n\t\t// we need to remember which cert files we've seen, since we\n\t\t// must load each cert only once; otherwise, they each get a\n\t\t// different tag... since a cert loaded twice has the same\n\t\t// bytes, it will overwrite the first one in the cache, and\n\t\t// only the last cert (and its tag) will survive, so any conn\n\t\t// policy that is looking for any tag other than the last one\n\t\t// to be loaded won't find it, and TLS handshakes will fail\n\t\t// (see end of issue #3004)\n\t\t//\n\t\t// tlsCertTags maps certificate filenames to their tag.\n\t\t// This is used to remember which tag is used for each\n\t\t// certificate files, since we need to avoid loading\n\t\t// the same certificate files more than once, overwriting\n\t\t// previous tags\n\t\ttlsCertTags, ok := h.State[\"tlsCertTags\"].(map[string]string)\n\t\tif !ok {\n\t\t\ttlsCertTags = make(map[string]string)\n\t\t\th.State[\"tlsCertTags\"] = tlsCertTags\n\t\t}\n\n\t\ttag, ok := tlsCertTags[certFilename]\n\t\tif !ok {\n\t\t\t// haven't seen this cert file yet, let's give it a tag\n\t\t\t// and add a loader for it\n\t\t\ttag = fmt.Sprintf(\"cert%d\", len(tlsCertTags))\n\t\t\tfileLoader = append(fileLoader, caddytls.CertKeyFilePair{\n\t\t\t\tCertificate: certFilename,\n\t\t\t\tKey:         keyFilename,\n\t\t\t\tTags:        []string{tag},\n\t\t\t})\n\t\t\t// remember this for next time we see this cert file\n\t\t\ttlsCertTags[certFilename] = tag\n\t\t}\n\t\tcertSelector.AnyTag = append(certSelector.AnyTag, tag)\n\n\tdefault:\n\t\treturn nil, h.ArgErr()\n\t}\n\n\tvar hasBlock bool\n\tfor h.NextBlock(0) {\n\t\thasBlock = true\n\n\t\tswitch h.Val() {\n\t\tcase \"protocols\":\n\t\t\targs := h.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn nil, h.Errf(\"protocols requires one or two arguments\")\n\t\t\t}\n\t\t\tif len(args) > 0 {\n\t\t\t\tif _, ok := 
caddytls.SupportedProtocols[args[0]]; !ok {\n\t\t\t\t\treturn nil, h.Errf(\"wrong protocol name or protocol not supported: '%s'\", args[0])\n\t\t\t\t}\n\t\t\t\tcp.ProtocolMin = args[0]\n\t\t\t}\n\t\t\tif len(args) > 1 {\n\t\t\t\tif _, ok := caddytls.SupportedProtocols[args[1]]; !ok {\n\t\t\t\t\treturn nil, h.Errf(\"wrong protocol name or protocol not supported: '%s'\", args[1])\n\t\t\t\t}\n\t\t\t\tcp.ProtocolMax = args[1]\n\t\t\t}\n\n\t\tcase \"ciphers\":\n\t\t\tfor h.NextArg() {\n\t\t\t\tif !caddytls.CipherSuiteNameSupported(h.Val()) {\n\t\t\t\t\treturn nil, h.Errf(\"wrong cipher suite name or cipher suite not supported: '%s'\", h.Val())\n\t\t\t\t}\n\t\t\t\tcp.CipherSuites = append(cp.CipherSuites, h.Val())\n\t\t\t}\n\n\t\tcase \"curves\":\n\t\t\tfor h.NextArg() {\n\t\t\t\tif _, ok := caddytls.SupportedCurves[h.Val()]; !ok {\n\t\t\t\t\treturn nil, h.Errf(\"Wrong curve name or curve not supported: '%s'\", h.Val())\n\t\t\t\t}\n\t\t\t\tcp.Curves = append(cp.Curves, h.Val())\n\t\t\t}\n\n\t\tcase \"client_auth\":\n\t\t\tcp.ClientAuthentication = &caddytls.ClientAuthentication{}\n\t\t\tif err := cp.ClientAuthentication.UnmarshalCaddyfile(h.NewFromNextSegment()); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\tcase \"alpn\":\n\t\t\targs := h.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tcp.ALPN = args\n\n\t\tcase \"load\":\n\t\t\tfolderLoader = append(folderLoader, h.RemainingArgs()...)\n\n\t\tcase \"ca\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tacmeIssuer.CA = arg[0]\n\n\t\tcase \"key_type\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tkeyType = arg[0]\n\n\t\tcase \"eab\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 2 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif acmeIssuer == nil 
{\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tacmeIssuer.ExternalAccount = &acme.EAB{\n\t\t\t\tKeyID:  arg[0],\n\t\t\t\tMACKey: arg[1],\n\t\t\t}\n\n\t\tcase \"issuer\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tmodName := h.Val()\n\t\t\tmodID := \"tls.issuance.\" + modName\n\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tissuer, ok := unm.(certmagic.Issuer)\n\t\t\tif !ok {\n\t\t\t\treturn nil, h.Errf(\"module %s (%T) is not a certmagic.Issuer\", modID, unm)\n\t\t\t}\n\t\t\tissuers = append(issuers, issuer)\n\n\t\tcase \"get_certificate\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tmodName := h.Val()\n\t\t\tmodID := \"tls.get_certificate.\" + modName\n\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tcertManager, ok := unm.(certmagic.Manager)\n\t\t\tif !ok {\n\t\t\t\treturn nil, h.Errf(\"module %s (%T) is not a certmagic.Manager\", modID, unm)\n\t\t\t}\n\t\t\tcertManagers = append(certManagers, certManager)\n\n\t\tcase \"dns\":\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t\t}\n\t\t\t// DNS provider configuration is optional, since it may be configured globally via the TLS app with global options\n\t\t\tif h.NextArg() {\n\t\t\t\tprovName := h.Val()\n\t\t\t\tmodID := \"dns.providers.\" + provName\n\t\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, modID)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tacmeIssuer.Challenges.DNS.ProviderRaw = caddyconfig.JSONModuleObject(unm, \"name\", provName, h.warnings)\n\t\t\t} else if 
h.Option(\"dns\") == nil {\n\t\t\t\t// if DNS is omitted locally, it needs to be configured globally\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\n\t\tcase \"resolvers\":\n\t\t\targs := h.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t\t}\n\t\t\tdnsOptionsSet = append(dnsOptionsSet, \"resolvers\")\n\t\t\tacmeIssuer.Challenges.DNS.Resolvers = args\n\n\t\tcase \"propagation_delay\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tdelayStr := arg[0]\n\t\t\tdelay, err := caddy.ParseDuration(delayStr)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"invalid propagation_delay duration %s: %v\", delayStr, err)\n\t\t\t}\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t\t}\n\t\t\tdnsOptionsSet = append(dnsOptionsSet, \"propagation_delay\")\n\t\t\tacmeIssuer.Challenges.DNS.PropagationDelay = caddy.Duration(delay)\n\n\t\tcase \"propagation_timeout\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\ttimeoutStr := arg[0]\n\t\t\tvar timeout time.Duration\n\t\t\tif timeoutStr == \"-1\" {\n\t\t\t\ttimeout = time.Duration(-1)\n\t\t\t} else {\n\t\t\t\tvar err error\n\t\t\t\ttimeout, err = caddy.ParseDuration(timeoutStr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, h.Errf(\"invalid propagation_timeout duration %s: %v\", timeoutStr, 
err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t\t}\n\t\t\tdnsOptionsSet = append(dnsOptionsSet, \"propagation_timeout\")\n\t\t\tacmeIssuer.Challenges.DNS.PropagationTimeout = caddy.Duration(timeout)\n\n\t\tcase \"dns_ttl\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tttlStr := arg[0]\n\t\t\tttl, err := caddy.ParseDuration(ttlStr)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"invalid dns_ttl duration %s: %v\", ttlStr, err)\n\t\t\t}\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t\t}\n\t\t\tdnsOptionsSet = append(dnsOptionsSet, \"dns_ttl\")\n\t\t\tacmeIssuer.Challenges.DNS.TTL = caddy.Duration(ttl)\n\n\t\tcase \"dns_challenge_override_domain\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif acmeIssuer == nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t}\n\t\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t\t}\n\t\t\tdnsOptionsSet = append(dnsOptionsSet, \"dns_challenge_override_domain\")\n\t\t\tacmeIssuer.Challenges.DNS.OverrideDomain = arg[0]\n\n\t\tcase \"ca_root\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif acmeIssuer == 
nil {\n\t\t\t\tacmeIssuer = new(caddytls.ACMEIssuer)\n\t\t\t}\n\t\t\tacmeIssuer.TrustedRootsPEMFiles = append(acmeIssuer.TrustedRootsPEMFiles, arg[0])\n\n\t\tcase \"on_demand\":\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tonDemand = true\n\n\t\tcase \"reuse_private_keys\":\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\treusePrivateKeys = true\n\n\t\tcase \"insecure_secrets_log\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tcp.InsecureSecretsLog = h.Val()\n\n\t\tcase \"renewal_window_ratio\":\n\t\t\targ := h.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tratio, err := strconv.ParseFloat(arg[0], 64)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"parsing renewal_window_ratio: %v\", err)\n\t\t\t}\n\t\t\tif ratio <= 0 || ratio >= 1 {\n\t\t\t\treturn nil, h.Errf(\"renewal_window_ratio must be between 0 and 1 (exclusive)\")\n\t\t\t}\n\t\t\trenewalWindowRatio = ratio\n\n\t\tdefault:\n\t\t\treturn nil, h.Errf(\"unknown subdirective: %s\", h.Val())\n\t\t}\n\t}\n\n\t// Validate DNS challenge config: any DNS challenge option except \"dns\" requires a DNS provider\n\tif acmeIssuer != nil && acmeIssuer.Challenges != nil && acmeIssuer.Challenges.DNS != nil {\n\t\tdnsCfg := acmeIssuer.Challenges.DNS\n\t\tproviderSet := dnsCfg.ProviderRaw != nil || h.Option(\"dns\") != nil || h.Option(\"acme_dns\") != nil\n\t\tif len(dnsOptionsSet) > 0 && !providerSet {\n\t\t\treturn nil, h.Errf(\n\t\t\t\t\"setting DNS challenge options [%s] requires a DNS provider (set with the 'dns' subdirective or 'acme_dns' global option)\",\n\t\t\t\tstrings.Join(dnsOptionsSet, \", \"),\n\t\t\t)\n\t\t}\n\t}\n\n\t// a naked tls directive is not allowed\n\tif len(firstLine) == 0 && !hasBlock {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\t// begin building the final config values\n\tconfigVals := []ConfigValue{}\n\n\t// certificate loaders\n\tif len(fileLoader) > 0 {\n\t\tconfigVals = 
append(configVals, ConfigValue{\n\t\t\tClass: \"tls.cert_loader\",\n\t\t\tValue: fileLoader,\n\t\t})\n\t}\n\tif len(folderLoader) > 0 {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.cert_loader\",\n\t\t\tValue: folderLoader,\n\t\t})\n\t}\n\n\t// some tls subdirectives are shortcuts that implicitly configure issuers, and the\n\t// user can also configure issuers explicitly using the issuer subdirective; the\n\t// logic to support both would likely be complex, or at least unintuitive\n\tif len(issuers) > 0 && (acmeIssuer != nil || internalIssuer != nil) {\n\t\treturn nil, h.Err(\"cannot mix issuer subdirective (explicit issuers) with other issuer-specific subdirectives (implicit issuers)\")\n\t}\n\tif acmeIssuer != nil && internalIssuer != nil {\n\t\treturn nil, h.Err(\"cannot create both ACME and internal certificate issuers\")\n\t}\n\n\t// now we should either have: explicitly-created issuers, or an implicitly-created\n\t// ACME or internal issuer, or no issuers at all\n\tswitch {\n\tcase len(issuers) > 0:\n\t\tfor _, issuer := range issuers {\n\t\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\t\tClass: \"tls.cert_issuer\",\n\t\t\t\tValue: issuer,\n\t\t\t})\n\t\t}\n\n\tcase acmeIssuer != nil:\n\t\t// implicit ACME issuers (from various subdirectives) - use defaults; there might be more than one\n\t\tdefaultIssuers := caddytls.DefaultIssuers(acmeIssuer.Email)\n\n\t\t// if an ACME CA endpoint was set, the user expects to use that specific one,\n\t\t// not any others that may be defaults, so replace all defaults with that ACME CA\n\t\tif acmeIssuer.CA != \"\" {\n\t\t\tdefaultIssuers = []certmagic.Issuer{acmeIssuer}\n\t\t}\n\n\t\tfor _, issuer := range defaultIssuers {\n\t\t\t// apply settings from the implicitly-configured ACMEIssuer to any\n\t\t\t// default ACMEIssuers, but preserve each default issuer's CA endpoint,\n\t\t\t// because, for example, if you configure the DNS challenge, it should\n\t\t\t// apply to any of the default 
ACMEIssuers, but you don't want to trample\n\t\t\t// out their unique CA endpoints\n\t\t\tif iss, ok := issuer.(*caddytls.ACMEIssuer); ok && iss != nil {\n\t\t\t\tacmeCopy := *acmeIssuer\n\t\t\t\tacmeCopy.CA = iss.CA\n\t\t\t\tissuer = &acmeCopy\n\t\t\t}\n\t\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\t\tClass: \"tls.cert_issuer\",\n\t\t\t\tValue: issuer,\n\t\t\t})\n\t\t}\n\n\tcase internalIssuer != nil:\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.cert_issuer\",\n\t\t\tValue: internalIssuer,\n\t\t})\n\t}\n\n\t// certificate key type\n\tif keyType != \"\" {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.key_type\",\n\t\t\tValue: keyType,\n\t\t})\n\t}\n\n\t// on-demand TLS\n\tif onDemand {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.on_demand\",\n\t\t\tValue: true,\n\t\t})\n\t}\n\tfor _, certManager := range certManagers {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.cert_manager\",\n\t\t\tValue: certManager,\n\t\t})\n\t}\n\n\t// reuse private keys TLS\n\tif reusePrivateKeys {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.reuse_private_keys\",\n\t\t\tValue: true,\n\t\t})\n\t}\n\n\t// renewal window ratio\n\tif renewalWindowRatio > 0 {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.renewal_window_ratio\",\n\t\t\tValue: renewalWindowRatio,\n\t\t})\n\t}\n\n\t// if enabled, the names in the site addresses will be\n\t// added to the automation policies\n\tif forceAutomate {\n\t\tconfigVals = append(configVals, ConfigValue{\n\t\t\tClass: \"tls.force_automate\",\n\t\t\tValue: true,\n\t\t})\n\t}\n\n\t// custom certificate selection\n\tif len(certSelector.AnyTag) > 0 {\n\t\tcp.CertSelection = &certSelector\n\t}\n\n\t// connection policy -- always add one, to ensure that TLS\n\t// is enabled, because this directive was used (this is\n\t// needed, for instance, when a site block has a key of\n\t// just \":5000\" - i.e. 
no hostname, and only on-demand TLS\n\t// is enabled)\n\tconfigVals = append(configVals, ConfigValue{\n\t\tClass: \"tls.connection_policy\",\n\t\tValue: cp,\n\t})\n\n\treturn configVals, nil\n}\n\n// parseRoot parses the root directive. Syntax:\n//\n//\troot [<matcher>] <path>\nfunc parseRoot(h Helper) ([]ConfigValue, error) {\n\th.Next() // consume directive name\n\n\t// count the tokens to determine what to do\n\targsCount := h.CountRemainingArgs()\n\tif argsCount == 0 {\n\t\treturn nil, h.Errf(\"too few arguments; must have at least a root path\")\n\t}\n\tif argsCount > 2 {\n\t\treturn nil, h.Errf(\"too many arguments; should only be a matcher and a path\")\n\t}\n\n\t// with only one arg, assume it's a root path with no matcher token\n\tif argsCount == 1 {\n\t\tif !h.NextArg() {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t\treturn h.NewRoute(nil, caddyhttp.VarsMiddleware{\"root\": h.Val()}), nil\n\t}\n\n\t// parse the matcher token into a matcher set\n\tuserMatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\th.Next() // consume directive name again, matcher parsing does a reset\n\n\t// advance to the root path\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\t// make the route with the matcher\n\treturn h.NewRoute(userMatcherSet, caddyhttp.VarsMiddleware{\"root\": h.Val()}), nil\n}\n\n// parseFilesystem parses the fs directive. Syntax:\n//\n//\tfs <filesystem>\nfunc parseFilesystem(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\tif h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\treturn caddyhttp.VarsMiddleware{\"fs\": h.Val()}, nil\n}\n\n// parseVars parses the vars directive. 
See its UnmarshalCaddyfile method for syntax.\nfunc parseVars(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\tv := new(caddyhttp.VarsMiddleware)\n\terr := v.UnmarshalCaddyfile(h.Dispenser)\n\treturn v, err\n}\n\n// parseRedir parses the redir directive. Syntax:\n//\n//\tredir [<matcher>] <to> [<code>]\n//\n// <code> can be \"permanent\" for 301, \"temporary\" for 302 (default),\n// a placeholder, or any number in the 3xx range or 401. The special\n// code \"html\" can be used to redirect only browser clients (will\n// respond with HTTP 200 and no Location header; redirect is performed\n// with JS and a meta tag).\nfunc parseRedir(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\tto := h.Val()\n\n\tvar code string\n\tif h.NextArg() {\n\t\tcode = h.Val()\n\t}\n\n\tvar body string\n\tvar hdr http.Header\n\tswitch code {\n\tcase \"permanent\":\n\t\tcode = \"301\"\n\n\tcase \"temporary\", \"\":\n\t\tcode = \"302\"\n\n\tcase \"html\":\n\t\t// Script tag comes first since that will better imitate a redirect in the browser's\n\t\t// history, but the meta tag is a fallback for most non-JS clients.\n\t\tconst metaRedir = `<!DOCTYPE html>\n<html>\n\t<head>\n\t\t<title>Redirecting...</title>\n\t\t<script>window.location.replace(\"%s\");</script>\n\t\t<meta http-equiv=\"refresh\" content=\"0; URL='%s'\">\n\t</head>\n\t<body>Redirecting to <a href=\"%s\">%s</a>...</body>\n</html>\n`\n\t\tsafeTo := html.EscapeString(to)\n\t\tbody = fmt.Sprintf(metaRedir, safeTo, safeTo, safeTo, safeTo)\n\t\thdr = http.Header{\"Content-Type\": []string{\"text/html; charset=utf-8\"}}\n\t\tcode = \"200\" // don't redirect non-browser clients\n\n\tdefault:\n\t\t// Allow placeholders for the code\n\t\tif strings.HasPrefix(code, \"{\") {\n\t\t\tbreak\n\t\t}\n\t\t// Try to validate as an integer otherwise\n\t\tcodeInt, err := strconv.Atoi(code)\n\t\tif err != nil {\n\t\t\treturn nil, h.Errf(\"Not a 
supported redir code type or not valid integer: '%s'\", code)\n\t\t}\n\t\t// Sometimes, a 401 with Location header is desirable because\n\t\t// requests made with XHR will \"eat\" the 3xx redirect; so if\n\t\t// the intent was to redirect to an auth page, a 3xx won't\n\t\t// work. Responding with 401 allows JS code to read the\n\t\t// Location header and do a window.location redirect manually.\n\t\t// see https://stackoverflow.com/a/2573589/846934\n\t\t// see https://github.com/oauth2-proxy/oauth2-proxy/issues/1522\n\t\tif codeInt < 300 || (codeInt > 399 && codeInt != 401) {\n\t\t\treturn nil, h.Errf(\"Redir code not in the 3xx range or 401: '%v'\", codeInt)\n\t\t}\n\t}\n\n\t// don't redirect non-browser clients\n\tif code != \"200\" {\n\t\thdr = http.Header{\"Location\": []string{to}}\n\t}\n\n\treturn caddyhttp.StaticResponse{\n\t\tStatusCode: caddyhttp.WeakString(code),\n\t\tHeaders:    hdr,\n\t\tBody:       body,\n\t}, nil\n}\n\n// parseRespond parses the respond directive.\nfunc parseRespond(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\tsr := new(caddyhttp.StaticResponse)\n\terr := sr.UnmarshalCaddyfile(h.Dispenser)\n\treturn sr, err\n}\n\n// parseAbort parses the abort directive.\nfunc parseAbort(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive\n\tfor h.Next() || h.NextBlock(0) {\n\t\treturn nil, h.ArgErr()\n\t}\n\treturn &caddyhttp.StaticResponse{Abort: true}, nil\n}\n\n// parseError parses the error directive.\nfunc parseError(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\tse := new(caddyhttp.StaticError)\n\terr := se.UnmarshalCaddyfile(h.Dispenser)\n\treturn se, err\n}\n\n// parseRoute parses the route directive.\nfunc parseRoute(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\tallResults, err := parseSegmentAsConfig(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor _, result := range allResults {\n\t\tswitch result.Value.(type) {\n\t\tcase caddyhttp.Route, caddyhttp.Subroute:\n\t\tdefault:\n\t\t\treturn 
nil, h.Errf(\"%s directive returned something other than an HTTP route or subroute: %#v (only handler directives can be used in routes)\", result.directive, result.Value)\n\t\t}\n\t}\n\n\treturn buildSubroute(allResults, h.groupCounter, false)\n}\n\nfunc parseHandle(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\treturn ParseSegmentAsSubroute(h)\n}\n\nfunc parseHandleErrors(h Helper) ([]ConfigValue, error) {\n\th.Next() // consume directive name\n\n\texpression := \"\"\n\targs := h.RemainingArgs()\n\tif len(args) > 0 {\n\t\tcodes := []string{}\n\t\tfor _, val := range args {\n\t\t\tif len(val) != 3 {\n\t\t\t\treturn nil, h.Errf(\"bad status value '%s'\", val)\n\t\t\t}\n\t\t\tif strings.HasSuffix(val, \"xx\") {\n\t\t\t\tval = val[:1]\n\t\t\t\t_, err := strconv.Atoi(val)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, h.Errf(\"bad status value '%s': %v\", val, err)\n\t\t\t\t}\n\t\t\t\tif expression != \"\" {\n\t\t\t\t\texpression += \" || \"\n\t\t\t\t}\n\t\t\t\texpression += fmt.Sprintf(\"{http.error.status_code} >= %s00 && {http.error.status_code} <= %s99\", val, val)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t_, err := strconv.Atoi(val)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"bad status value '%s': %v\", val, err)\n\t\t\t}\n\t\t\tcodes = append(codes, val)\n\t\t}\n\t\tif len(codes) > 0 {\n\t\t\tif expression != \"\" {\n\t\t\t\texpression += \" || \"\n\t\t\t}\n\t\t\texpression += \"{http.error.status_code} in [\" + strings.Join(codes, \", \") + \"]\"\n\t\t}\n\t\t// Reset cursor position to get ready for ParseSegmentAsSubroute\n\t\th.Reset()\n\t\th.Next()\n\t\th.RemainingArgs()\n\t\th.Prev()\n\t} else {\n\t\t// If no arguments present reset the cursor position to get ready for ParseSegmentAsSubroute\n\t\th.Prev()\n\t}\n\n\thandler, err := ParseSegmentAsSubroute(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tsubroute, ok := handler.(*caddyhttp.Subroute)\n\tif !ok {\n\t\treturn nil, h.Errf(\"segment was not parsed as a subroute\")\n\t}\n\n\t// wrap the 
subroutes\n\twrappingRoute := caddyhttp.Route{\n\t\tHandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(subroute, \"handler\", \"subroute\", nil)},\n\t}\n\tsubroute = &caddyhttp.Subroute{\n\t\tRoutes: []caddyhttp.Route{wrappingRoute},\n\t}\n\tif expression != \"\" {\n\t\tstatusMatcher := caddy.ModuleMap{\n\t\t\t\"expression\": h.JSON(caddyhttp.MatchExpression{Expr: expression}),\n\t\t}\n\t\tsubroute.Routes[0].MatcherSetsRaw = []caddy.ModuleMap{statusMatcher}\n\t}\n\treturn []ConfigValue{\n\t\t{\n\t\t\tClass: \"error_route\",\n\t\t\tValue: subroute,\n\t\t},\n\t}, nil\n}\n\n// parseInvoke parses the invoke directive.\nfunc parseInvoke(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\tfor h.Next() || h.NextBlock(0) {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\t// remember that we're invoking this name\n\t// to populate the server with these named routes\n\tif h.State[namedRouteKey] == nil {\n\t\th.State[namedRouteKey] = map[string]struct{}{}\n\t}\n\th.State[namedRouteKey].(map[string]struct{})[h.Val()] = struct{}{}\n\n\t// return the handler\n\treturn &caddyhttp.Invoke{Name: h.Val()}, nil\n}\n\n// parseLog parses the log directive. Syntax:\n//\n//\tlog <logger_name> {\n//\t    hostnames <hostnames...>\n//\t    output <writer_module> ...\n//\t    core   <core_module> ...\n//\t    format <encoder_module> ...\n//\t    level  <level>\n//\t}\nfunc parseLog(h Helper) ([]ConfigValue, error) {\n\treturn parseLogHelper(h, nil)\n}\n\n// parseLogHelper is used both for the parseLog directive within Server Blocks,\n// as well as the global \"log\" option for configuring loggers at the global\n// level. 
The parseAsGlobalOption parameter is used to distinguish any differing logic\n// between the two.\nfunc parseLogHelper(h Helper, globalLogNames map[string]struct{}) ([]ConfigValue, error) {\n\th.Next() // consume option name\n\n\t// When the globalLogNames parameter is passed in, we make\n\t// modifications to the parsing behavior.\n\tparseAsGlobalOption := globalLogNames != nil\n\n\t// nolint:prealloc\n\tvar configValues []ConfigValue\n\n\t// Logic below expects that a name is always present when a\n\t// global option is being parsed; or an optional override\n\t// is supported for access logs.\n\tvar logName string\n\n\tif parseAsGlobalOption {\n\t\tif h.NextArg() {\n\t\t\tlogName = h.Val()\n\n\t\t\t// Only a single argument is supported.\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t} else {\n\t\t\t// If there is no log name specified, we\n\t\t\t// reference the default logger. See the\n\t\t\t// setupNewDefault function in the logging\n\t\t\t// package for where this is configured.\n\t\t\tlogName = caddy.DefaultLoggerName\n\t\t}\n\n\t\t// Verify this name is unused.\n\t\t_, used := globalLogNames[logName]\n\t\tif used {\n\t\t\treturn nil, h.Err(\"duplicate global log option for: \" + logName)\n\t\t}\n\t\tglobalLogNames[logName] = struct{}{}\n\t} else {\n\t\t// An optional override of the logger name can be provided;\n\t\t// otherwise a default will be used, like \"log0\", \"log1\", etc.\n\t\tif h.NextArg() {\n\t\t\tlogName = h.Val()\n\n\t\t\t// Only a single argument is supported.\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t}\n\t}\n\n\tcl := new(caddy.CustomLog)\n\n\t// allow overriding the current site block's hostnames for this logger;\n\t// this is useful for setting up loggers per subdomain in a site block\n\t// with a wildcard domain\n\tcustomHostnames := []string{}\n\tnoHostname := false\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"hostnames\":\n\t\t\tif parseAsGlobalOption {\n\t\t\t\treturn 
nil, h.Err(\"hostnames is not allowed in the log global options\")\n\t\t\t}\n\t\t\targs := h.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tcustomHostnames = append(customHostnames, args...)\n\n\t\tcase \"output\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tmoduleName := h.Val()\n\n\t\t\t// can't use the usual caddyfile.Unmarshaler flow with the\n\t\t\t// standard writers because they are in the caddy package\n\t\t\t// (because they are the default) and implementing that\n\t\t\t// interface there would unfortunately create circular import\n\t\t\tvar wo caddy.WriterOpener\n\t\t\tswitch moduleName {\n\t\t\tcase \"stdout\":\n\t\t\t\two = caddy.StdoutWriter{}\n\t\t\tcase \"stderr\":\n\t\t\t\two = caddy.StderrWriter{}\n\t\t\tcase \"discard\":\n\t\t\t\two = caddy.DiscardWriter{}\n\t\t\tdefault:\n\t\t\t\tmodID := \"caddy.logging.writers.\" + moduleName\n\t\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, modID)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tvar ok bool\n\t\t\t\two, ok = unm.(caddy.WriterOpener)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, h.Errf(\"module %s (%T) is not a WriterOpener\", modID, unm)\n\t\t\t\t}\n\t\t\t}\n\t\t\tcl.WriterRaw = caddyconfig.JSONModuleObject(wo, \"output\", moduleName, h.warnings)\n\n\t\tcase \"sampling\":\n\t\t\td := h.Dispenser.NewFromNextSegment()\n\t\t\tfor d.NextArg() {\n\t\t\t\t// consume any tokens on the same line, if any.\n\t\t\t}\n\n\t\t\tsampling := &caddy.LogSampling{}\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tsubdir := d.Val()\n\t\t\t\tswitch subdir {\n\t\t\t\tcase \"interval\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tinterval, err := time.ParseDuration(d.Val() + \"ns\")\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"failed to parse interval: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tsampling.Interval = interval\n\t\t\t\tcase 
\"first\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tfirst, err := strconv.Atoi(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"failed to parse first: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tsampling.First = first\n\t\t\t\tcase \"thereafter\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tthereafter, err := strconv.Atoi(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"failed to parse thereafter: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tsampling.Thereafter = thereafter\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, d.Errf(\"unrecognized subdirective: %s\", subdir)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tcl.Sampling = sampling\n\n\t\tcase \"core\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tmoduleName := h.Val()\n\t\t\tmoduleID := \"caddy.logging.cores.\" + moduleName\n\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, moduleID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tcore, ok := unm.(zapcore.Core)\n\t\t\tif !ok {\n\t\t\t\treturn nil, h.Errf(\"module %s (%T) is not a zapcore.Core\", moduleID, unm)\n\t\t\t}\n\t\t\tcl.CoreRaw = caddyconfig.JSONModuleObject(core, \"module\", moduleName, h.warnings)\n\n\t\tcase \"format\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tmoduleName := h.Val()\n\t\t\tmoduleID := \"caddy.logging.encoders.\" + moduleName\n\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, moduleID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tenc, ok := unm.(zapcore.Encoder)\n\t\t\tif !ok {\n\t\t\t\treturn nil, h.Errf(\"module %s (%T) is not a zapcore.Encoder\", moduleID, unm)\n\t\t\t}\n\t\t\tcl.EncoderRaw = caddyconfig.JSONModuleObject(enc, \"format\", moduleName, h.warnings)\n\n\t\tcase \"level\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tcl.Level = h.Val()\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, 
h.ArgErr()\n\t\t\t}\n\n\t\tcase \"include\":\n\t\t\tif !parseAsGlobalOption {\n\t\t\t\treturn nil, h.Err(\"include is not allowed in the log directive\")\n\t\t\t}\n\t\t\tfor h.NextArg() {\n\t\t\t\tcl.Include = append(cl.Include, h.Val())\n\t\t\t}\n\n\t\tcase \"exclude\":\n\t\t\tif !parseAsGlobalOption {\n\t\t\t\treturn nil, h.Err(\"exclude is not allowed in the log directive\")\n\t\t\t}\n\t\t\tfor h.NextArg() {\n\t\t\t\tcl.Exclude = append(cl.Exclude, h.Val())\n\t\t\t}\n\n\t\tcase \"no_hostname\":\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tnoHostname = true\n\n\t\tdefault:\n\t\t\treturn nil, h.Errf(\"unrecognized subdirective: %s\", h.Val())\n\t\t}\n\t}\n\n\tvar val namedCustomLog\n\tval.hostnames = customHostnames\n\tval.noHostname = noHostname\n\tisEmptyConfig := reflect.DeepEqual(cl, new(caddy.CustomLog))\n\n\t// Skip handling of empty logging configs\n\n\tif parseAsGlobalOption {\n\t\t// Use indicated name for global log options\n\t\tval.name = logName\n\t} else {\n\t\tif logName != \"\" {\n\t\t\tval.name = logName\n\t\t} else if !isEmptyConfig {\n\t\t\t// Construct a log name for server log streams\n\t\t\tlogCounter, ok := h.State[\"logCounter\"].(int)\n\t\t\tif !ok {\n\t\t\t\tlogCounter = 0\n\t\t\t}\n\t\t\tval.name = fmt.Sprintf(\"log%d\", logCounter)\n\t\t\tlogCounter++\n\t\t\th.State[\"logCounter\"] = logCounter\n\t\t}\n\t\tif val.name != \"\" {\n\t\t\tcl.Include = []string{\"http.log.access.\" + val.name}\n\t\t}\n\t}\n\tif !isEmptyConfig {\n\t\tval.log = cl\n\t}\n\tconfigValues = append(configValues, ConfigValue{\n\t\tClass: \"custom_log\",\n\t\tValue: val,\n\t})\n\treturn configValues, nil\n}\n\n// parseLogSkip parses the log_skip directive. 
Syntax:\n//\n//\tlog_skip [<matcher>]\nfunc parseLogSkip(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\n\t// \"skip_log\" is deprecated, replaced by \"log_skip\"\n\tif h.Val() == \"skip_log\" {\n\t\tcaddy.Log().Named(\"config.adapter.caddyfile\").Warn(\"the 'skip_log' directive is deprecated, please use 'log_skip' instead!\")\n\t}\n\n\tif h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\tif h.NextBlock(0) {\n\t\treturn nil, h.Err(\"log_skip directive does not accept blocks\")\n\t}\n\n\treturn caddyhttp.VarsMiddleware{\"log_skip\": true}, nil\n}\n\n// parseLogName parses the log_name directive. Syntax:\n//\n//\tlog_name <names...>\nfunc parseLogName(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\treturn caddyhttp.VarsMiddleware{\n\t\tcaddyhttp.AccessLoggerNameVarKey: h.RemainingArgs(),\n\t}, nil\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/builtins_test.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/logging\"\n)\n\nfunc TestLogDirectiveSyntax(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput       string\n\t\toutput      string\n\t\texpectError bool\n\t}{\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tlog\n\t\t\t}\n\t\t\t`,\n\t\t\toutput:      `{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":8080\"],\"logs\":{}}}}}}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tlog {\n\t\t\t\t\tcore mock\n\t\t\t\t\toutput file foo.log\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\toutput:      `{\"logging\":{\"logs\":{\"default\":{\"exclude\":[\"http.log.access.log0\"]},\"log0\":{\"writer\":{\"filename\":\"foo.log\",\"output\":\"file\"},\"core\":{\"module\":\"mock\"},\"include\":[\"http.log.access.log0\"]}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":8080\"],\"logs\":{\"default_logger_name\":\"log0\"}}}}}}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tlog {\n\t\t\t\t\tformat filter {\n\t\t\t\t\t\twrap console\n\t\t\t\t\t\tfields {\n\t\t\t\t\t\t\trequest>remote_ip ip_mask {\n\t\t\t\t\t\t\t\tipv4 24\n\t\t\t\t\t\t\t\tipv6 32\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\toutput:      `{\"logging\":{\"logs\":{\"default\":{\"exclude\":[\"http.log.access.log0\"]},\"log0\":{\"encoder\":{\"fields\":{\"request\\u003eremote_ip\":{\"filter\":\"ip_mask\",\"ipv4_cidr\":24,\"ipv6_cidr\":32}},\"format\":\"filter\",\"wrap\":{\"format\":\"console\"}},\"include\":[\"http.log.access.log0\"]}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":8080\"],\"logs\":{\"default_logger_name\":\"log0\"}}}}}}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tlog name-override {\n\t\t\t\t\tcore mock\n\t\t\t\t\toutput file foo.log\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\toutput:    
  `{\"logging\":{\"logs\":{\"default\":{\"exclude\":[\"http.log.access.name-override\"]},\"name-override\":{\"writer\":{\"filename\":\"foo.log\",\"output\":\"file\"},\"core\":{\"module\":\"mock\"},\"include\":[\"http.log.access.name-override\"]}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":8080\"],\"logs\":{\"default_logger_name\":\"name-override\"}}}}}}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tlog {\n\t\t\t\t\tsampling {\n\t\t\t\t\t\tinterval 2\n\t\t\t\t\t\tfirst 3\n\t\t\t\t\t\tthereafter 4\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\toutput:      `{\"logging\":{\"logs\":{\"default\":{\"exclude\":[\"http.log.access.log0\"]},\"log0\":{\"sampling\":{\"interval\":2,\"first\":3,\"thereafter\":4},\"include\":[\"http.log.access.log0\"]}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":8080\"],\"logs\":{\"default_logger_name\":\"log0\"}}}}}}`,\n\t\t\texpectError: false,\n\t\t},\n\t} {\n\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\tout, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif err != nil != tc.expectError {\n\t\t\tt.Errorf(\"Test %d error expectation failed Expected: %v, got %s\", i, tc.expectError, err)\n\t\t\tcontinue\n\t\t}\n\n\t\tif string(out) != tc.output {\n\t\t\tt.Errorf(\"Test %d error output mismatch Expected: %s, got %s\", i, tc.output, out)\n\t\t}\n\t}\n}\n\nfunc TestRedirDirectiveSyntax(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput       string\n\t\texpectError bool\n\t}{\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir :8081\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /api/* :8081 300\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir :8081 300\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /api/* :8081 
399\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir :8081 399\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /old.html /new.html\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /old.html /new.html temporary\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir https://example.com{uri} permanent\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /old.html /new.html permanent\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /old.html /new.html html\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\t// this is now allowed so a Location header\n\t\t\t// can be written and consumed by JS\n\t\t\t// in the case of XHR requests\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 401\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 402\n\t\t\t}`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 {http.reverse_proxy.status_code}\n\t\t\t}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir /old.html /new.html htlm\n\t\t\t}`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 200\n\t\t\t}`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 temp\n\t\t\t}`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 perm\n\t\t\t}`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `:8080 {\n\t\t\t\tredir * :8081 php\n\t\t\t}`,\n\t\t\texpectError: true,\n\t\t},\n\t} {\n\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\t_, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif err != nil != tc.expectError {\n\t\t\tt.Errorf(\"Test %d error expectation failed Expected: %v, got %s\", 
i, tc.expectError, err)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestImportErrorLine(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput     string\n\t\terrorFunc func(err error) bool\n\t}{\n\t\t{\n\t\t\tinput: `(t1) {\n\t\t\t\t\tabort {args[:]}\n\t\t\t\t}\n\t\t\t\t:8080 {\n\t\t\t\t\timport t1\n\t\t\t\t\timport t1 true\n\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err != nil && strings.Contains(err.Error(), \"Caddyfile:6 (import t1)\")\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `(t1) {\n\t\t\t\t\tabort {args[:]}\n\t\t\t\t}\n\t\t\t\t:8080 {\n\t\t\t\t\timport t1 true\n\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err != nil && strings.Contains(err.Error(), \"Caddyfile:5 (import t1)\")\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\timport testdata/import_variadic_snippet.txt\n\t\t\t\t:8080 {\n\t\t\t\t\timport t1 true\n\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err == nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\timport testdata/import_variadic_with_import.txt\n\t\t\t\t:8080 {\n\t\t\t\t\timport t1 true\n\t\t\t\t\timport t2 true\n\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err == nil\n\t\t\t},\n\t\t},\n\t} {\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\t_, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif !tc.errorFunc(err) {\n\t\t\tt.Errorf(\"Test %d error expectation failed, got %s\", i, err)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestNestedImport(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput     string\n\t\terrorFunc func(err error) bool\n\t}{\n\t\t{\n\t\t\tinput: `(t1) {\n\t\t\t\t\t\trespond {args[0]} {args[1]}\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t(t2) {\n\t\t\t\t\t\timport t1 {args[0]} 202\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t:8080 {\n\t\t\t\t\t\thandle {\n\t\t\t\t\t\t\timport t2 \"foobar\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err == 
nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `(t1) {\n\t\t\t\t\t\trespond {args[:]}\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t(t2) {\n\t\t\t\t\t\timport t1 {args[0]} {args[1]}\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t:8080 {\n\t\t\t\t\t\thandle {\n\t\t\t\t\t\t\timport t2 \"foobar\" 202\n\t\t\t\t\t\t}\n\t\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err == nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `(t1) {\n\t\t\t\t\t\trespond {args[0]} {args[1]}\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t(t2) {\n\t\t\t\t\t\timport t1 {args[:]}\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t:8080 {\n\t\t\t\t\t\thandle {\n\t\t\t\t\t\t\timport t2 \"foobar\" 202\n\t\t\t\t\t\t}\n\t\t\t\t\t}`,\n\t\t\terrorFunc: func(err error) bool {\n\t\t\t\treturn err == nil\n\t\t\t},\n\t\t},\n\t} {\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\t_, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif !tc.errorFunc(err) {\n\t\t\tt.Errorf(\"Test %d error expectation failed, got %s\", i, err)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/directives.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"encoding/json\"\n\t\"maps\"\n\t\"net\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// defaultDirectiveOrder specifies the default order\n// to apply directives in HTTP routes. This must only\n// consist of directives that are included in Caddy's\n// standard distribution.\n//\n// e.g. The 'root' directive goes near the start in\n// case rewrites or redirects depend on existence of\n// files, i.e. the file matcher, which must know the\n// root first.\n//\n// e.g. The 'header' directive goes before 'redir' so\n// that headers can be manipulated before doing redirects.\n//\n// e.g. 
The 'respond' directive is near the end because it\n// writes a response and terminates the middleware chain.\nvar defaultDirectiveOrder = []string{\n\t\"tracing\",\n\n\t// set variables that may be used by other directives\n\t\"map\",\n\t\"vars\",\n\t\"fs\",\n\t\"root\",\n\t\"log_append\",\n\t\"skip_log\", // TODO: deprecated, renamed to log_skip\n\t\"log_skip\",\n\t\"log_name\",\n\n\t\"header\",\n\t\"copy_response_headers\", // only in reverse_proxy's handle_response\n\t\"request_body\",\n\n\t\"redir\",\n\n\t// incoming request manipulation\n\t\"method\",\n\t\"rewrite\",\n\t\"uri\",\n\t\"try_files\",\n\n\t// middleware handlers; some wrap responses\n\t\"basicauth\", // TODO: deprecated, renamed to basic_auth\n\t\"basic_auth\",\n\t\"forward_auth\",\n\t\"request_header\",\n\t\"encode\",\n\t\"push\",\n\t\"intercept\",\n\t\"templates\",\n\n\t// special routing & dispatching directives\n\t\"invoke\",\n\t\"handle\",\n\t\"handle_path\",\n\t\"route\",\n\n\t// handlers that typically respond to requests\n\t\"abort\",\n\t\"error\",\n\t\"copy_response\", // only in reverse_proxy's handle_response\n\t\"respond\",\n\t\"metrics\",\n\t\"reverse_proxy\",\n\t\"php_fastcgi\",\n\t\"file_server\",\n\t\"acme_server\",\n}\n\n// directiveOrder specifies the order to apply directives\n// in HTTP routes, after being modified by either the\n// plugins or by the user via the \"order\" global option.\nvar directiveOrder = defaultDirectiveOrder\n\n// RegisterDirective registers a unique directive dir with an\n// associated unmarshaling (setup) function. 
When directive dir\n// is encountered in a Caddyfile, setupFunc will be called to\n// unmarshal its tokens.\nfunc RegisterDirective(dir string, setupFunc UnmarshalFunc) {\n\tif _, ok := registeredDirectives[dir]; ok {\n\t\tpanic(\"directive \" + dir + \" already registered\")\n\t}\n\tregisteredDirectives[dir] = setupFunc\n}\n\n// RegisterHandlerDirective is like RegisterDirective, but for\n// directives which specifically output only an HTTP handler.\n// Directives registered with this function will always have\n// an optional matcher token as the first argument.\nfunc RegisterHandlerDirective(dir string, setupFunc UnmarshalHandlerFunc) {\n\tRegisterDirective(dir, func(h Helper) ([]ConfigValue, error) {\n\t\tif !h.Next() {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\n\t\tmatcherSet, err := h.ExtractMatcherSet()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tval, err := setupFunc(h)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\treturn h.NewRoute(matcherSet, val), nil\n\t})\n}\n\n// RegisterDirectiveOrder registers the default order for a\n// directive from a plugin.\n//\n// This is useful when a plugin has a well-understood place\n// it should run in the middleware pipeline, and it allows\n// users to avoid having to define the order themselves.\n//\n// The directive dir may be placed in the position relative\n// to ('before' or 'after') a directive included in Caddy's\n// standard distribution. 
It cannot be relative to another\n// plugin's directive.\n//\n// EXPERIMENTAL: This API may change or be removed.\nfunc RegisterDirectiveOrder(dir string, position Positional, standardDir string) {\n\t// check if directive was already ordered\n\tif slices.Contains(directiveOrder, dir) {\n\t\tpanic(\"directive '\" + dir + \"' already ordered\")\n\t}\n\n\tif position != Before && position != After {\n\t\tpanic(\"the 2nd argument must be either 'before' or 'after', got '\" + position + \"'\")\n\t}\n\n\t// check if directive exists in standard distribution, since\n\t// we can't allow plugins to depend on one another; we can't\n\t// guarantee the order that plugins are loaded in.\n\tfoundStandardDir := slices.Contains(defaultDirectiveOrder, standardDir)\n\tif !foundStandardDir {\n\t\tpanic(\"the 3rd argument '\" + standardDir + \"' must be a directive that exists in the standard distribution of Caddy\")\n\t}\n\n\t// insert directive into proper position\n\tnewOrder := directiveOrder\n\tfor i, d := range newOrder {\n\t\tif d != standardDir {\n\t\t\tcontinue\n\t\t}\n\t\tswitch position {\n\t\tcase Before:\n\t\t\tnewOrder = append(newOrder[:i], append([]string{dir}, newOrder[i:]...)...)\n\t\tcase After:\n\t\t\tnewOrder = append(newOrder[:i+1], append([]string{dir}, newOrder[i+1:]...)...)\n\t\tcase First, Last:\n\t\t}\n\t\tbreak\n\t}\n\tdirectiveOrder = newOrder\n}\n\n// RegisterGlobalOption registers a unique global option opt with\n// an associated unmarshaling (setup) function. 
When the global\n// option opt is encountered in a Caddyfile, setupFunc will be\n// called to unmarshal its tokens.\nfunc RegisterGlobalOption(opt string, setupFunc UnmarshalGlobalFunc) {\n\tif _, ok := registeredGlobalOptions[opt]; ok {\n\t\tpanic(\"global option \" + opt + \" already registered\")\n\t}\n\tregisteredGlobalOptions[opt] = setupFunc\n}\n\n// Helper is a type which helps setup a value from\n// Caddyfile tokens.\ntype Helper struct {\n\t*caddyfile.Dispenser\n\t// State stores intermediate variables during caddyfile adaptation.\n\tState        map[string]any\n\toptions      map[string]any\n\twarnings     *[]caddyconfig.Warning\n\tmatcherDefs  map[string]caddy.ModuleMap\n\tparentBlock  caddyfile.ServerBlock\n\tgroupCounter counter\n}\n\n// Option gets the option keyed by name.\nfunc (h Helper) Option(name string) any {\n\treturn h.options[name]\n}\n\n// Caddyfiles returns the list of config files from\n// which tokens in the current server block were loaded.\nfunc (h Helper) Caddyfiles() []string {\n\t// first obtain set of names of files involved\n\t// in this server block, without duplicates\n\tfiles := make(map[string]struct{})\n\tfor _, segment := range h.parentBlock.Segments {\n\t\tfor _, token := range segment {\n\t\t\tfiles[token.File] = struct{}{}\n\t\t}\n\t}\n\t// then convert the set into a slice\n\tfilesSlice := make([]string, 0, len(files))\n\tfor file := range files {\n\t\tfilesSlice = append(filesSlice, file)\n\t}\n\tsort.Strings(filesSlice)\n\treturn filesSlice\n}\n\n// JSON converts val into JSON. Any errors are added to warnings.\nfunc (h Helper) JSON(val any) json.RawMessage {\n\treturn caddyconfig.JSON(val, h.warnings)\n}\n\n// MatcherToken assumes the next argument token is (possibly) a matcher,\n// and if so, returns the matcher set along with a true value. If the next\n// token is not a matcher, nil and false is returned. 
Note that a true\n// value may be returned with a nil matcher set if it is a catch-all.\nfunc (h Helper) MatcherToken() (caddy.ModuleMap, bool, error) {\n\tif !h.NextArg() {\n\t\treturn nil, false, nil\n\t}\n\treturn matcherSetFromMatcherToken(h.Dispenser.Token(), h.matcherDefs, h.warnings)\n}\n\n// ExtractMatcherSet is like MatcherToken, except this is a higher-level\n// method that returns the matcher set described by the matcher token,\n// or nil if there is none, and deletes the matcher token from the\n// dispenser and resets it as if this look-ahead never happened. Useful\n// when wrapping a route (one or more handlers) in a user-defined matcher.\nfunc (h Helper) ExtractMatcherSet() (caddy.ModuleMap, error) {\n\tmatcherSet, hasMatcher, err := h.MatcherToken()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif hasMatcher {\n\t\t// strip matcher token; we don't need to\n\t\t// use the return value here because a\n\t\t// new dispenser should have been made\n\t\t// solely for this directive's tokens,\n\t\t// with no other uses of same slice\n\t\th.Dispenser.Delete()\n\t}\n\th.Dispenser.Reset() // pretend this lookahead never happened\n\treturn matcherSet, nil\n}\n\n// NewRoute returns config values relevant to creating a new HTTP route.\nfunc (h Helper) NewRoute(matcherSet caddy.ModuleMap,\n\thandler caddyhttp.MiddlewareHandler,\n) []ConfigValue {\n\tmod, err := caddy.GetModule(caddy.GetModuleID(handler))\n\tif err != nil {\n\t\t*h.warnings = append(*h.warnings, caddyconfig.Warning{\n\t\t\tFile:    h.File(),\n\t\t\tLine:    h.Line(),\n\t\t\tMessage: err.Error(),\n\t\t})\n\t\treturn nil\n\t}\n\tvar matcherSetsRaw []caddy.ModuleMap\n\tif matcherSet != nil {\n\t\tmatcherSetsRaw = append(matcherSetsRaw, matcherSet)\n\t}\n\treturn []ConfigValue{\n\t\t{\n\t\t\tClass: \"route\",\n\t\t\tValue: caddyhttp.Route{\n\t\t\t\tMatcherSetsRaw: matcherSetsRaw,\n\t\t\t\tHandlersRaw:    []json.RawMessage{caddyconfig.JSONModuleObject(handler, \"handler\", mod.ID.Name(), 
h.warnings)},\n\t\t\t},\n\t\t},\n\t}\n}\n\n// GroupRoutes adds the routes (caddyhttp.Route type) in vals to the\n// same group, if there is more than one route in vals.\nfunc (h Helper) GroupRoutes(vals []ConfigValue) {\n\t// ensure there's at least two routes; group of one is pointless\n\tvar count int\n\tfor _, v := range vals {\n\t\tif _, ok := v.Value.(caddyhttp.Route); ok {\n\t\t\tcount++\n\t\t\tif count > 1 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\tif count < 2 {\n\t\treturn\n\t}\n\n\t// now that we know the group will have some effect, do it\n\tgroupName := h.groupCounter.nextGroup()\n\tfor i := range vals {\n\t\tif route, ok := vals[i].Value.(caddyhttp.Route); ok {\n\t\t\troute.Group = groupName\n\t\t\tvals[i].Value = route\n\t\t}\n\t}\n}\n\n// WithDispenser returns a new instance based on d. All others Helper\n// fields are copied, so typically maps are shared with this new instance.\nfunc (h Helper) WithDispenser(d *caddyfile.Dispenser) Helper {\n\th.Dispenser = d\n\treturn h\n}\n\n// ParseSegmentAsSubroute parses the segment such that its subdirectives\n// are themselves treated as directives, from which a subroute is built\n// and returned.\nfunc ParseSegmentAsSubroute(h Helper) (caddyhttp.MiddlewareHandler, error) {\n\tallResults, err := parseSegmentAsConfig(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn buildSubroute(allResults, h.groupCounter, true)\n}\n\n// parseSegmentAsConfig parses the segment such that its subdirectives\n// are themselves treated as directives, including named matcher definitions,\n// and the raw Config structs are returned.\nfunc parseSegmentAsConfig(h Helper) ([]ConfigValue, error) {\n\tvar allResults []ConfigValue\n\n\tfor h.Next() {\n\t\t// don't allow non-matcher args on the first line\n\t\tif h.NextArg() {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\n\t\t// slice the linear list of tokens into top-level segments\n\t\tvar segments []caddyfile.Segment\n\t\tfor nesting := h.Nesting(); h.NextBlock(nesting); 
{\n\t\t\tsegments = append(segments, h.NextSegment())\n\t\t}\n\n\t\t// copy existing matcher definitions so we can augment\n\t\t// new ones that are defined only in this scope\n\t\tmatcherDefs := make(map[string]caddy.ModuleMap, len(h.matcherDefs))\n\t\tmaps.Copy(matcherDefs, h.matcherDefs)\n\n\t\t// find and extract any embedded matcher definitions in this scope\n\t\tfor i := 0; i < len(segments); i++ {\n\t\t\tseg := segments[i]\n\t\t\tif strings.HasPrefix(seg.Directive(), matcherPrefix) {\n\t\t\t\t// parse, then add the matcher to matcherDefs\n\t\t\t\terr := parseMatcherDefinitions(caddyfile.NewDispenser(seg), matcherDefs)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\t// remove the matcher segment (consumed), then step back the loop\n\t\t\t\tsegments = append(segments[:i], segments[i+1:]...)\n\t\t\t\ti--\n\t\t\t}\n\t\t}\n\n\t\t// with matchers ready to go, evaluate each directive's segment\n\t\tfor _, seg := range segments {\n\t\t\tdir := seg.Directive()\n\t\t\tdirFunc, ok := registeredDirectives[dir]\n\t\t\tif !ok {\n\t\t\t\treturn nil, h.Errf(\"unrecognized directive: %s - are you sure your Caddyfile structure (nesting and braces) is correct?\", dir)\n\t\t\t}\n\n\t\t\tsubHelper := h\n\t\t\tsubHelper.Dispenser = caddyfile.NewDispenser(seg)\n\t\t\tsubHelper.matcherDefs = matcherDefs\n\n\t\t\tresults, err := dirFunc(subHelper)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"parsing caddyfile tokens for '%s': %v\", dir, err)\n\t\t\t}\n\n\t\t\tdir = normalizeDirectiveName(dir)\n\n\t\t\tfor _, result := range results {\n\t\t\t\tresult.directive = dir\n\t\t\t\tallResults = append(allResults, result)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn allResults, nil\n}\n\n// ConfigValue represents a value to be added to the final\n// configuration, or a value to be consulted when building\n// the final configuration.\ntype ConfigValue struct {\n\t// The kind of value this is. 
As the config is\n\t// being built, the adapter will look in the\n\t// \"pile\" for values belonging to a certain\n\t// class when it is setting up a certain part\n\t// of the config. The associated value will be\n\t// type-asserted and placed accordingly.\n\tClass string\n\n\t// The value to be used when building the config.\n\t// Generally its type is associated with the\n\t// name of the Class.\n\tValue any\n\n\tdirective string\n}\n\nfunc sortRoutes(routes []ConfigValue) {\n\tdirPositions := make(map[string]int)\n\tfor i, dir := range directiveOrder {\n\t\tdirPositions[dir] = i\n\t}\n\n\tsort.SliceStable(routes, func(i, j int) bool {\n\t\t// if the directives are different, just use the established directive order\n\t\tiDir, jDir := routes[i].directive, routes[j].directive\n\t\tif iDir != jDir {\n\t\t\treturn dirPositions[iDir] < dirPositions[jDir]\n\t\t}\n\n\t\t// directives are the same; sub-sort by path matcher length if there's\n\t\t// only one matcher set and one path (this is a very common case and\n\t\t// usually -- but not always -- helpful/expected, oh well; user can\n\t\t// always take manual control of order using handler or route blocks)\n\t\tiRoute, ok := routes[i].Value.(caddyhttp.Route)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\t\tjRoute, ok := routes[j].Value.(caddyhttp.Route)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\n\t\t// decode the path matchers if there is just one matcher set\n\t\tvar iPM, jPM caddyhttp.MatchPath\n\t\tif len(iRoute.MatcherSetsRaw) == 1 {\n\t\t\t_ = json.Unmarshal(iRoute.MatcherSetsRaw[0][\"path\"], &iPM)\n\t\t}\n\t\tif len(jRoute.MatcherSetsRaw) == 1 {\n\t\t\t_ = json.Unmarshal(jRoute.MatcherSetsRaw[0][\"path\"], &jPM)\n\t\t}\n\n\t\t// if there is only one path in the path matcher, sort by longer path\n\t\t// (more specific) first; missing path matchers or multi-matchers are\n\t\t// treated as zero-length paths\n\t\tvar iPathLen, jPathLen int\n\t\tif len(iPM) == 1 {\n\t\t\tiPathLen = len(iPM[0])\n\t\t}\n\t\tif len(jPM) 
== 1 {\n\t\t\tjPathLen = len(jPM[0])\n\t\t}\n\n\t\tsortByPath := func() bool {\n\t\t\t// we can only confidently compare path lengths if both\n\t\t\t// directives have a single path to match (issue #5037)\n\t\t\tif iPathLen > 0 && jPathLen > 0 {\n\t\t\t\t// trim the trailing wildcard if there is one\n\t\t\t\tiPathTrimmed := strings.TrimSuffix(iPM[0], \"*\")\n\t\t\t\tjPathTrimmed := strings.TrimSuffix(jPM[0], \"*\")\n\n\t\t\t\t// if both paths are the same except for a trailing wildcard,\n\t\t\t\t// sort by the shorter path first (which is more specific)\n\t\t\t\tif iPathTrimmed == jPathTrimmed {\n\t\t\t\t\treturn iPathLen < jPathLen\n\t\t\t\t}\n\n\t\t\t\t// we use the trimmed length to compare the paths\n\t\t\t\t// https://github.com/caddyserver/caddy/issues/7012#issuecomment-2870142195\n\t\t\t\t// credit to https://github.com/Hellio404\n\t\t\t\t// for sorts with many items, mixing matchers w/ and w/o wildcards will confuse the sort and result in incorrect orders\n\t\t\t\tiPathLen = len(iPathTrimmed)\n\t\t\t\tjPathLen = len(jPathTrimmed)\n\n\t\t\t\t// if both paths have the same length, sort lexically\n\t\t\t\t// https://github.com/caddyserver/caddy/pull/7015#issuecomment-2871993588\n\t\t\t\tif iPathLen == jPathLen {\n\t\t\t\t\treturn iPathTrimmed < jPathTrimmed\n\t\t\t\t}\n\n\t\t\t\t// sort most-specific (longest) path first\n\t\t\t\treturn iPathLen > jPathLen\n\t\t\t}\n\n\t\t\t// if both directives don't have a single path to compare,\n\t\t\t// sort whichever one has a matcher first; if both have\n\t\t\t// a matcher, sort equally (stable sort preserves order)\n\t\t\treturn len(iRoute.MatcherSetsRaw) > 0 && len(jRoute.MatcherSetsRaw) == 0\n\t\t}()\n\n\t\t// some directives involve setting values which can overwrite\n\t\t// each other, so it makes most sense to reverse the order so\n\t\t// that the least-specific matcher is first, allowing the last\n\t\t// matching one to win\n\t\tif iDir == \"vars\" {\n\t\t\treturn !sortByPath\n\t\t}\n\n\t\t// everything else is 
most-specific matcher first\n\t\treturn sortByPath\n\t})\n}\n\n// serverBlock pairs a Caddyfile server block with\n// a \"pile\" of config values, keyed by class name,\n// as well as its parsed keys for convenience.\ntype serverBlock struct {\n\tblock      caddyfile.ServerBlock\n\tpile       map[string][]ConfigValue // config values obtained from directives\n\tparsedKeys []Address\n}\n\n// hostsFromKeys returns a list of all the non-empty hostnames found in\n// the keys of the server block sb. If logger mode is false, a key with\n// an empty hostname portion will return an empty slice, since that\n// server block is interpreted to effectively match all hosts. An empty\n// string is never added to the slice.\n//\n// If loggerMode is true, then the non-standard ports of keys will be\n// joined to the hostnames. This is to effectively match the Host\n// header of requests that come in for that key.\n//\n// The resulting slice is not sorted but will never have duplicates.\nfunc (sb serverBlock) hostsFromKeys(loggerMode bool) []string {\n\t// ensure each entry in our list is unique\n\thostMap := make(map[string]struct{})\n\tfor _, addr := range sb.parsedKeys {\n\t\tif addr.Host == \"\" {\n\t\t\tif !loggerMode {\n\t\t\t\t// server block contains a key like \":443\", i.e. 
the host portion\n\t\t\t\t// is empty / catch-all, which means to match all hosts\n\t\t\t\treturn []string{}\n\t\t\t}\n\t\t\t// never append an empty string\n\t\t\tcontinue\n\t\t}\n\t\tif loggerMode &&\n\t\t\taddr.Port != \"\" &&\n\t\t\taddr.Port != strconv.Itoa(caddyhttp.DefaultHTTPPort) &&\n\t\t\taddr.Port != strconv.Itoa(caddyhttp.DefaultHTTPSPort) {\n\t\t\thostMap[net.JoinHostPort(addr.Host, addr.Port)] = struct{}{}\n\t\t} else {\n\t\t\thostMap[addr.Host] = struct{}{}\n\t\t}\n\t}\n\n\t// convert map to slice\n\tsblockHosts := make([]string, 0, len(hostMap))\n\tfor host := range hostMap {\n\t\tsblockHosts = append(sblockHosts, host)\n\t}\n\n\treturn sblockHosts\n}\n\nfunc (sb serverBlock) hostsFromKeysNotHTTP(httpPort string) []string {\n\t// ensure each entry in our list is unique\n\thostMap := make(map[string]struct{})\n\tfor _, addr := range sb.parsedKeys {\n\t\tif addr.Host == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tif addr.Scheme != \"http\" && addr.Port != httpPort {\n\t\t\thostMap[addr.Host] = struct{}{}\n\t\t}\n\t}\n\n\t// convert map to slice\n\tsblockHosts := make([]string, 0, len(hostMap))\n\tfor host := range hostMap {\n\t\tsblockHosts = append(sblockHosts, host)\n\t}\n\n\treturn sblockHosts\n}\n\n// hasHostCatchAllKey returns true if sb has a key that\n// omits a host portion, i.e. 
it \"catches all\" hosts.\nfunc (sb serverBlock) hasHostCatchAllKey() bool {\n\treturn slices.ContainsFunc(sb.parsedKeys, func(addr Address) bool {\n\t\treturn addr.Host == \"\"\n\t})\n}\n\n// isAllHTTP returns true if all sb keys explicitly specify\n// the http:// scheme\nfunc (sb serverBlock) isAllHTTP() bool {\n\treturn !slices.ContainsFunc(sb.parsedKeys, func(addr Address) bool {\n\t\treturn addr.Scheme != \"http\"\n\t})\n}\n\n// Positional are the supported modes for ordering directives.\ntype Positional string\n\nconst (\n\tBefore Positional = \"before\"\n\tAfter  Positional = \"after\"\n\tFirst  Positional = \"first\"\n\tLast   Positional = \"last\"\n)\n\ntype (\n\t// UnmarshalFunc is a function which can unmarshal Caddyfile\n\t// tokens into zero or more config values using a Helper type.\n\t// These are passed in a call to RegisterDirective.\n\tUnmarshalFunc func(h Helper) ([]ConfigValue, error)\n\n\t// UnmarshalHandlerFunc is like UnmarshalFunc, except the\n\t// output of the unmarshaling is an HTTP handler. This\n\t// function does not need to deal with HTTP request matching\n\t// which is abstracted away. Since writing HTTP handlers\n\t// with Caddyfile support is very common, this is a more\n\t// convenient way to add a handler to the chain since a lot\n\t// of the details common to HTTP handlers are taken care of\n\t// for you. These are passed to a call to\n\t// RegisterHandlerDirective.\n\tUnmarshalHandlerFunc func(h Helper) (caddyhttp.MiddlewareHandler, error)\n\n\t// UnmarshalGlobalFunc is a function which can unmarshal Caddyfile\n\t// tokens from a global option. It is passed the tokens to parse and\n\t// existing value from the previous instance of this global option\n\t// (if any). 
It returns the value to associate with this global option.\n\tUnmarshalGlobalFunc func(d *caddyfile.Dispenser, existingVal any) (any, error)\n)\n\nvar registeredDirectives = make(map[string]UnmarshalFunc)\n\nvar registeredGlobalOptions = make(map[string]UnmarshalGlobalFunc)\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/directives_test.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"reflect\"\n\t\"sort\"\n\t\"testing\"\n)\n\nfunc TestHostsFromKeys(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tkeys             []Address\n\t\texpectNormalMode []string\n\t\texpectLoggerMode []string\n\t}{\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"foo\", Host: \"foo\"},\n\t\t\t},\n\t\t\t[]string{\"foo\"},\n\t\t\t[]string{\"foo\"},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"foo\", Host: \"foo\"},\n\t\t\t\t{Original: \"bar\", Host: \"bar\"},\n\t\t\t},\n\t\t\t[]string{\"bar\", \"foo\"},\n\t\t\t[]string{\"bar\", \"foo\"},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \":2015\", Port: \"2015\"},\n\t\t\t},\n\t\t\t[]string{},\n\t\t\t[]string{},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \":443\", Port: \"443\"},\n\t\t\t},\n\t\t\t[]string{},\n\t\t\t[]string{},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"foo\", Host: \"foo\"},\n\t\t\t\t{Original: \":2015\", Port: \"2015\"},\n\t\t\t},\n\t\t\t[]string{},\n\t\t\t[]string{\"foo\"},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"example.com:2015\", Host: \"example.com\", Port: \"2015\"},\n\t\t\t},\n\t\t\t[]string{\"example.com\"},\n\t\t\t[]string{\"example.com:2015\"},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"example.com:80\", Host: \"example.com\", Port: \"80\"},\n\t\t\t},\n\t\t\t[]string{\"example.com\"},\n\t\t\t[]string{\"example.com\"},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"https://:2015/foo\", Scheme: \"https\", Port: \"2015\", Path: \"/foo\"},\n\t\t\t},\n\t\t\t[]string{},\n\t\t\t[]string{},\n\t\t},\n\t\t{\n\t\t\t[]Address{\n\t\t\t\t{Original: \"https://example.com:2015/foo\", Scheme: \"https\", Host: \"example.com\", Port: \"2015\", Path: \"/foo\"},\n\t\t\t},\n\t\t\t[]string{\"example.com\"},\n\t\t\t[]string{\"example.com:2015\"},\n\t\t},\n\t} {\n\t\tsb := serverBlock{parsedKeys: tc.keys}\n\n\t\t// test in normal mode\n\t\tactual := 
sb.hostsFromKeys(false)\n\t\tsort.Strings(actual)\n\t\tif !reflect.DeepEqual(tc.expectNormalMode, actual) {\n\t\t\tt.Errorf(\"Test %d (loggerMode=false): Expected: %v Actual: %v\", i, tc.expectNormalMode, actual)\n\t\t}\n\n\t\t// test in logger mode\n\t\tactual = sb.hostsFromKeys(true)\n\t\tsort.Strings(actual)\n\t\tif !reflect.DeepEqual(tc.expectLoggerMode, actual) {\n\t\t\tt.Errorf(\"Test %d (loggerMode=true): Expected: %v Actual: %v\", i, tc.expectLoggerMode, actual)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/httptype.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"cmp\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net\"\n\t\"reflect\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddyconfig.RegisterAdapter(\"caddyfile\", caddyfile.Adapter{ServerType: ServerType{}})\n}\n\n// App represents the configuration for a non-standard\n// Caddy app module (e.g. 
third-party plugin) which was\n// parsed from a global options block.\ntype App struct {\n\t// The JSON key for the app being configured\n\tName string\n\n\t// The raw app config as JSON\n\tValue json.RawMessage\n}\n\n// ServerType can set up a config from an HTTP Caddyfile.\ntype ServerType struct{}\n\n// Setup makes a config from the tokens.\nfunc (st ServerType) Setup(\n\tinputServerBlocks []caddyfile.ServerBlock,\n\toptions map[string]any,\n) (*caddy.Config, []caddyconfig.Warning, error) {\n\tvar warnings []caddyconfig.Warning\n\tgc := counter{new(int)}\n\tstate := make(map[string]any)\n\n\t// load all the server blocks and associate them with a \"pile\" of config values\n\toriginalServerBlocks := make([]serverBlock, 0, len(inputServerBlocks))\n\tfor _, sblock := range inputServerBlocks {\n\t\tfor j, k := range sblock.Keys {\n\t\t\tif j == 0 && strings.HasPrefix(k.Text, \"@\") {\n\t\t\t\treturn nil, warnings, fmt.Errorf(\"%s:%d: cannot define a matcher outside of a site block: '%s'\", k.File, k.Line, k.Text)\n\t\t\t}\n\t\t\tif _, ok := registeredDirectives[k.Text]; ok {\n\t\t\t\treturn nil, warnings, fmt.Errorf(\"%s:%d: parsed '%s' as a site address, but it is a known directive; directives must appear in a site block\", k.File, k.Line, k.Text)\n\t\t\t}\n\t\t}\n\t\toriginalServerBlocks = append(originalServerBlocks, serverBlock{\n\t\t\tblock: sblock,\n\t\t\tpile:  make(map[string][]ConfigValue),\n\t\t})\n\t}\n\n\t// apply any global options\n\tvar err error\n\toriginalServerBlocks, err = st.evaluateGlobalOptionsBlock(originalServerBlocks, options)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\t// this will replace both static and user-defined placeholder shorthands\n\t// with actual identifiers used by Caddy\n\treplacer := NewShorthandReplacer()\n\n\toriginalServerBlocks, err = st.extractNamedRoutes(originalServerBlocks, options, &warnings, replacer)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\tfor _, sb := range originalServerBlocks 
{\n\t\tfor i := range sb.block.Segments {\n\t\t\treplacer.ApplyToSegment(&sb.block.Segments[i])\n\t\t}\n\n\t\tif len(sb.block.Keys) == 0 {\n\t\t\treturn nil, warnings, fmt.Errorf(\"server block without any key is global configuration, and if used, it must be first\")\n\t\t}\n\n\t\t// extract matcher definitions\n\t\tmatcherDefs := make(map[string]caddy.ModuleMap)\n\t\tfor _, segment := range sb.block.Segments {\n\t\t\tif dir := segment.Directive(); strings.HasPrefix(dir, matcherPrefix) {\n\t\t\t\td := sb.block.DispenseDirective(dir)\n\t\t\t\terr := parseMatcherDefinitions(d, matcherDefs)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, warnings, err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// evaluate each directive (\"segment\") in this block\n\t\tfor _, segment := range sb.block.Segments {\n\t\t\tdir := segment.Directive()\n\n\t\t\tif strings.HasPrefix(dir, matcherPrefix) {\n\t\t\t\t// matcher definitions were pre-processed\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tdirFunc, ok := registeredDirectives[dir]\n\t\t\tif !ok {\n\t\t\t\ttkn := segment[0]\n\t\t\t\tmessage := \"%s:%d: unrecognized directive: %s\"\n\t\t\t\tif !sb.block.HasBraces {\n\t\t\t\t\tmessage += \"\\nDid you mean to define a second site? 
If so, you must use curly braces around each site to separate their configurations.\"\n\t\t\t\t}\n\t\t\t\treturn nil, warnings, fmt.Errorf(message, tkn.File, tkn.Line, dir)\n\t\t\t}\n\n\t\t\th := Helper{\n\t\t\t\tDispenser:    caddyfile.NewDispenser(segment),\n\t\t\t\toptions:      options,\n\t\t\t\twarnings:     &warnings,\n\t\t\t\tmatcherDefs:  matcherDefs,\n\t\t\t\tparentBlock:  sb.block,\n\t\t\t\tgroupCounter: gc,\n\t\t\t\tState:        state,\n\t\t\t}\n\n\t\t\tresults, err := dirFunc(h)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, warnings, fmt.Errorf(\"parsing caddyfile tokens for '%s': %v\", dir, err)\n\t\t\t}\n\n\t\t\tdir = normalizeDirectiveName(dir)\n\n\t\t\tfor _, result := range results {\n\t\t\t\tresult.directive = dir\n\t\t\t\tsb.pile[result.Class] = append(sb.pile[result.Class], result)\n\t\t\t}\n\n\t\t\t// specially handle named routes that were pulled out from\n\t\t\t// the invoke directive, which could be nested anywhere within\n\t\t\t// some subroutes in this directive; we add them to the pile\n\t\t\t// for this server block\n\t\t\tif state[namedRouteKey] != nil {\n\t\t\t\tfor name := range state[namedRouteKey].(map[string]struct{}) {\n\t\t\t\t\tresult := ConfigValue{Class: namedRouteKey, Value: name}\n\t\t\t\t\tsb.pile[result.Class] = append(sb.pile[result.Class], result)\n\t\t\t\t}\n\t\t\t\tstate[namedRouteKey] = nil\n\t\t\t}\n\t\t}\n\t}\n\n\t// map\n\tsbmap, err := st.mapAddressToProtocolToServerBlocks(originalServerBlocks, options)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\t// reduce\n\tpairings := st.consolidateAddrMappings(sbmap)\n\n\t// each pairing of listener addresses to list of server\n\t// blocks is basically a server definition\n\tservers, err := st.serversFromPairings(pairings, options, &warnings, gc)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\t// hoist the metrics config from per-server to global\n\tmetrics, _ := options[\"metrics\"].(*caddyhttp.Metrics)\n\tfor _, s := range servers {\n\t\tif 
s.Metrics != nil {\n\t\t\tmetrics = cmp.Or(metrics, &caddyhttp.Metrics{})\n\t\t\tmetrics = &caddyhttp.Metrics{\n\t\t\t\tPerHost: metrics.PerHost || s.Metrics.PerHost,\n\t\t\t}\n\t\t\ts.Metrics = nil // we don't need it anymore\n\t\t}\n\t}\n\n\t// now that each server is configured, make the HTTP app\n\thttpApp := caddyhttp.App{\n\t\tHTTPPort:      tryInt(options[\"http_port\"], &warnings),\n\t\tHTTPSPort:     tryInt(options[\"https_port\"], &warnings),\n\t\tGracePeriod:   tryDuration(options[\"grace_period\"], &warnings),\n\t\tShutdownDelay: tryDuration(options[\"shutdown_delay\"], &warnings),\n\t\tMetrics:       metrics,\n\t\tServers:       servers,\n\t}\n\n\t// then make the TLS app\n\ttlsApp, warnings, err := st.buildTLSApp(pairings, options, warnings)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\t// then make the PKI app\n\tpkiApp, warnings, err := st.buildPKIApp(pairings, options, warnings)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\n\t// extract any custom logs, and enforce configured levels\n\tvar customLogs []namedCustomLog\n\tvar hasDefaultLog bool\n\taddCustomLog := func(ncl namedCustomLog) {\n\t\tif ncl.name == \"\" {\n\t\t\treturn\n\t\t}\n\t\tif ncl.name == caddy.DefaultLoggerName {\n\t\t\thasDefaultLog = true\n\t\t}\n\t\tif _, ok := options[\"debug\"]; ok && ncl.log != nil && ncl.log.Level == \"\" {\n\t\t\tncl.log.Level = zap.DebugLevel.CapitalString()\n\t\t}\n\t\tcustomLogs = append(customLogs, ncl)\n\t}\n\n\t// Apply global log options, when set\n\tif options[\"log\"] != nil {\n\t\tfor _, logValue := range options[\"log\"].([]ConfigValue) {\n\t\t\taddCustomLog(logValue.Value.(namedCustomLog))\n\t\t}\n\t}\n\n\tif !hasDefaultLog {\n\t\t// if the default log was not customized, ensure we\n\t\t// configure it with any applicable options\n\t\tif _, ok := options[\"debug\"]; ok {\n\t\t\tcustomLogs = append(customLogs, namedCustomLog{\n\t\t\t\tname: caddy.DefaultLoggerName,\n\t\t\t\tlog: &caddy.CustomLog{\n\t\t\t\t\tBaseLog: 
caddy.BaseLog{Level: zap.DebugLevel.CapitalString()},\n\t\t\t\t},\n\t\t\t})\n\t\t}\n\t}\n\n\t// Apply server-specific log options\n\tfor _, p := range pairings {\n\t\tfor _, sb := range p.serverBlocks {\n\t\t\tfor _, clVal := range sb.pile[\"custom_log\"] {\n\t\t\t\taddCustomLog(clVal.Value.(namedCustomLog))\n\t\t\t}\n\t\t}\n\t}\n\n\t// annnd the top-level config, then we're done!\n\tcfg := &caddy.Config{AppsRaw: make(caddy.ModuleMap)}\n\n\t// loop through the configured options, and if any of\n\t// them are an httpcaddyfile App, then we insert them\n\t// into the config as raw Caddy apps\n\tfor _, opt := range options {\n\t\tif app, ok := opt.(App); ok {\n\t\t\tcfg.AppsRaw[app.Name] = app.Value\n\t\t}\n\t}\n\n\t// insert the standard Caddy apps into the config\n\tif len(httpApp.Servers) > 0 {\n\t\tcfg.AppsRaw[\"http\"] = caddyconfig.JSON(httpApp, &warnings)\n\t}\n\tif !reflect.DeepEqual(tlsApp, &caddytls.TLS{CertificatesRaw: make(caddy.ModuleMap)}) {\n\t\tcfg.AppsRaw[\"tls\"] = caddyconfig.JSON(tlsApp, &warnings)\n\t}\n\tif !reflect.DeepEqual(pkiApp, &caddypki.PKI{CAs: make(map[string]*caddypki.CA)}) {\n\t\tcfg.AppsRaw[\"pki\"] = caddyconfig.JSON(pkiApp, &warnings)\n\t}\n\tif filesystems, ok := options[\"filesystem\"].(caddy.Module); ok {\n\t\tcfg.AppsRaw[\"caddy.filesystems\"] = caddyconfig.JSON(\n\t\t\tfilesystems,\n\t\t\t&warnings)\n\t}\n\n\tif storageCvtr, ok := options[\"storage\"].(caddy.StorageConverter); ok {\n\t\tcfg.StorageRaw = caddyconfig.JSONModuleObject(storageCvtr,\n\t\t\t\"module\",\n\t\t\tstorageCvtr.(caddy.Module).CaddyModule().ID.Name(),\n\t\t\t&warnings)\n\t}\n\tif adminConfig, ok := options[\"admin\"].(*caddy.AdminConfig); ok && adminConfig != nil {\n\t\tcfg.Admin = adminConfig\n\t}\n\tif pc, ok := options[\"persist_config\"].(string); ok && pc == \"off\" {\n\t\tif cfg.Admin == nil {\n\t\t\tcfg.Admin = new(caddy.AdminConfig)\n\t\t}\n\t\tif cfg.Admin.Config == nil {\n\t\t\tcfg.Admin.Config = 
new(caddy.ConfigSettings)\n\t\t}\n\t\tcfg.Admin.Config.Persist = new(bool)\n\t}\n\n\tif len(customLogs) > 0 {\n\t\tif cfg.Logging == nil {\n\t\t\tcfg.Logging = &caddy.Logging{\n\t\t\t\tLogs: make(map[string]*caddy.CustomLog),\n\t\t\t}\n\t\t}\n\n\t\t// Add the default log first if defined, so that it doesn't\n\t\t// accidentally get re-created below due to the Exclude logic\n\t\tfor _, ncl := range customLogs {\n\t\t\tif ncl.name == caddy.DefaultLoggerName && ncl.log != nil {\n\t\t\t\tcfg.Logging.Logs[caddy.DefaultLoggerName] = ncl.log\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// Add the rest of the custom logs\n\t\tfor _, ncl := range customLogs {\n\t\t\tif ncl.log == nil || ncl.name == caddy.DefaultLoggerName {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif ncl.name != \"\" {\n\t\t\t\tcfg.Logging.Logs[ncl.name] = ncl.log\n\t\t\t}\n\t\t\t// most users seem to prefer not writing access logs\n\t\t\t// to the default log when they are directed to a\n\t\t\t// file or have any other special customization\n\t\t\tif ncl.name != caddy.DefaultLoggerName && len(ncl.log.Include) > 0 {\n\t\t\t\tdefaultLog, ok := cfg.Logging.Logs[caddy.DefaultLoggerName]\n\t\t\t\tif !ok {\n\t\t\t\t\tdefaultLog = new(caddy.CustomLog)\n\t\t\t\t\tcfg.Logging.Logs[caddy.DefaultLoggerName] = defaultLog\n\t\t\t\t}\n\t\t\t\tdefaultLog.Exclude = append(defaultLog.Exclude, ncl.log.Include...)\n\n\t\t\t\t// avoid duplicates by sorting + compacting\n\t\t\t\tsort.Strings(defaultLog.Exclude)\n\t\t\t\tdefaultLog.Exclude = slices.Compact(defaultLog.Exclude)\n\t\t\t}\n\t\t}\n\t\t// we may have not actually added anything, so remove if empty\n\t\tif len(cfg.Logging.Logs) == 0 {\n\t\t\tcfg.Logging = nil\n\t\t}\n\t}\n\n\treturn cfg, warnings, nil\n}\n\n// evaluateGlobalOptionsBlock evaluates the global options block,\n// which is expected to be the first server block if it has zero\n// keys. 
It returns the updated list of server blocks with the\n// global options block removed, and updates options accordingly.\nfunc (ServerType) evaluateGlobalOptionsBlock(serverBlocks []serverBlock, options map[string]any) ([]serverBlock, error) {\n\tif len(serverBlocks) == 0 || len(serverBlocks[0].block.Keys) > 0 {\n\t\treturn serverBlocks, nil\n\t}\n\n\tfor _, segment := range serverBlocks[0].block.Segments {\n\t\topt := segment.Directive()\n\t\tvar val any\n\t\tvar err error\n\t\tdisp := caddyfile.NewDispenser(segment)\n\n\t\toptFunc, ok := registeredGlobalOptions[opt]\n\t\tif !ok {\n\t\t\ttkn := segment[0]\n\t\t\treturn nil, fmt.Errorf(\"%s:%d: unrecognized global option: %s\", tkn.File, tkn.Line, opt)\n\t\t}\n\n\t\tval, err = optFunc(disp, options[opt])\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"parsing caddyfile tokens for '%s': %v\", opt, err)\n\t\t}\n\n\t\t// As a special case, fold multiple \"servers\" options together\n\t\t// in an array instead of overwriting a possible existing value\n\t\tif opt == \"servers\" {\n\t\t\texistingOpts, ok := options[opt].([]serverOptions)\n\t\t\tif !ok {\n\t\t\t\texistingOpts = []serverOptions{}\n\t\t\t}\n\t\t\tserverOpts, ok := val.(serverOptions)\n\t\t\tif !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"unexpected type from 'servers' global options: %T\", val)\n\t\t\t}\n\t\t\toptions[opt] = append(existingOpts, serverOpts)\n\t\t\tcontinue\n\t\t}\n\t\t// Additionally, fold multiple \"log\" options together into an\n\t\t// array so that multiple loggers can be configured.\n\t\tif opt == \"log\" {\n\t\t\texistingOpts, ok := options[opt].([]ConfigValue)\n\t\t\tif !ok {\n\t\t\t\texistingOpts = []ConfigValue{}\n\t\t\t}\n\t\t\tlogOpts, ok := val.([]ConfigValue)\n\t\t\tif !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"unexpected type from 'log' global options: %T\", val)\n\t\t\t}\n\t\t\toptions[opt] = append(existingOpts, logOpts...)\n\t\t\tcontinue\n\t\t}\n\t\t// Also fold multiple \"default_bind\" options together into an\n\t\t// array 
so that server blocks can have multiple binds by default.\n\t\tif opt == \"default_bind\" {\n\t\t\texistingOpts, ok := options[opt].([]ConfigValue)\n\t\t\tif !ok {\n\t\t\t\texistingOpts = []ConfigValue{}\n\t\t\t}\n\t\t\tdefaultBindOpts, ok := val.([]ConfigValue)\n\t\t\tif !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"unexpected type from 'default_bind' global options: %T\", val)\n\t\t\t}\n\t\t\toptions[opt] = append(existingOpts, defaultBindOpts...)\n\t\t\tcontinue\n\t\t}\n\n\t\toptions[opt] = val\n\t}\n\n\t// If we got \"servers\" options, we'll sort them by their listener address\n\tif serverOpts, ok := options[\"servers\"].([]serverOptions); ok {\n\t\tsort.Slice(serverOpts, func(i, j int) bool {\n\t\t\treturn len(serverOpts[i].ListenerAddress) > len(serverOpts[j].ListenerAddress)\n\t\t})\n\n\t\t// Reject the config if there are duplicate listener addresses\n\t\tseen := make(map[string]bool)\n\t\tfor _, entry := range serverOpts {\n\t\t\tif _, alreadySeen := seen[entry.ListenerAddress]; alreadySeen {\n\t\t\t\treturn nil, fmt.Errorf(\"cannot have 'servers' global options with duplicate listener addresses: %s\", entry.ListenerAddress)\n\t\t\t}\n\t\t\tseen[entry.ListenerAddress] = true\n\t\t}\n\t}\n\n\treturn serverBlocks[1:], nil\n}\n\n// extractNamedRoutes pulls out any named route server blocks\n// so they don't get parsed as sites, and stores them in options\n// for later.\nfunc (ServerType) extractNamedRoutes(\n\tserverBlocks []serverBlock,\n\toptions map[string]any,\n\twarnings *[]caddyconfig.Warning,\n\treplacer ShorthandReplacer,\n) ([]serverBlock, error) {\n\tnamedRoutes := map[string]*caddyhttp.Route{}\n\n\tgc := counter{new(int)}\n\tstate := make(map[string]any)\n\n\t// copy the server blocks so we can\n\t// splice out the named route ones\n\tfiltered := append([]serverBlock{}, serverBlocks...)\n\tindex := -1\n\n\tfor _, sb := range serverBlocks {\n\t\tindex++\n\t\tif !sb.block.IsNamedRoute {\n\t\t\tcontinue\n\t\t}\n\n\t\t// splice out this block, because we know 
it's not a real server\n\t\tfiltered = append(filtered[:index], filtered[index+1:]...)\n\t\tindex--\n\n\t\tif len(sb.block.Segments) == 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\twholeSegment := caddyfile.Segment{}\n\t\tfor i := range sb.block.Segments {\n\t\t\t// replace user-defined placeholder shorthands in extracted named routes\n\t\t\treplacer.ApplyToSegment(&sb.block.Segments[i])\n\n\t\t\t// zip up all the segments since ParseSegmentAsSubroute\n\t\t\t// was designed to take a directive+\n\t\t\twholeSegment = append(wholeSegment, sb.block.Segments[i]...)\n\t\t}\n\n\t\th := Helper{\n\t\t\tDispenser:    caddyfile.NewDispenser(wholeSegment),\n\t\t\toptions:      options,\n\t\t\twarnings:     warnings,\n\t\t\tmatcherDefs:  nil,\n\t\t\tparentBlock:  sb.block,\n\t\t\tgroupCounter: gc,\n\t\t\tState:        state,\n\t\t}\n\n\t\thandler, err := ParseSegmentAsSubroute(h)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tsubroute := handler.(*caddyhttp.Subroute)\n\t\troute := caddyhttp.Route{}\n\n\t\tif len(subroute.Routes) == 1 && len(subroute.Routes[0].MatcherSetsRaw) == 0 {\n\t\t\t// if there's only one route with no matcher, then we can simplify\n\t\t\troute.HandlersRaw = append(route.HandlersRaw, subroute.Routes[0].HandlersRaw[0])\n\t\t} else {\n\t\t\t// otherwise we need the whole subroute\n\t\t\troute.HandlersRaw = []json.RawMessage{caddyconfig.JSONModuleObject(handler, \"handler\", subroute.CaddyModule().ID.Name(), h.warnings)}\n\t\t}\n\n\t\tnamedRoutes[sb.block.GetKeysText()[0]] = &route\n\t}\n\toptions[\"named_routes\"] = namedRoutes\n\n\treturn filtered, nil\n}\n\n// serversFromPairings creates the servers for each pairing of addresses\n// to server blocks. 
Each pairing is essentially a server definition.\nfunc (st *ServerType) serversFromPairings(\n\tpairings []sbAddrAssociation,\n\toptions map[string]any,\n\twarnings *[]caddyconfig.Warning,\n\tgroupCounter counter,\n) (map[string]*caddyhttp.Server, error) {\n\tservers := make(map[string]*caddyhttp.Server)\n\tdefaultSNI := tryString(options[\"default_sni\"], warnings)\n\tfallbackSNI := tryString(options[\"fallback_sni\"], warnings)\n\n\thttpPort := strconv.Itoa(caddyhttp.DefaultHTTPPort)\n\tif hp, ok := options[\"http_port\"].(int); ok {\n\t\thttpPort = strconv.Itoa(hp)\n\t}\n\thttpsPort := strconv.Itoa(caddyhttp.DefaultHTTPSPort)\n\tif hsp, ok := options[\"https_port\"].(int); ok {\n\t\thttpsPort = strconv.Itoa(hsp)\n\t}\n\tautoHTTPS := []string{}\n\tif ah, ok := options[\"auto_https\"].([]string); ok {\n\t\tautoHTTPS = ah\n\t}\n\n\tfor i, p := range pairings {\n\t\t// detect ambiguous site definitions: server blocks which\n\t\t// have the same host bound to the same interface (listener\n\t\t// address), otherwise their routes will improperly be added\n\t\t// to the same server (see issue #4635)\n\t\tfor j, sblock1 := range p.serverBlocks {\n\t\t\tfor _, key := range sblock1.block.GetKeysText() {\n\t\t\t\tfor k, sblock2 := range p.serverBlocks {\n\t\t\t\t\tif k == j {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tif slices.Contains(sblock2.block.GetKeysText(), key) {\n\t\t\t\t\t\treturn nil, fmt.Errorf(\"ambiguous site definition: %s\", key)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tvar (\n\t\t\taddresses []string\n\t\t\tprotocols [][]string\n\t\t)\n\n\t\tfor _, addressWithProtocols := range p.addressesWithProtocols {\n\t\t\taddresses = append(addresses, addressWithProtocols.address)\n\t\t\tprotocols = append(protocols, addressWithProtocols.protocols)\n\t\t}\n\n\t\tsrv := &caddyhttp.Server{\n\t\t\tListen:          addresses,\n\t\t\tListenProtocols: protocols,\n\t\t}\n\n\t\t// remove srv.ListenProtocols[j] if it only contains the default protocols\n\t\tfor j, 
lnProtocols := range srv.ListenProtocols {\n\t\t\tsrv.ListenProtocols[j] = nil\n\t\t\tfor _, lnProtocol := range lnProtocols {\n\t\t\t\tif lnProtocol != \"\" {\n\t\t\t\t\tsrv.ListenProtocols[j] = lnProtocols\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// remove srv.ListenProtocols if it only contains the default protocols for all listen addresses\n\t\tlistenProtocols := srv.ListenProtocols\n\t\tsrv.ListenProtocols = nil\n\t\tfor _, lnProtocols := range listenProtocols {\n\t\t\tif lnProtocols != nil {\n\t\t\t\tsrv.ListenProtocols = listenProtocols\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// handle the auto_https global option\n\t\tfor _, val := range autoHTTPS {\n\t\t\tswitch val {\n\t\t\tcase \"off\":\n\t\t\t\tif srv.AutoHTTPS == nil {\n\t\t\t\t\tsrv.AutoHTTPS = new(caddyhttp.AutoHTTPSConfig)\n\t\t\t\t}\n\t\t\t\tsrv.AutoHTTPS.Disabled = true\n\n\t\t\tcase \"disable_redirects\":\n\t\t\t\tif srv.AutoHTTPS == nil {\n\t\t\t\t\tsrv.AutoHTTPS = new(caddyhttp.AutoHTTPSConfig)\n\t\t\t\t}\n\t\t\t\tsrv.AutoHTTPS.DisableRedir = true\n\n\t\t\tcase \"disable_certs\":\n\t\t\t\tif srv.AutoHTTPS == nil {\n\t\t\t\t\tsrv.AutoHTTPS = new(caddyhttp.AutoHTTPSConfig)\n\t\t\t\t}\n\t\t\t\tsrv.AutoHTTPS.DisableCerts = true\n\n\t\t\tcase \"ignore_loaded_certs\":\n\t\t\t\tif srv.AutoHTTPS == nil {\n\t\t\t\t\tsrv.AutoHTTPS = new(caddyhttp.AutoHTTPSConfig)\n\t\t\t\t}\n\t\t\t\tsrv.AutoHTTPS.IgnoreLoadedCerts = true\n\t\t\t}\n\t\t}\n\n\t\t// Using paths in site addresses is deprecated\n\t\t// See ParseAddress() where parsing should later reject paths\n\t\t// See https://github.com/caddyserver/caddy/pull/4728 for a full explanation\n\t\tfor _, sblock := range p.serverBlocks {\n\t\t\tfor _, addr := range sblock.parsedKeys {\n\t\t\t\tif addr.Path != \"\" {\n\t\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\"Using a path in a site address is deprecated; please use the 'handle' directive instead\", zap.String(\"address\", addr.String()))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// sort server blocks by 
their keys; this is important because\n\t\t// only the first matching site should be evaluated, and we should\n\t\t// attempt to match most specific site first (host and path), in\n\t\t// case their matchers overlap; we do this somewhat naively by\n\t\t// descending sort by length of host then path\n\t\tsort.SliceStable(p.serverBlocks, func(i, j int) bool {\n\t\t\t// TODO: we could pre-process the specificities for efficiency,\n\t\t\t// but I don't expect many blocks will have THAT many keys...\n\t\t\tvar iLongestPath, jLongestPath string\n\t\t\tvar iLongestHost, jLongestHost string\n\t\t\tvar iWildcardHost, jWildcardHost bool\n\t\t\tfor _, addr := range p.serverBlocks[i].parsedKeys {\n\t\t\t\tif strings.Contains(addr.Host, \"*\") || addr.Host == \"\" {\n\t\t\t\t\tiWildcardHost = true\n\t\t\t\t}\n\t\t\t\tif specificity(addr.Host) > specificity(iLongestHost) {\n\t\t\t\t\tiLongestHost = addr.Host\n\t\t\t\t}\n\t\t\t\tif specificity(addr.Path) > specificity(iLongestPath) {\n\t\t\t\t\tiLongestPath = addr.Path\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor _, addr := range p.serverBlocks[j].parsedKeys {\n\t\t\t\tif strings.Contains(addr.Host, \"*\") || addr.Host == \"\" {\n\t\t\t\t\tjWildcardHost = true\n\t\t\t\t}\n\t\t\t\tif specificity(addr.Host) > specificity(jLongestHost) {\n\t\t\t\t\tjLongestHost = addr.Host\n\t\t\t\t}\n\t\t\t\tif specificity(addr.Path) > specificity(jLongestPath) {\n\t\t\t\t\tjLongestPath = addr.Path\n\t\t\t\t}\n\t\t\t}\n\t\t\t// catch-all blocks (blocks with no hostname) should always go\n\t\t\t// last, even after blocks with wildcard hosts\n\t\t\tif specificity(iLongestHost) == 0 {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif specificity(jLongestHost) == 0 {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\tif iWildcardHost != jWildcardHost {\n\t\t\t\t// site blocks that have a key with a wildcard in the hostname\n\t\t\t\t// must always be less specific than blocks without one; see\n\t\t\t\t// https://github.com/caddyserver/caddy/issues/3410\n\t\t\t\treturn jWildcardHost && 
!iWildcardHost\n\t\t\t}\n\t\t\tif specificity(iLongestHost) == specificity(jLongestHost) {\n\t\t\t\treturn len(iLongestPath) > len(jLongestPath)\n\t\t\t}\n\t\t\treturn specificity(iLongestHost) > specificity(jLongestHost)\n\t\t})\n\n\t\tvar hasCatchAllTLSConnPolicy, addressQualifiesForTLS bool\n\t\tautoHTTPSWillAddConnPolicy := srv.AutoHTTPS == nil || !srv.AutoHTTPS.Disabled\n\n\t\t// if needed, the ServerLogConfig is initialized beforehand so\n\t\t// that all server blocks can populate it with data, even when not\n\t\t// coming with a log directive\n\t\tfor _, sblock := range p.serverBlocks {\n\t\t\tif len(sblock.pile[\"custom_log\"]) != 0 {\n\t\t\t\tsrv.Logs = new(caddyhttp.ServerLogConfig)\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// add named routes to the server if 'invoke' was used inside of it\n\t\tconfiguredNamedRoutes := options[\"named_routes\"].(map[string]*caddyhttp.Route)\n\t\tfor _, sblock := range p.serverBlocks {\n\t\t\tif len(sblock.pile[namedRouteKey]) == 0 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfor _, value := range sblock.pile[namedRouteKey] {\n\t\t\t\tif srv.NamedRoutes == nil {\n\t\t\t\t\tsrv.NamedRoutes = map[string]*caddyhttp.Route{}\n\t\t\t\t}\n\t\t\t\tname := value.Value.(string)\n\t\t\t\tif configuredNamedRoutes[name] == nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"cannot invoke named route '%s', which was not defined\", name)\n\t\t\t\t}\n\t\t\t\tsrv.NamedRoutes[name] = configuredNamedRoutes[name]\n\t\t\t}\n\t\t}\n\n\t\t// create a subroute for each site in the server block\n\t\tfor _, sblock := range p.serverBlocks {\n\t\t\tmatcherSetsEnc, err := st.compileEncodedMatcherSets(sblock)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"server block %v: compiling matcher sets: %v\", sblock.block.Keys, err)\n\t\t\t}\n\n\t\t\thosts := sblock.hostsFromKeys(false)\n\n\t\t\t// emit warnings if user put unspecified IP addresses; they probably want the bind directive\n\t\t\tfor _, h := range hosts {\n\t\t\t\tif h == \"0.0.0.0\" || h == \"::\" 
{\n\t\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\"Site block has an unspecified IP address which only matches requests having that Host header; you probably want the 'bind' directive to configure the socket\", zap.String(\"address\", h))\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// collect hosts that are forced to be automated\n\t\t\tforceAutomatedNames := make(map[string]struct{})\n\t\t\tif _, ok := sblock.pile[\"tls.force_automate\"]; ok {\n\t\t\t\tfor _, host := range hosts {\n\t\t\t\t\tforceAutomatedNames[host] = struct{}{}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// tls: connection policies\n\t\t\tif cpVals, ok := sblock.pile[\"tls.connection_policy\"]; ok {\n\t\t\t\t// tls connection policies\n\t\t\t\tfor _, cpVal := range cpVals {\n\t\t\t\t\tcp := cpVal.Value.(*caddytls.ConnectionPolicy)\n\n\t\t\t\t\t// make sure the policy covers all hostnames from the block\n\t\t\t\t\tfor _, h := range hosts {\n\t\t\t\t\t\tif h == defaultSNI {\n\t\t\t\t\t\t\thosts = append(hosts, \"\")\n\t\t\t\t\t\t\tcp.DefaultSNI = defaultSNI\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif h == fallbackSNI {\n\t\t\t\t\t\t\thosts = append(hosts, \"\")\n\t\t\t\t\t\t\tcp.FallbackSNI = fallbackSNI\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif len(hosts) > 0 {\n\t\t\t\t\t\tslices.Sort(hosts) // for deterministic JSON output\n\t\t\t\t\t\tcp.MatchersRaw = caddy.ModuleMap{\n\t\t\t\t\t\t\t\"sni\": caddyconfig.JSON(hosts, warnings), // make sure to match all hosts, not just auto-HTTPS-qualified ones\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tcp.DefaultSNI = defaultSNI\n\t\t\t\t\t\tcp.FallbackSNI = fallbackSNI\n\t\t\t\t\t}\n\n\t\t\t\t\t// only append this policy if it actually changes something,\n\t\t\t\t\t// or if the configuration explicitly automates certs for\n\t\t\t\t\t// these names (this is necessary to hoist a connection policy\n\t\t\t\t\t// above one that may manually load a wildcard cert that would\n\t\t\t\t\t// otherwise clobber the automated one; the code that appends\n\t\t\t\t\t// 
policies that manually load certs comes later, so they're\n\t\t\t\t\t// lower in the list)\n\t\t\t\t\tif !cp.SettingsEmpty() || mapContains(forceAutomatedNames, hosts) {\n\t\t\t\t\t\tsrv.TLSConnPolicies = append(srv.TLSConnPolicies, cp)\n\t\t\t\t\t\thasCatchAllTLSConnPolicy = len(hosts) == 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor _, addr := range sblock.parsedKeys {\n\t\t\t\t// if server only uses HTTP port, auto-HTTPS will not apply\n\t\t\t\tif listenersUseAnyPortOtherThan(srv.Listen, httpPort) {\n\t\t\t\t\t// exclude any hosts that were defined explicitly with \"http://\"\n\t\t\t\t\t// in the key from automated cert management (issue #2998)\n\t\t\t\t\tif addr.Scheme == \"http\" && addr.Host != \"\" {\n\t\t\t\t\t\tif srv.AutoHTTPS == nil {\n\t\t\t\t\t\t\tsrv.AutoHTTPS = new(caddyhttp.AutoHTTPSConfig)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif !slices.Contains(srv.AutoHTTPS.Skip, addr.Host) {\n\t\t\t\t\t\t\tsrv.AutoHTTPS.Skip = append(srv.AutoHTTPS.Skip, addr.Host)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// If TLS is specified as a directive, it will also result in one or more connection policies being created\n\t\t\t\t// Thus, a catch-all address with a non-standard port, e.g. 
:8443, can have TLS enabled without\n\t\t\t\t// specifying the \"https://\" prefix\n\t\t\t\t// The second part of the condition allows creating a TLS conn policy even though `auto_https` has been disabled,\n\t\t\t\t// ensuring compatibility with the behavior described in the link below\n\t\t\t\t// https://caddy.community/t/making-sense-of-auto-https-and-why-disabling-it-still-serves-https-instead-of-http/9761\n\t\t\t\tcreatedTLSConnPolicies, ok := sblock.pile[\"tls.connection_policy\"]\n\t\t\t\thasTLSEnabled := (ok && len(createdTLSConnPolicies) > 0) ||\n\t\t\t\t\t(addr.Host != \"\" && (srv.AutoHTTPS == nil || !slices.Contains(srv.AutoHTTPS.Skip, addr.Host)))\n\n\t\t\t\t// we'll need to remember if the address qualifies for auto-HTTPS, so we\n\t\t\t\t// can add a TLS conn policy if necessary\n\t\t\t\tif addr.Scheme == \"https\" ||\n\t\t\t\t\t(addr.Scheme != \"http\" && addr.Port != httpPort && hasTLSEnabled) {\n\t\t\t\t\taddressQualifiesForTLS = true\n\t\t\t\t}\n\n\t\t\t\t// predict whether auto-HTTPS will add the conn policy for us; if so, we\n\t\t\t\t// may not need to add one for this server\n\t\t\t\tautoHTTPSWillAddConnPolicy = autoHTTPSWillAddConnPolicy &&\n\t\t\t\t\t(addr.Port == httpsPort || (addr.Port != httpPort && addr.Host != \"\"))\n\t\t\t}\n\n\t\t\t// Look for any config values that provide listener wrappers on the server block\n\t\t\tfor _, listenerConfig := range sblock.pile[\"listener_wrapper\"] {\n\t\t\t\tlistenerWrapper, ok := listenerConfig.Value.(caddy.ListenerWrapper)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, fmt.Errorf(\"config for a listener wrapper did not provide a value that implements caddy.ListenerWrapper\")\n\t\t\t\t}\n\t\t\t\tjsonListenerWrapper := caddyconfig.JSONModuleObject(\n\t\t\t\t\tlistenerWrapper,\n\t\t\t\t\t\"wrapper\",\n\t\t\t\t\tlistenerWrapper.(caddy.Module).CaddyModule().ID.Name(),\n\t\t\t\t\twarnings)\n\t\t\t\tsrv.ListenerWrappersRaw = append(srv.ListenerWrappersRaw, jsonListenerWrapper)\n\t\t\t}\n\n\t\t\t// Look for any config values 
that provide packet conn wrappers on the server block\n\t\t\tfor _, listenerConfig := range sblock.pile[\"packet_conn_wrapper\"] {\n\t\t\t\tpacketConnWrapper, ok := listenerConfig.Value.(caddy.PacketConnWrapper)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, fmt.Errorf(\"config for a packet conn wrapper did not provide a value that implements caddy.PacketConnWrapper\")\n\t\t\t\t}\n\t\t\t\tjsonPacketConnWrapper := caddyconfig.JSONModuleObject(\n\t\t\t\t\tpacketConnWrapper,\n\t\t\t\t\t\"wrapper\",\n\t\t\t\t\tpacketConnWrapper.(caddy.Module).CaddyModule().ID.Name(),\n\t\t\t\t\twarnings)\n\t\t\t\tsrv.PacketConnWrappersRaw = append(srv.PacketConnWrappersRaw, jsonPacketConnWrapper)\n\t\t\t}\n\n\t\t\t// set up each handler directive, making sure to honor directive order\n\t\t\tdirRoutes := sblock.pile[\"route\"]\n\t\t\tsiteSubroute, err := buildSubroute(dirRoutes, groupCounter, true)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\t// add the site block's route(s) to the server\n\t\t\tsrv.Routes = appendSubrouteToRouteList(srv.Routes, siteSubroute, matcherSetsEnc, p, warnings)\n\n\t\t\t// if error routes are defined, add those too\n\t\t\tif errorSubrouteVals, ok := sblock.pile[\"error_route\"]; ok {\n\t\t\t\tif srv.Errors == nil {\n\t\t\t\t\tsrv.Errors = new(caddyhttp.HTTPErrorConfig)\n\t\t\t\t}\n\t\t\t\tsort.SliceStable(errorSubrouteVals, func(i, j int) bool {\n\t\t\t\t\tsri, srj := errorSubrouteVals[i].Value.(*caddyhttp.Subroute), errorSubrouteVals[j].Value.(*caddyhttp.Subroute)\n\t\t\t\t\tif len(sri.Routes[0].MatcherSetsRaw) == 0 && len(srj.Routes[0].MatcherSetsRaw) != 0 {\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t\treturn true\n\t\t\t\t})\n\t\t\t\terrorsSubroute := &caddyhttp.Subroute{}\n\t\t\t\tfor _, val := range errorSubrouteVals {\n\t\t\t\t\tsr := val.Value.(*caddyhttp.Subroute)\n\t\t\t\t\terrorsSubroute.Routes = append(errorsSubroute.Routes, sr.Routes...)\n\t\t\t\t}\n\t\t\t\tsrv.Errors.Routes = appendSubrouteToRouteList(srv.Errors.Routes, 
errorsSubroute, matcherSetsEnc, p, warnings)\n\t\t\t}\n\n\t\t\t// add log associations\n\t\t\t// see https://github.com/caddyserver/caddy/issues/3310\n\t\t\tsblockLogHosts := sblock.hostsFromKeys(true)\n\t\t\tfor _, cval := range sblock.pile[\"custom_log\"] {\n\t\t\t\tncl := cval.Value.(namedCustomLog)\n\n\t\t\t\t// if `no_hostname` is set, then this logger will not\n\t\t\t\t// be associated with any of the site block's hostnames,\n\t\t\t\t// and only be usable via the `log_name` directive\n\t\t\t\t// or the `access_logger_names` variable\n\t\t\t\tif ncl.noHostname {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\tif sblock.hasHostCatchAllKey() && len(ncl.hostnames) == 0 {\n\t\t\t\t\t// all requests for hosts not able to be listed should use\n\t\t\t\t\t// this log because it's a catch-all-hosts server block\n\t\t\t\t\tsrv.Logs.DefaultLoggerName = ncl.name\n\t\t\t\t} else if len(ncl.hostnames) > 0 {\n\t\t\t\t\t// if the logger overrides the hostnames, map that to the logger name\n\t\t\t\t\tfor _, h := range ncl.hostnames {\n\t\t\t\t\t\tif srv.Logs.LoggerNames == nil {\n\t\t\t\t\t\t\tsrv.Logs.LoggerNames = make(map[string]caddyhttp.StringArray)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsrv.Logs.LoggerNames[h] = append(srv.Logs.LoggerNames[h], ncl.name)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t// otherwise, map each host to the logger name\n\t\t\t\t\tfor _, h := range sblockLogHosts {\n\t\t\t\t\t\t// strip the port from the host, if any\n\t\t\t\t\t\thost, _, err := net.SplitHostPort(h)\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\thost = h\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif srv.Logs.LoggerNames == nil {\n\t\t\t\t\t\t\tsrv.Logs.LoggerNames = make(map[string]caddyhttp.StringArray)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsrv.Logs.LoggerNames[host] = append(srv.Logs.LoggerNames[host], ncl.name)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif srv.Logs != nil && len(sblock.pile[\"custom_log\"]) == 0 {\n\t\t\t\t// server has access logs enabled, but this server block does not\n\t\t\t\t// enable access logs; 
therefore, all hosts of this server block\n\t\t\t\t// should not be access-logged\n\t\t\t\tif len(hosts) == 0 {\n\t\t\t\t\t// if the server block has a catch-all-hosts key, then we should\n\t\t\t\t\t// not log reqs to any host unless it appears in the map\n\t\t\t\t\tsrv.Logs.SkipUnmappedHosts = true\n\t\t\t\t}\n\t\t\t\tsrv.Logs.SkipHosts = append(srv.Logs.SkipHosts, sblockLogHosts...)\n\t\t\t}\n\t\t}\n\n\t\t// sort for deterministic JSON output\n\t\tif srv.Logs != nil {\n\t\t\tslices.Sort(srv.Logs.SkipHosts)\n\t\t}\n\n\t\t// a server cannot (natively) serve both HTTP and HTTPS at the\n\t\t// same time, so make sure the configuration isn't in conflict\n\t\terr := detectConflictingSchemes(srv, p.serverBlocks, options)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\t// a catch-all TLS conn policy is necessary to ensure TLS can\n\t\t// be offered to all hostnames of the server; even though only\n\t\t// one policy is needed to enable TLS for the server, that\n\t\t// policy might apply to only certain TLS handshakes; but when\n\t\t// using the Caddyfile, user would expect all handshakes to at\n\t\t// least have a matching connection policy, so here we append a\n\t\t// catch-all/default policy if there isn't one already (it's\n\t\t// important that it goes at the end) - see issue #3004:\n\t\t// https://github.com/caddyserver/caddy/issues/3004\n\t\t// TODO: maybe a smarter way to handle this might be to just make the\n\t\t// auto-HTTPS logic at provision-time detect if there is any connection\n\t\t// policy missing for any HTTPS-enabled hosts, if so, add it... 
maybe?\n\t\tif addressQualifiesForTLS &&\n\t\t\t!hasCatchAllTLSConnPolicy &&\n\t\t\t(len(srv.TLSConnPolicies) > 0 || !autoHTTPSWillAddConnPolicy || defaultSNI != \"\" || fallbackSNI != \"\") {\n\t\t\tsrv.TLSConnPolicies = append(srv.TLSConnPolicies, &caddytls.ConnectionPolicy{\n\t\t\t\tDefaultSNI:  defaultSNI,\n\t\t\t\tFallbackSNI: fallbackSNI,\n\t\t\t})\n\t\t}\n\n\t\t// tidy things up a bit\n\t\tsrv.TLSConnPolicies, err = consolidateConnPolicies(srv.TLSConnPolicies)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"consolidating TLS connection policies for server %d: %v\", i, err)\n\t\t}\n\t\tsrv.Routes = consolidateRoutes(srv.Routes)\n\n\t\tservers[fmt.Sprintf(\"srv%d\", i)] = srv\n\t}\n\n\tif err := applyServerOptions(servers, options, warnings); err != nil {\n\t\treturn nil, fmt.Errorf(\"applying global server options: %v\", err)\n\t}\n\n\treturn servers, nil\n}\n\nfunc detectConflictingSchemes(srv *caddyhttp.Server, serverBlocks []serverBlock, options map[string]any) error {\n\thttpPort := strconv.Itoa(caddyhttp.DefaultHTTPPort)\n\tif hp, ok := options[\"http_port\"].(int); ok {\n\t\thttpPort = strconv.Itoa(hp)\n\t}\n\thttpsPort := strconv.Itoa(caddyhttp.DefaultHTTPSPort)\n\tif hsp, ok := options[\"https_port\"].(int); ok {\n\t\thttpsPort = strconv.Itoa(hsp)\n\t}\n\n\tvar httpOrHTTPS string\n\tcheckAndSetHTTP := func(addr Address) error {\n\t\tif httpOrHTTPS == \"HTTPS\" {\n\t\t\terrMsg := fmt.Errorf(\"server listening on %v is configured for HTTPS and cannot natively multiplex HTTP and HTTPS: %s\",\n\t\t\t\tsrv.Listen, addr.Original)\n\t\t\tif addr.Scheme == \"\" && addr.Host == \"\" {\n\t\t\t\terrMsg = fmt.Errorf(\"%s (try specifying https:// in the address)\", errMsg)\n\t\t\t}\n\t\t\treturn errMsg\n\t\t}\n\t\tif len(srv.TLSConnPolicies) > 0 {\n\t\t\t// any connection policies created for an HTTP server\n\t\t\t// is a logical conflict, as it would enable HTTPS\n\t\t\treturn fmt.Errorf(\"server listening on %v is HTTP, but attempts to configure TLS 
connection policies\", srv.Listen)\n\t\t}\n\t\thttpOrHTTPS = \"HTTP\"\n\t\treturn nil\n\t}\n\tcheckAndSetHTTPS := func(addr Address) error {\n\t\tif httpOrHTTPS == \"HTTP\" {\n\t\t\treturn fmt.Errorf(\"server listening on %v is configured for HTTP and cannot natively multiplex HTTP and HTTPS: %s\",\n\t\t\t\tsrv.Listen, addr.Original)\n\t\t}\n\t\thttpOrHTTPS = \"HTTPS\"\n\t\treturn nil\n\t}\n\n\tfor _, sblock := range serverBlocks {\n\t\tfor _, addr := range sblock.parsedKeys {\n\t\t\tif addr.Scheme == \"http\" || addr.Port == httpPort {\n\t\t\t\tif err := checkAndSetHTTP(addr); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if addr.Scheme == \"https\" || addr.Port == httpsPort || len(srv.TLSConnPolicies) > 0 {\n\t\t\t\tif err := checkAndSetHTTPS(addr); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else if addr.Host == \"\" {\n\t\t\t\tif err := checkAndSetHTTP(addr); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// consolidateConnPolicies sorts any catch-all policy to the end, removes empty TLS connection\n// policies, and combines equivalent ones for a cleaner overall output.\nfunc consolidateConnPolicies(cps caddytls.ConnectionPolicies) (caddytls.ConnectionPolicies, error) {\n\t// catch-all policies (those without any matcher) should be at the\n\t// end, otherwise it nullifies any more specific policies\n\tsort.SliceStable(cps, func(i, j int) bool {\n\t\treturn cps[j].MatchersRaw == nil && cps[i].MatchersRaw != nil\n\t})\n\n\tfor i := 0; i < len(cps); i++ {\n\t\t// compare it to the others\n\t\tfor j := 0; j < len(cps); j++ {\n\t\t\tif j == i {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// if they're exactly equal in every way, just keep one of them\n\t\t\tif reflect.DeepEqual(cps[i], cps[j]) {\n\t\t\t\tcps = slices.Delete(cps, j, j+1)\n\t\t\t\ti--\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\t// as a special case, if there are adjacent TLS conn policies that are identical except\n\t\t\t// by their matchers, and the 
matchers are specifically just ServerName (\"sni\") matchers\n\t\t\t// (by far the most common), we can combine them into a single policy\n\t\t\tif i == j-1 && len(cps[i].MatchersRaw) == 1 && len(cps[j].MatchersRaw) == 1 {\n\t\t\t\tif iSNIMatcherJSON, ok := cps[i].MatchersRaw[\"sni\"]; ok {\n\t\t\t\t\tif jSNIMatcherJSON, ok := cps[j].MatchersRaw[\"sni\"]; ok {\n\t\t\t\t\t\t// position of policies and the matcher criteria check out; if settings are\n\t\t\t\t\t\t// the same, then we can combine the policies; we have to unmarshal and\n\t\t\t\t\t\t// remarshal the matchers though\n\t\t\t\t\t\tif cps[i].SettingsEqual(*cps[j]) {\n\t\t\t\t\t\t\tvar iSNIMatcher caddytls.MatchServerName\n\t\t\t\t\t\t\tif err := json.Unmarshal(iSNIMatcherJSON, &iSNIMatcher); err == nil {\n\t\t\t\t\t\t\t\tvar jSNIMatcher caddytls.MatchServerName\n\t\t\t\t\t\t\t\tif err := json.Unmarshal(jSNIMatcherJSON, &jSNIMatcher); err == nil {\n\t\t\t\t\t\t\t\t\tiSNIMatcher = append(iSNIMatcher, jSNIMatcher...)\n\t\t\t\t\t\t\t\t\tcps[i].MatchersRaw[\"sni\"], err = json.Marshal(iSNIMatcher)\n\t\t\t\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\t\t\t\treturn nil, fmt.Errorf(\"recombining SNI matchers: %v\", err)\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tcps = slices.Delete(cps, j, j+1)\n\t\t\t\t\t\t\t\t\ti--\n\t\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// if they have the same matcher, try to reconcile each field: either they must\n\t\t\t// be identical, or we have to be able to combine them safely\n\t\t\tif reflect.DeepEqual(cps[i].MatchersRaw, cps[j].MatchersRaw) {\n\t\t\t\tif len(cps[i].ALPN) > 0 &&\n\t\t\t\t\tlen(cps[j].ALPN) > 0 &&\n\t\t\t\t\t!reflect.DeepEqual(cps[i].ALPN, cps[j].ALPN) {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting ALPN: %v vs. 
%v\",\n\t\t\t\t\t\tcps[i].ALPN, cps[j].ALPN)\n\t\t\t\t}\n\t\t\t\tif len(cps[i].CipherSuites) > 0 &&\n\t\t\t\t\tlen(cps[j].CipherSuites) > 0 &&\n\t\t\t\t\t!reflect.DeepEqual(cps[i].CipherSuites, cps[j].CipherSuites) {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting cipher suites: %v vs. %v\",\n\t\t\t\t\t\tcps[i].CipherSuites, cps[j].CipherSuites)\n\t\t\t\t}\n\t\t\t\tif cps[i].ClientAuthentication == nil &&\n\t\t\t\t\tcps[j].ClientAuthentication != nil &&\n\t\t\t\t\t!reflect.DeepEqual(cps[i].ClientAuthentication, cps[j].ClientAuthentication) {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting client auth configuration: %+v vs. %+v\",\n\t\t\t\t\t\tcps[i].ClientAuthentication, cps[j].ClientAuthentication)\n\t\t\t\t}\n\t\t\t\tif len(cps[i].Curves) > 0 &&\n\t\t\t\t\tlen(cps[j].Curves) > 0 &&\n\t\t\t\t\t!reflect.DeepEqual(cps[i].Curves, cps[j].Curves) {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting curves: %v vs. %v\",\n\t\t\t\t\t\tcps[i].Curves, cps[j].Curves)\n\t\t\t\t}\n\t\t\t\tif cps[i].DefaultSNI != \"\" &&\n\t\t\t\t\tcps[j].DefaultSNI != \"\" &&\n\t\t\t\t\tcps[i].DefaultSNI != cps[j].DefaultSNI {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting default SNI: %s vs. %s\",\n\t\t\t\t\t\tcps[i].DefaultSNI, cps[j].DefaultSNI)\n\t\t\t\t}\n\t\t\t\tif cps[i].FallbackSNI != \"\" &&\n\t\t\t\t\tcps[j].FallbackSNI != \"\" &&\n\t\t\t\t\tcps[i].FallbackSNI != cps[j].FallbackSNI {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting fallback SNI: %s vs. %s\",\n\t\t\t\t\t\tcps[i].FallbackSNI, cps[j].FallbackSNI)\n\t\t\t\t}\n\t\t\t\tif cps[i].ProtocolMin != \"\" &&\n\t\t\t\t\tcps[j].ProtocolMin != \"\" &&\n\t\t\t\t\tcps[i].ProtocolMin != cps[j].ProtocolMin {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting min protocol: %s vs. 
%s\",\n\t\t\t\t\t\tcps[i].ProtocolMin, cps[j].ProtocolMin)\n\t\t\t\t}\n\t\t\t\tif cps[i].ProtocolMax != \"\" &&\n\t\t\t\t\tcps[j].ProtocolMax != \"\" &&\n\t\t\t\t\tcps[i].ProtocolMax != cps[j].ProtocolMax {\n\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting max protocol: %s vs. %s\",\n\t\t\t\t\t\tcps[i].ProtocolMax, cps[j].ProtocolMax)\n\t\t\t\t}\n\t\t\t\tif cps[i].CertSelection != nil && cps[j].CertSelection != nil {\n\t\t\t\t\t// merging fields other than AnyTag is not implemented\n\t\t\t\t\tif !reflect.DeepEqual(cps[i].CertSelection.SerialNumber, cps[j].CertSelection.SerialNumber) ||\n\t\t\t\t\t\t!reflect.DeepEqual(cps[i].CertSelection.SubjectOrganization, cps[j].CertSelection.SubjectOrganization) ||\n\t\t\t\t\t\tcps[i].CertSelection.PublicKeyAlgorithm != cps[j].CertSelection.PublicKeyAlgorithm ||\n\t\t\t\t\t\t!reflect.DeepEqual(cps[i].CertSelection.AllTags, cps[j].CertSelection.AllTags) {\n\t\t\t\t\t\treturn nil, fmt.Errorf(\"two policies with same match criteria have conflicting cert selections: %+v vs. 
%+v\",\n\t\t\t\t\t\t\tcps[i].CertSelection, cps[j].CertSelection)\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// by now we've decided that we can merge the two -- we'll keep i and drop j\n\n\t\t\t\tif len(cps[i].ALPN) == 0 && len(cps[j].ALPN) > 0 {\n\t\t\t\t\tcps[i].ALPN = cps[j].ALPN\n\t\t\t\t}\n\t\t\t\tif len(cps[i].CipherSuites) == 0 && len(cps[j].CipherSuites) > 0 {\n\t\t\t\t\tcps[i].CipherSuites = cps[j].CipherSuites\n\t\t\t\t}\n\t\t\t\tif cps[i].ClientAuthentication == nil && cps[j].ClientAuthentication != nil {\n\t\t\t\t\tcps[i].ClientAuthentication = cps[j].ClientAuthentication\n\t\t\t\t}\n\t\t\t\tif len(cps[i].Curves) == 0 && len(cps[j].Curves) > 0 {\n\t\t\t\t\tcps[i].Curves = cps[j].Curves\n\t\t\t\t}\n\t\t\t\tif cps[i].DefaultSNI == \"\" && cps[j].DefaultSNI != \"\" {\n\t\t\t\t\tcps[i].DefaultSNI = cps[j].DefaultSNI\n\t\t\t\t}\n\t\t\t\tif cps[i].FallbackSNI == \"\" && cps[j].FallbackSNI != \"\" {\n\t\t\t\t\tcps[i].FallbackSNI = cps[j].FallbackSNI\n\t\t\t\t}\n\t\t\t\tif cps[i].ProtocolMin == \"\" && cps[j].ProtocolMin != \"\" {\n\t\t\t\t\tcps[i].ProtocolMin = cps[j].ProtocolMin\n\t\t\t\t}\n\t\t\t\tif cps[i].ProtocolMax == \"\" && cps[j].ProtocolMax != \"\" {\n\t\t\t\t\tcps[i].ProtocolMax = cps[j].ProtocolMax\n\t\t\t\t}\n\n\t\t\t\tif cps[i].CertSelection == nil && cps[j].CertSelection != nil {\n\t\t\t\t\t// if j is the only one with a policy, move it over to i\n\t\t\t\t\tcps[i].CertSelection = cps[j].CertSelection\n\t\t\t\t} else if cps[i].CertSelection != nil && cps[j].CertSelection != nil {\n\t\t\t\t\t// if both have one, then combine AnyTag\n\t\t\t\t\tfor _, tag := range cps[j].CertSelection.AnyTag {\n\t\t\t\t\t\tif !slices.Contains(cps[i].CertSelection.AnyTag, tag) {\n\t\t\t\t\t\t\tcps[i].CertSelection.AnyTag = append(cps[i].CertSelection.AnyTag, tag)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tcps = slices.Delete(cps, j, j+1)\n\t\t\t\ti--\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\treturn cps, nil\n}\n\n// appendSubrouteToRouteList appends the routes in 
subroute\n// to the routeList, optionally qualified by matchers.\nfunc appendSubrouteToRouteList(routeList caddyhttp.RouteList,\n\tsubroute *caddyhttp.Subroute,\n\tmatcherSetsEnc []caddy.ModuleMap,\n\tp sbAddrAssociation,\n\twarnings *[]caddyconfig.Warning,\n) caddyhttp.RouteList {\n\t// nothing to do if... there's nothing to do\n\tif len(matcherSetsEnc) == 0 && len(subroute.Routes) == 0 && subroute.Errors == nil {\n\t\treturn routeList\n\t}\n\n\t// No need to wrap the handlers in a subroute if this is the only server block\n\t// and there is no matcher for it (doing so would produce unnecessarily nested\n\t// JSON), *unless* there is a host matcher within this site block; if so, then\n\t// we still need to wrap in a subroute because otherwise the host matcher from\n\t// the inside of the site block would be a top-level host matcher, which is\n\t// subject to auto-HTTPS (cert management), and using a host matcher within\n\t// a site block is a valid, common pattern for excluding domains from cert\n\t// management, leading to unexpected behavior; see issue #5124.\n\twrapInSubroute := true\n\tif len(matcherSetsEnc) == 0 && len(p.serverBlocks) == 1 {\n\t\tvar hasHostMatcher bool\n\touter:\n\t\tfor _, route := range subroute.Routes {\n\t\t\tfor _, ms := range route.MatcherSetsRaw {\n\t\t\t\tfor matcherName := range ms {\n\t\t\t\t\tif matcherName == \"host\" {\n\t\t\t\t\t\thasHostMatcher = true\n\t\t\t\t\t\tbreak outer\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\twrapInSubroute = hasHostMatcher\n\t}\n\n\tif wrapInSubroute {\n\t\troute := caddyhttp.Route{\n\t\t\t// the semantics of a site block in the Caddyfile dictate\n\t\t\t// that only the first matching one is evaluated, since\n\t\t\t// site blocks do not cascade nor inherit\n\t\t\tTerminal: true,\n\t\t}\n\t\tif len(matcherSetsEnc) > 0 {\n\t\t\troute.MatcherSetsRaw = matcherSetsEnc\n\t\t}\n\t\tif len(subroute.Routes) > 0 || subroute.Errors != nil {\n\t\t\troute.HandlersRaw = 
[]json.RawMessage{\n\t\t\t\tcaddyconfig.JSONModuleObject(subroute, \"handler\", \"subroute\", warnings),\n\t\t\t}\n\t\t}\n\t\tif len(route.MatcherSetsRaw) > 0 || len(route.HandlersRaw) > 0 {\n\t\t\trouteList = append(routeList, route)\n\t\t}\n\t} else {\n\t\trouteList = append(routeList, subroute.Routes...)\n\t}\n\n\treturn routeList\n}\n\n// buildSubroute turns the config values, which are expected to be routes\n// into a clean and orderly subroute that has all the routes within it.\nfunc buildSubroute(routes []ConfigValue, groupCounter counter, needsSorting bool) (*caddyhttp.Subroute, error) {\n\tif needsSorting {\n\t\tfor _, val := range routes {\n\t\t\tif !slices.Contains(directiveOrder, val.directive) {\n\t\t\t\treturn nil, fmt.Errorf(\"directive '%s' is not an ordered HTTP handler, so it cannot be used here - try placing within a route block or using the order global option\", val.directive)\n\t\t\t}\n\t\t}\n\n\t\tsortRoutes(routes)\n\t}\n\n\tsubroute := new(caddyhttp.Subroute)\n\n\t// some directives are mutually exclusive (only first matching\n\t// instance should be evaluated); this is done by putting their\n\t// routes in the same group\n\tmutuallyExclusiveDirs := map[string]*struct {\n\t\tcount     int\n\t\tgroupName string\n\t}{\n\t\t// as a special case, group rewrite directives so that they are mutually exclusive;\n\t\t// this means that only the first matching rewrite will be evaluated, and that's\n\t\t// probably a good thing, since there should never be a need to do more than one\n\t\t// rewrite (I think?), and cascading rewrites smell bad... imagine these rewrites:\n\t\t//     rewrite /docs/json/* /docs/json/index.html\n\t\t//     rewrite /docs/*      /docs/index.html\n\t\t// (We use this on the Caddy website, or at least we did once.) 
The first rewrite's\n\t\t// result is also matched by the second rewrite, making the first rewrite pointless.\n\t\t// See issue #2959.\n\t\t\"rewrite\": {},\n\n\t\t// handle blocks are also mutually exclusive by definition\n\t\t\"handle\": {},\n\n\t\t// root just sets a variable, so if it was not mutually exclusive, intersecting\n\t\t// root directives would overwrite previously-matched ones; they should not cascade\n\t\t\"root\": {},\n\t}\n\n\t// we need to deterministically loop over each of these directives\n\t// in order to keep the group numbers consistent\n\tkeys := make([]string, 0, len(mutuallyExclusiveDirs))\n\tfor k := range mutuallyExclusiveDirs {\n\t\tkeys = append(keys, k)\n\t}\n\tsort.Strings(keys)\n\n\tfor _, meDir := range keys {\n\t\tinfo := mutuallyExclusiveDirs[meDir]\n\n\t\t// see how many instances of the directive there are\n\t\tfor _, r := range routes {\n\t\t\tif r.directive == meDir {\n\t\t\t\tinfo.count++\n\t\t\t\tif info.count > 1 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t// if there is more than one, put them in a group\n\t\t// (special case: \"rewrite\" directive must always be in\n\t\t// its own group--even if there is only one--because we\n\t\t// do not want a rewrite to be consolidated into other\n\t\t// adjacent routes that happen to have the same matcher,\n\t\t// see caddyserver/caddy#3108 - because the implied\n\t\t// intent of rewrite is to do an internal redirect,\n\t\t// we can't assume that the request will continue to\n\t\t// match the same matcher; anyway, giving a route a\n\t\t// unique group name should keep it from consolidating)\n\t\tif info.count > 1 || meDir == \"rewrite\" {\n\t\t\tinfo.groupName = groupCounter.nextGroup()\n\t\t}\n\t}\n\n\t// add all the routes piled in from directives\n\tfor _, r := range routes {\n\t\t// put this route into a group if it is mutually exclusive\n\t\tif info, ok := mutuallyExclusiveDirs[r.directive]; ok {\n\t\t\troute := r.Value.(caddyhttp.Route)\n\t\t\troute.Group = 
info.groupName\n\t\t\tr.Value = route\n\t\t}\n\n\t\tswitch route := r.Value.(type) {\n\t\tcase caddyhttp.Subroute:\n\t\t\t// if a route-class config value is actually a Subroute handler\n\t\t\t// with nothing but a list of routes, then it is the intention\n\t\t\t// of the directive to keep these handlers together and in this\n\t\t\t// same order, but not necessarily in a subroute (if it wanted\n\t\t\t// to keep them in a subroute, the directive would have returned\n\t\t\t// a route with a Subroute as its handler); this is useful to\n\t\t\t// keep multiple handlers/routes together and in the same order\n\t\t\t// so that the sorting procedure we did above doesn't reorder them\n\t\t\tif route.Errors != nil {\n\t\t\t\t// if error handlers are also set, this is confusing; it's\n\t\t\t\t// probably supposed to be wrapped in a Route and encoded\n\t\t\t\t// as a regular handler route... programmer error.\n\t\t\t\tpanic(\"found subroute with more than just routes; perhaps it should have been wrapped in a route?\")\n\t\t\t}\n\t\t\tsubroute.Routes = append(subroute.Routes, route.Routes...)\n\t\tcase caddyhttp.Route:\n\t\t\tsubroute.Routes = append(subroute.Routes, route)\n\t\t}\n\t}\n\n\tsubroute.Routes = consolidateRoutes(subroute.Routes)\n\n\treturn subroute, nil\n}\n\n// normalizeDirectiveName ensures directives that should be sorted\n// at the same level are named the same before sorting happens.\nfunc normalizeDirectiveName(directive string) string {\n\t// As a special case, we want \"handle_path\" to be sorted\n\t// at the same level as \"handle\", so we force them to use\n\t// the same directive name after their parsing is complete.\n\t// See https://github.com/caddyserver/caddy/issues/3675#issuecomment-678042377\n\tif directive == \"handle_path\" {\n\t\tdirective = \"handle\"\n\t}\n\treturn directive\n}\n\n// consolidateRoutes combines routes with the same properties\n// (same matchers, same Terminal and Group settings) for a\n// cleaner overall output.\nfunc 
consolidateRoutes(routes caddyhttp.RouteList) caddyhttp.RouteList {\n\tfor i := 0; i < len(routes)-1; i++ {\n\t\tif reflect.DeepEqual(routes[i].MatcherSetsRaw, routes[i+1].MatcherSetsRaw) &&\n\t\t\troutes[i].Terminal == routes[i+1].Terminal &&\n\t\t\troutes[i].Group == routes[i+1].Group {\n\t\t\t// keep the handlers in the same order, then splice out repetitive route\n\t\t\troutes[i].HandlersRaw = append(routes[i].HandlersRaw, routes[i+1].HandlersRaw...)\n\t\t\troutes = append(routes[:i+1], routes[i+2:]...)\n\t\t\ti--\n\t\t}\n\t}\n\treturn routes\n}\n\nfunc matcherSetFromMatcherToken(\n\ttkn caddyfile.Token,\n\tmatcherDefs map[string]caddy.ModuleMap,\n\twarnings *[]caddyconfig.Warning,\n) (caddy.ModuleMap, bool, error) {\n\t// matcher tokens can be wildcards, simple path matchers,\n\t// or refer to a pre-defined matcher by some name\n\tif tkn.Text == \"*\" {\n\t\t// match all requests == no matchers, so nothing to do\n\t\treturn nil, true, nil\n\t}\n\n\t// convenient way to specify a single path match\n\tif strings.HasPrefix(tkn.Text, \"/\") {\n\t\treturn caddy.ModuleMap{\n\t\t\t\"path\": caddyconfig.JSON(caddyhttp.MatchPath{tkn.Text}, warnings),\n\t\t}, true, nil\n\t}\n\n\t// pre-defined matcher\n\tif strings.HasPrefix(tkn.Text, matcherPrefix) {\n\t\tm, ok := matcherDefs[tkn.Text]\n\t\tif !ok {\n\t\t\treturn nil, false, fmt.Errorf(\"unrecognized matcher name: %+v\", tkn.Text)\n\t\t}\n\t\treturn m, true, nil\n\t}\n\n\treturn nil, false, nil\n}\n\nfunc (st *ServerType) compileEncodedMatcherSets(sblock serverBlock) ([]caddy.ModuleMap, error) {\n\ttype hostPathPair struct {\n\t\thostm caddyhttp.MatchHost\n\t\tpathm caddyhttp.MatchPath\n\t}\n\n\t// keep routes with common host and path matchers together\n\tvar matcherPairs []*hostPathPair\n\n\tvar catchAllHosts bool\n\tfor _, addr := range sblock.parsedKeys {\n\t\t// choose a matcher pair that should be shared by this\n\t\t// server block; if none exists yet, create one\n\t\tvar chosenMatcherPair *hostPathPair\n\t\tfor 
_, mp := range matcherPairs {\n\t\t\tif (len(mp.pathm) == 0 && addr.Path == \"\") ||\n\t\t\t\t(len(mp.pathm) == 1 && mp.pathm[0] == addr.Path) {\n\t\t\t\tchosenMatcherPair = mp\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif chosenMatcherPair == nil {\n\t\t\tchosenMatcherPair = new(hostPathPair)\n\t\t\tif addr.Path != \"\" {\n\t\t\t\tchosenMatcherPair.pathm = []string{addr.Path}\n\t\t\t}\n\t\t\tmatcherPairs = append(matcherPairs, chosenMatcherPair)\n\t\t}\n\n\t\t// if one of the keys has no host (i.e. is a catch-all for\n\t\t// any hostname), then we need to null out the host matcher\n\t\t// entirely so that it matches all hosts\n\t\tif addr.Host == \"\" && !catchAllHosts {\n\t\t\tchosenMatcherPair.hostm = nil\n\t\t\tcatchAllHosts = true\n\t\t}\n\t\tif catchAllHosts {\n\t\t\tcontinue\n\t\t}\n\n\t\t// add this server block's keys to the matcher\n\t\t// pair if it doesn't already exist\n\t\tif addr.Host != \"\" && !slices.Contains(chosenMatcherPair.hostm, addr.Host) {\n\t\t\tchosenMatcherPair.hostm = append(chosenMatcherPair.hostm, addr.Host)\n\t\t}\n\t}\n\n\t// iterate each pairing of host and path matchers and\n\t// put them into a map for JSON encoding\n\tvar matcherSets []map[string]caddyhttp.RequestMatcherWithError\n\tfor _, mp := range matcherPairs {\n\t\tmatcherSet := make(map[string]caddyhttp.RequestMatcherWithError)\n\t\tif len(mp.hostm) > 0 {\n\t\t\tmatcherSet[\"host\"] = mp.hostm\n\t\t}\n\t\tif len(mp.pathm) > 0 {\n\t\t\tmatcherSet[\"path\"] = mp.pathm\n\t\t}\n\t\tif len(matcherSet) > 0 {\n\t\t\tmatcherSets = append(matcherSets, matcherSet)\n\t\t}\n\t}\n\n\t// finally, encode each of the matcher sets\n\tmatcherSetsEnc := make([]caddy.ModuleMap, 0, len(matcherSets))\n\tfor _, ms := range matcherSets {\n\t\tmsEncoded, err := encodeMatcherSet(ms)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"server block %v: %v\", sblock.block.Keys, err)\n\t\t}\n\t\tmatcherSetsEnc = append(matcherSetsEnc, msEncoded)\n\t}\n\n\treturn matcherSetsEnc, nil\n}\n\nfunc 
parseMatcherDefinitions(d *caddyfile.Dispenser, matchers map[string]caddy.ModuleMap) error {\n\td.Next() // advance to the first token\n\n\t// this is the \"name\" for \"named matchers\"\n\tdefinitionName := d.Val()\n\n\tif _, ok := matchers[definitionName]; ok {\n\t\treturn fmt.Errorf(\"matcher is defined more than once: %s\", definitionName)\n\t}\n\tmatchers[definitionName] = make(caddy.ModuleMap)\n\n\t// given a matcher name and the tokens following it, parse\n\t// the tokens as a matcher module and record it\n\tmakeMatcher := func(matcherName string, tokens []caddyfile.Token) error {\n\t\t// create a new dispenser from the tokens\n\t\tdispenser := caddyfile.NewDispenser(tokens)\n\n\t\t// set the matcher name (without @) in the dispenser context so\n\t\t// that matcher modules can access it to use it as their name\n\t\t// (e.g. regexp matchers which use the name for capture groups)\n\t\tdispenser.SetContext(caddyfile.MatcherNameCtxKey, definitionName[1:])\n\n\t\tmod, err := caddy.GetModule(\"http.matchers.\" + matcherName)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"getting matcher module '%s': %v\", matcherName, err)\n\t\t}\n\t\tunm, ok := mod.New().(caddyfile.Unmarshaler)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"matcher module '%s' is not a Caddyfile unmarshaler\", matcherName)\n\t\t}\n\t\terr = unm.UnmarshalCaddyfile(dispenser)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif rm, ok := unm.(caddyhttp.RequestMatcherWithError); ok {\n\t\t\tmatchers[definitionName][matcherName] = caddyconfig.JSON(rm, nil)\n\t\t\treturn nil\n\t\t}\n\t\t// nolint:staticcheck\n\t\tif rm, ok := unm.(caddyhttp.RequestMatcher); ok {\n\t\t\tmatchers[definitionName][matcherName] = caddyconfig.JSON(rm, nil)\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"matcher module '%s' is not a request matcher\", matcherName)\n\t}\n\n\t// if the next token is quoted, we can assume it's not a matcher name\n\t// and that it's probably an 'expression' matcher\n\tif d.NextArg() {\n\t\tif 
d.Token().Quoted() {\n\t\t\t// since it was missing the matcher name, we insert a token\n\t\t\t// in front of the expression token itself; we use Clone() to\n\t\t\t// make the new token to keep the same the import location as\n\t\t\t// the next token, if this is within a snippet or imported file.\n\t\t\t// see https://github.com/caddyserver/caddy/issues/6287\n\t\t\texpressionToken := d.Token().Clone()\n\t\t\texpressionToken.Text = \"expression\"\n\t\t\terr := makeMatcher(\"expression\", []caddyfile.Token{expressionToken, d.Token()})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\n\t\t// if it wasn't quoted, then we need to rewind after calling\n\t\t// d.NextArg() so the below properly grabs the matcher name\n\t\td.Prev()\n\t}\n\n\t// in case there are multiple instances of the same matcher, concatenate\n\t// their tokens (we expect that UnmarshalCaddyfile should be able to\n\t// handle more than one segment); otherwise, we'd overwrite other\n\t// instances of the matcher in this set\n\ttokensByMatcherName := make(map[string][]caddyfile.Token)\n\tfor nesting := d.Nesting(); d.NextArg() || d.NextBlock(nesting); {\n\t\tmatcherName := d.Val()\n\t\ttokensByMatcherName[matcherName] = append(tokensByMatcherName[matcherName], d.NextSegment()...)\n\t}\n\tfor matcherName, tokens := range tokensByMatcherName {\n\t\terr := makeMatcher(matcherName, tokens)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc encodeMatcherSet(matchers map[string]caddyhttp.RequestMatcherWithError) (caddy.ModuleMap, error) {\n\tmsEncoded := make(caddy.ModuleMap)\n\tfor matcherName, val := range matchers {\n\t\tjsonBytes, err := json.Marshal(val)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"marshaling matcher set %#v: %v\", matchers, err)\n\t\t}\n\t\tmsEncoded[matcherName] = jsonBytes\n\t}\n\treturn msEncoded, nil\n}\n\n// WasReplacedPlaceholderShorthand checks if a token string was\n// likely a replaced shorthand of the known Caddyfile 
placeholder\n// replacement outputs. Useful to prevent some user-defined map\n// output destinations from overlapping with one of the\n// predefined shorthands.\nfunc WasReplacedPlaceholderShorthand(token string) string {\n\tprev := \"\"\n\tfor i, item := range placeholderShorthands() {\n\t\t// only look at every 2nd item, which is the replacement\n\t\tif i%2 == 0 {\n\t\t\tprev = item\n\t\t\tcontinue\n\t\t}\n\t\tif strings.Trim(token, \"{}\") == strings.Trim(item, \"{}\") {\n\t\t\t// we return the original shorthand so it\n\t\t\t// can be used for an error message\n\t\t\treturn prev\n\t\t}\n\t}\n\treturn \"\"\n}\n\n// tryInt tries to convert val to an integer. If it fails,\n// it downgrades the error to a warning and returns 0.\nfunc tryInt(val any, warnings *[]caddyconfig.Warning) int {\n\tintVal, ok := val.(int)\n\tif val != nil && !ok && warnings != nil {\n\t\t*warnings = append(*warnings, caddyconfig.Warning{Message: \"not an integer type\"})\n\t}\n\treturn intVal\n}\n\nfunc tryString(val any, warnings *[]caddyconfig.Warning) string {\n\tstringVal, ok := val.(string)\n\tif val != nil && !ok && warnings != nil {\n\t\t*warnings = append(*warnings, caddyconfig.Warning{Message: \"not a string type\"})\n\t}\n\treturn stringVal\n}\n\nfunc tryDuration(val any, warnings *[]caddyconfig.Warning) caddy.Duration {\n\tdurationVal, ok := val.(caddy.Duration)\n\tif val != nil && !ok && warnings != nil {\n\t\t*warnings = append(*warnings, caddyconfig.Warning{Message: \"not a duration type\"})\n\t}\n\treturn durationVal\n}\n\n// listenersUseAnyPortOtherThan returns true if there are any\n// listeners in addresses that use a port which is not otherPort.\n// Mostly borrowed from unexported method in caddyhttp package.\nfunc listenersUseAnyPortOtherThan(addresses []string, otherPort string) bool {\n\totherPortInt, err := strconv.Atoi(otherPort)\n\tif err != nil {\n\t\treturn false\n\t}\n\tfor _, lnAddr := range addresses {\n\t\tladdrs, err := 
caddy.ParseNetworkAddress(lnAddr)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tif uint(otherPortInt) > laddrs.EndPort || uint(otherPortInt) < laddrs.StartPort {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc mapContains[K comparable, V any](m map[K]V, keys []K) bool {\n\tif len(m) == 0 || len(keys) == 0 {\n\t\treturn false\n\t}\n\tfor _, key := range keys {\n\t\tif _, ok := m[key]; ok {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// specificity returns len(s) minus any wildcards (*) and\n// placeholders ({...}). Basically, it's a length count\n// that penalizes the use of wildcards and placeholders.\n// This is useful for comparing hostnames and paths.\n// However, wildcards in paths are not a sure answer to\n// the question of specificity. For example,\n// '*.example.com' is clearly less specific than\n// 'a.example.com', but is '/a' more or less specific\n// than '/a*'?\nfunc specificity(s string) int {\n\tl := len(s) - strings.Count(s, \"*\")\n\tfor len(s) > 0 {\n\t\tstart := strings.Index(s, \"{\")\n\t\tif start < 0 {\n\t\t\treturn l\n\t\t}\n\t\tend := strings.Index(s[start:], \"}\") + start + 1\n\t\tif end <= start {\n\t\t\treturn l\n\t\t}\n\t\tl -= end - start\n\t\ts = s[end:]\n\t}\n\treturn l\n}\n\ntype counter struct {\n\tn *int\n}\n\nfunc (c counter) nextGroup() string {\n\tname := fmt.Sprintf(\"group%d\", *c.n)\n\t*c.n++\n\treturn name\n}\n\ntype namedCustomLog struct {\n\tname       string\n\thostnames  []string\n\tlog        *caddy.CustomLog\n\tnoHostname bool\n}\n\n// addressWithProtocols associates a listen address with\n// the protocols to serve it with\ntype addressWithProtocols struct {\n\taddress   string\n\tprotocols []string\n}\n\n// sbAddrAssociation is a mapping from a list of\n// addresses with protocols, and a list of server\n// blocks that are served on those addresses.\ntype sbAddrAssociation struct {\n\taddressesWithProtocols []addressWithProtocols\n\tserverBlocks           []serverBlock\n}\n\nconst 
(\n\tmatcherPrefix = \"@\"\n\tnamedRouteKey = \"named_route\"\n)\n\n// Interface guard\nvar _ caddyfile.ServerType = (*ServerType)(nil)\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/httptype_test.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"encoding/json\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc TestMatcherSyntax(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput       string\n\t\texpectError bool\n\t}{\n\t\t{\n\t\t\tinput: `http://localhost\n\t\t\t@debug {\n\t\t\t\tquery showdebug=1\n\t\t\t}\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `http://localhost\n\t\t\t@debug {\n\t\t\t\tquery bad format\n\t\t\t}\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `http://localhost\n\t\t\t@debug {\n\t\t\t\tnot {\n\t\t\t\t\tpath /somepath*\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `http://localhost\n\t\t\t@debug {\n\t\t\t\tnot path /somepath*\n\t\t\t}\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `http://localhost\n\t\t\t@debug not path /somepath*\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `@matcher {\n\t\t\t\tpath /matcher-not-allowed/outside-of-site-block/*\n\t\t\t}\n\t\t\thttp://localhost\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t} {\n\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\t_, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif err != nil != tc.expectError {\n\t\t\tt.Errorf(\"Test %d error expectation failed Expected: %v, got %s\", i, tc.expectError, err)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestSpecificity(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput  string\n\t\texpect int\n\t}{\n\t\t{\"\", 0},\n\t\t{\"*\", 0},\n\t\t{\"*.*\", 1},\n\t\t{\"{placeholder}\", 0},\n\t\t{\"/{placeholder}\", 1},\n\t\t{\"foo\", 3},\n\t\t{\"example.com\", 11},\n\t\t{\"a.example.com\", 13},\n\t\t{\"*.example.com\", 12},\n\t\t{\"/foo\", 4},\n\t\t{\"/foo*\", 4},\n\t\t{\"{placeholder}.example.com\", 12},\n\t\t{\"{placeholder.example.com\", 24},\n\t\t{\"}.\", 2},\n\t\t{\"}{\", 
2},\n\t\t{\"{}\", 0},\n\t\t{\"{{{}}\", 1},\n\t} {\n\t\tactual := specificity(tc.input)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d (%s): Expected %d but got %d\", i, tc.input, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestGlobalOptions(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput       string\n\t\texpectError bool\n\t}{\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\temail test@example.com\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin off\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin 127.0.0.1:2020\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin {\n\t\t\t\t\t\tdisabled false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin {\n\t\t\t\t\t\tenforce_origin\n\t\t\t\t\t\torigins 192.168.1.1:2020 127.0.0.1:2020\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin 127.0.0.1:2020 {\n\t\t\t\t\t\tenforce_origin\n\t\t\t\t\t\torigins 192.168.1.1:2020 127.0.0.1:2020\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin 192.168.1.1:2020 127.0.0.1:2020 {\n\t\t\t\t\t\tenforce_origin\n\t\t\t\t\t\torigins 192.168.1.1:2020 127.0.0.1:2020\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t{\n\t\t\t\t\tadmin off {\n\t\t\t\t\t\tenforce_origin\n\t\t\t\t\t\torigins 192.168.1.1:2020 127.0.0.1:2020\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t:80\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t} {\n\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\t_, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif err != nil != tc.expectError 
{\n\t\t\tt.Errorf(\"Test %d error expectation failed Expected: %v, got %s\", i, tc.expectError, err)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestDefaultSNIWithoutHTTPS(t *testing.T) {\n\tcaddyfileStr := `{\n\t\tdefault_sni my-sni.com\n\t}\n\texample.com {\n\t}`\n\n\tadapter := caddyfile.Adapter{\n\t\tServerType: ServerType{},\n\t}\n\n\tresult, _, err := adapter.Adapt([]byte(caddyfileStr), nil)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to adapt Caddyfile: %v\", err)\n\t}\n\n\tvar config struct {\n\t\tApps struct {\n\t\t\tHTTP struct {\n\t\t\t\tServers map[string]*caddyhttp.Server `json:\"servers\"`\n\t\t\t} `json:\"http\"`\n\t\t} `json:\"apps\"`\n\t}\n\n\tif err := json.Unmarshal(result, &config); err != nil {\n\t\tt.Fatalf(\"Failed to unmarshal JSON config: %v\", err)\n\t}\n\n\tserver, ok := config.Apps.HTTP.Servers[\"srv0\"]\n\tif !ok {\n\t\tt.Fatalf(\"Expected server 'srv0' to be created\")\n\t}\n\n\tif len(server.TLSConnPolicies) == 0 {\n\t\tt.Fatalf(\"Expected TLS connection policies to be generated, got none\")\n\t}\n\n\tfound := false\n\tfor _, policy := range server.TLSConnPolicies {\n\t\tif policy.DefaultSNI == \"my-sni.com\" {\n\t\t\tfound = true\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif !found {\n\t\tt.Errorf(\"Expected default_sni 'my-sni.com' in TLS connection policies, but it was missing. Generated JSON: %s\", string(result))\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/options.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"slices\"\n\t\"strconv\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/libdns/libdns\"\n\t\"github.com/mholt/acmez/v3/acme\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tRegisterGlobalOption(\"debug\", parseOptTrue)\n\tRegisterGlobalOption(\"http_port\", parseOptHTTPPort)\n\tRegisterGlobalOption(\"https_port\", parseOptHTTPSPort)\n\tRegisterGlobalOption(\"default_bind\", parseOptDefaultBind)\n\tRegisterGlobalOption(\"grace_period\", parseOptDuration)\n\tRegisterGlobalOption(\"shutdown_delay\", parseOptDuration)\n\tRegisterGlobalOption(\"default_sni\", parseOptSingleString)\n\tRegisterGlobalOption(\"fallback_sni\", parseOptSingleString)\n\tRegisterGlobalOption(\"order\", parseOptOrder)\n\tRegisterGlobalOption(\"storage\", parseOptStorage)\n\tRegisterGlobalOption(\"storage_check\", parseStorageCheck)\n\tRegisterGlobalOption(\"storage_clean_interval\", parseStorageCleanInterval)\n\tRegisterGlobalOption(\"renew_interval\", parseOptDuration)\n\tRegisterGlobalOption(\"ocsp_interval\", parseOptDuration)\n\tRegisterGlobalOption(\"acme_ca\", 
parseOptSingleString)\n\tRegisterGlobalOption(\"acme_ca_root\", parseOptSingleString)\n\tRegisterGlobalOption(\"acme_dns\", parseOptDNS)\n\tRegisterGlobalOption(\"acme_eab\", parseOptACMEEAB)\n\tRegisterGlobalOption(\"cert_issuer\", parseOptCertIssuer)\n\tRegisterGlobalOption(\"skip_install_trust\", parseOptTrue)\n\tRegisterGlobalOption(\"email\", parseOptSingleString)\n\tRegisterGlobalOption(\"admin\", parseOptAdmin)\n\tRegisterGlobalOption(\"on_demand_tls\", parseOptOnDemand)\n\tRegisterGlobalOption(\"local_certs\", parseOptTrue)\n\tRegisterGlobalOption(\"key_type\", parseOptSingleString)\n\tRegisterGlobalOption(\"auto_https\", parseOptAutoHTTPS)\n\tRegisterGlobalOption(\"metrics\", parseMetricsOptions)\n\tRegisterGlobalOption(\"servers\", parseServerOptions)\n\tRegisterGlobalOption(\"ocsp_stapling\", parseOCSPStaplingOptions)\n\tRegisterGlobalOption(\"cert_lifetime\", parseOptDuration)\n\tRegisterGlobalOption(\"log\", parseLogOptions)\n\tRegisterGlobalOption(\"preferred_chains\", parseOptPreferredChains)\n\tRegisterGlobalOption(\"persist_config\", parseOptPersistConfig)\n\tRegisterGlobalOption(\"dns\", parseOptDNS)\n\tRegisterGlobalOption(\"tls_resolvers\", parseOptTLSResolvers)\n\tRegisterGlobalOption(\"ech\", parseOptECH)\n\tRegisterGlobalOption(\"renewal_window_ratio\", parseOptRenewalWindowRatio)\n}\n\nfunc parseOptTrue(d *caddyfile.Dispenser, _ any) (any, error) { return true, nil }\n\nfunc parseOptHTTPPort(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tvar httpPort int\n\tvar httpPortStr string\n\tif !d.AllArgs(&httpPortStr) {\n\t\treturn 0, d.ArgErr()\n\t}\n\tvar err error\n\thttpPort, err = strconv.Atoi(httpPortStr)\n\tif err != nil {\n\t\treturn 0, d.Errf(\"converting port '%s' to integer value: %v\", httpPortStr, err)\n\t}\n\treturn httpPort, nil\n}\n\nfunc parseOptHTTPSPort(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tvar httpsPort int\n\tvar httpsPortStr string\n\tif 
!d.AllArgs(&httpsPortStr) {\n\t\treturn 0, d.ArgErr()\n\t}\n\tvar err error\n\thttpsPort, err = strconv.Atoi(httpsPortStr)\n\tif err != nil {\n\t\treturn 0, d.Errf(\"converting port '%s' to integer value: %v\", httpsPortStr, err)\n\t}\n\treturn httpsPort, nil\n}\n\nfunc parseOptOrder(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\n\t// get directive name\n\tif !d.Next() {\n\t\treturn nil, d.ArgErr()\n\t}\n\tdirName := d.Val()\n\tif _, ok := registeredDirectives[dirName]; !ok {\n\t\treturn nil, d.Errf(\"%s is not a registered directive\", dirName)\n\t}\n\n\t// get positional token\n\tif !d.Next() {\n\t\treturn nil, d.ArgErr()\n\t}\n\tpos := Positional(d.Val())\n\n\t// if directive already had an order, drop it\n\tnewOrder := slices.DeleteFunc(directiveOrder, func(d string) bool {\n\t\treturn d == dirName\n\t})\n\n\t// act on the positional; if it's First or Last, we're done right away\n\tswitch pos {\n\tcase First:\n\t\tnewOrder = append([]string{dirName}, newOrder...)\n\t\tif d.NextArg() {\n\t\t\treturn nil, d.ArgErr()\n\t\t}\n\t\tdirectiveOrder = newOrder\n\t\treturn newOrder, nil\n\n\tcase Last:\n\t\tnewOrder = append(newOrder, dirName)\n\t\tif d.NextArg() {\n\t\t\treturn nil, d.ArgErr()\n\t\t}\n\t\tdirectiveOrder = newOrder\n\t\treturn newOrder, nil\n\n\t// if it's Before or After, continue\n\tcase Before:\n\tcase After:\n\n\tdefault:\n\t\treturn nil, d.Errf(\"unknown positional '%s'\", pos)\n\t}\n\n\t// get name of other directive\n\tif !d.NextArg() {\n\t\treturn nil, d.ArgErr()\n\t}\n\totherDir := d.Val()\n\tif d.NextArg() {\n\t\treturn nil, d.ArgErr()\n\t}\n\n\t// get the position of the target directive\n\ttargetIndex := slices.Index(newOrder, otherDir)\n\tif targetIndex == -1 {\n\t\treturn nil, d.Errf(\"directive '%s' not found\", otherDir)\n\t}\n\t// if we're inserting after, we need to increment the index to go after\n\tif pos == After {\n\t\ttargetIndex++\n\t}\n\t// insert the directive into the new order\n\tnewOrder = 
slices.Insert(newOrder, targetIndex, dirName)\n\n\tdirectiveOrder = newOrder\n\n\treturn newOrder, nil\n}\n\nfunc parseOptStorage(d *caddyfile.Dispenser, _ any) (any, error) {\n\tif !d.Next() { // consume option name\n\t\treturn nil, d.ArgErr()\n\t}\n\tif !d.Next() { // get storage module name\n\t\treturn nil, d.ArgErr()\n\t}\n\tmodID := \"caddy.storage.\" + d.Val()\n\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tstorage, ok := unm.(caddy.StorageConverter)\n\tif !ok {\n\t\treturn nil, d.Errf(\"module %s is not a caddy.StorageConverter\", modID)\n\t}\n\treturn storage, nil\n}\n\nfunc parseStorageCheck(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tif !d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tval := d.Val()\n\tif d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tif val != \"off\" {\n\t\treturn \"\", d.Errf(\"storage_check must be 'off'\")\n\t}\n\treturn val, nil\n}\n\nfunc parseStorageCleanInterval(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tif !d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tval := d.Val()\n\tif d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tif val == \"off\" {\n\t\treturn false, nil\n\t}\n\tdur, err := caddy.ParseDuration(d.Val())\n\tif err != nil {\n\t\treturn nil, d.Errf(\"failed to parse storage_clean_interval, must be a duration or 'off' %w\", err)\n\t}\n\treturn caddy.Duration(dur), nil\n}\n\nfunc parseOptDuration(d *caddyfile.Dispenser, _ any) (any, error) {\n\tif !d.Next() { // consume option name\n\t\treturn nil, d.ArgErr()\n\t}\n\tif !d.Next() { // get duration value\n\t\treturn nil, d.ArgErr()\n\t}\n\tdur, err := caddy.ParseDuration(d.Val())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn caddy.Duration(dur), nil\n}\n\nfunc parseOptACMEEAB(d *caddyfile.Dispenser, _ any) (any, error) {\n\teab := new(acme.EAB)\n\td.Next() // consume option name\n\tif d.NextArg() {\n\t\treturn nil, 
d.ArgErr()\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"key_id\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\teab.KeyID = d.Val()\n\n\t\tcase \"mac_key\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\teab.MACKey = d.Val()\n\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unrecognized parameter '%s'\", d.Val())\n\t\t}\n\t}\n\treturn eab, nil\n}\n\nfunc parseOptCertIssuer(d *caddyfile.Dispenser, existing any) (any, error) {\n\td.Next() // consume option name\n\n\tvar issuers []certmagic.Issuer\n\tif existing != nil {\n\t\tissuers = existing.([]certmagic.Issuer)\n\t}\n\n\t// get issuer module name\n\tif !d.Next() {\n\t\treturn nil, d.ArgErr()\n\t}\n\tmodID := \"tls.issuance.\" + d.Val()\n\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tiss, ok := unm.(certmagic.Issuer)\n\tif !ok {\n\t\treturn nil, d.Errf(\"module %s (%T) is not a certmagic.Issuer\", modID, unm)\n\t}\n\tissuers = append(issuers, iss)\n\treturn issuers, nil\n}\n\nfunc parseOptSingleString(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tif !d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tval := d.Val()\n\tif d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\treturn val, nil\n}\n\nfunc parseOptTLSResolvers(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tresolvers := d.RemainingArgs()\n\tif len(resolvers) == 0 {\n\t\treturn nil, d.ArgErr()\n\t}\n\treturn resolvers, nil\n}\n\nfunc parseOptDefaultBind(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\n\tvar addresses, protocols []string\n\taddresses = d.RemainingArgs()\n\n\tif len(addresses) == 0 {\n\t\taddresses = append(addresses, \"\")\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"protocols\":\n\t\t\tprotocols = d.RemainingArgs()\n\t\t\tif len(protocols) == 0 {\n\t\t\t\treturn nil, d.Errf(\"protocols requires one or more 
arguments\")\n\t\t\t}\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unknown subdirective: %s\", d.Val())\n\t\t}\n\t}\n\n\treturn []ConfigValue{{Class: \"bind\", Value: addressesWithProtocols{\n\t\taddresses: addresses,\n\t\tprotocols: protocols,\n\t}}}, nil\n}\n\nfunc parseOptAdmin(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\n\tadminCfg := new(caddy.AdminConfig)\n\tif d.NextArg() {\n\t\tlistenAddress := d.Val()\n\t\tif listenAddress == \"off\" {\n\t\t\tadminCfg.Disabled = true\n\t\t\tif d.Next() { // Do not accept any remaining options including block\n\t\t\t\treturn nil, d.Err(\"No more option is allowed after turning off admin config\")\n\t\t\t}\n\t\t} else {\n\t\t\tadminCfg.Listen = listenAddress\n\t\t\tif d.NextArg() { // At most 1 arg is allowed\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t}\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"enforce_origin\":\n\t\t\tadminCfg.EnforceOrigin = true\n\n\t\tcase \"origins\":\n\t\t\tadminCfg.Origins = d.RemainingArgs()\n\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unrecognized parameter '%s'\", d.Val())\n\t\t}\n\t}\n\tif adminCfg.Listen == \"\" && !adminCfg.Disabled {\n\t\tadminCfg.Listen = caddy.DefaultAdminListen\n\t}\n\treturn adminCfg, nil\n}\n\nfunc parseOptOnDemand(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tif d.NextArg() {\n\t\treturn nil, d.ArgErr()\n\t}\n\n\tvar ond *caddytls.OnDemandConfig\n\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"ask\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tif ond == nil {\n\t\t\t\tond = new(caddytls.OnDemandConfig)\n\t\t\t}\n\t\t\tif ond.PermissionRaw != nil {\n\t\t\t\treturn nil, d.Err(\"on-demand TLS permission module (or 'ask') already specified\")\n\t\t\t}\n\t\t\tperm := caddytls.PermissionByHTTP{Endpoint: d.Val()}\n\t\t\tond.PermissionRaw = caddyconfig.JSONModuleObject(perm, \"module\", \"http\", nil)\n\n\t\tcase 
\"permission\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tif ond == nil {\n\t\t\t\tond = new(caddytls.OnDemandConfig)\n\t\t\t}\n\t\t\tif ond.PermissionRaw != nil {\n\t\t\t\treturn nil, d.Err(\"on-demand TLS permission module (or 'ask') already specified\")\n\t\t\t}\n\t\t\tmodName := d.Val()\n\t\t\tmodID := \"tls.permission.\" + modName\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tperm, ok := unm.(caddytls.OnDemandPermission)\n\t\t\tif !ok {\n\t\t\t\treturn nil, d.Errf(\"module %s (%T) is not an on-demand TLS permission module\", modID, unm)\n\t\t\t}\n\t\t\tond.PermissionRaw = caddyconfig.JSONModuleObject(perm, \"module\", modName, nil)\n\n\t\tcase \"interval\":\n\t\t\treturn nil, d.Errf(\"the on_demand_tls 'interval' option is no longer supported, remove it from your config\")\n\n\t\tcase \"burst\":\n\t\t\treturn nil, d.Errf(\"the on_demand_tls 'burst' option is no longer supported, remove it from your config\")\n\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unrecognized parameter '%s'\", d.Val())\n\t\t}\n\t}\n\tif ond == nil {\n\t\treturn nil, d.Err(\"expected at least one config parameter for on_demand_tls\")\n\t}\n\treturn ond, nil\n}\n\nfunc parseOptPersistConfig(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tif !d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tval := d.Val()\n\tif d.Next() {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tif val != \"off\" {\n\t\treturn \"\", d.Errf(\"persist_config must be 'off'\")\n\t}\n\treturn val, nil\n}\n\nfunc parseOptAutoHTTPS(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tval := d.RemainingArgs()\n\tif len(val) == 0 {\n\t\treturn \"\", d.ArgErr()\n\t}\n\tfor _, v := range val {\n\t\tswitch v {\n\t\tcase \"off\":\n\t\tcase \"disable_redirects\":\n\t\tcase \"disable_certs\":\n\t\tcase \"ignore_loaded_certs\":\n\t\tdefault:\n\t\t\treturn \"\", 
d.Errf(\"auto_https must be one of 'off', 'disable_redirects', 'disable_certs', or 'ignore_loaded_certs'\")\n\t\t}\n\t}\n\treturn val, nil\n}\n\nfunc unmarshalCaddyfileMetricsOptions(d *caddyfile.Dispenser) (any, error) {\n\td.Next() // consume option name\n\tmetrics := new(caddyhttp.Metrics)\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"per_host\":\n\t\t\tmetrics.PerHost = true\n\t\tcase \"observe_catchall_hosts\":\n\t\t\tmetrics.ObserveCatchallHosts = true\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unrecognized servers option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn metrics, nil\n}\n\nfunc parseMetricsOptions(d *caddyfile.Dispenser, _ any) (any, error) {\n\treturn unmarshalCaddyfileMetricsOptions(d)\n}\n\nfunc parseServerOptions(d *caddyfile.Dispenser, _ any) (any, error) {\n\treturn unmarshalCaddyfileServerOptions(d)\n}\n\nfunc parseOCSPStaplingOptions(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tvar val string\n\tif !d.AllArgs(&val) {\n\t\treturn nil, d.ArgErr()\n\t}\n\tif val != \"off\" {\n\t\treturn nil, d.Errf(\"invalid argument '%s'\", val)\n\t}\n\treturn certmagic.OCSPConfig{\n\t\tDisableStapling: val == \"off\",\n\t}, nil\n}\n\n// parseLogOptions parses the global log option. 
Syntax:\n//\n//\tlog [name] {\n//\t    output  <writer_module> ...\n//\t    format  <encoder_module> ...\n//\t    level   <level>\n//\t    include <namespaces...>\n//\t    exclude <namespaces...>\n//\t}\n//\n// When the name argument is unspecified, this directive modifies the default\n// logger.\nfunc parseLogOptions(d *caddyfile.Dispenser, existingVal any) (any, error) {\n\tcurrentNames := make(map[string]struct{})\n\tif existingVal != nil {\n\t\tinnerVals, ok := existingVal.([]ConfigValue)\n\t\tif !ok {\n\t\t\treturn nil, d.Errf(\"existing log values of unexpected type: %T\", existingVal)\n\t\t}\n\t\tfor _, rawVal := range innerVals {\n\t\t\tval, ok := rawVal.Value.(namedCustomLog)\n\t\t\tif !ok {\n\t\t\t\treturn nil, d.Errf(\"existing log value of unexpected type: %T\", rawVal.Value)\n\t\t\t}\n\t\t\tcurrentNames[val.name] = struct{}{}\n\t\t}\n\t}\n\n\tvar warnings []caddyconfig.Warning\n\t// Call out to the same parser that handles server-specific log configuration.\n\tconfigValues, err := parseLogHelper(\n\t\tHelper{\n\t\t\tDispenser: d,\n\t\t\twarnings:  &warnings,\n\t\t},\n\t\tcurrentNames,\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(warnings) > 0 {\n\t\treturn nil, d.Errf(\"warnings found in parsing global log options: %+v\", warnings)\n\t}\n\n\treturn configValues, nil\n}\n\nfunc parseOptPreferredChains(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next()\n\treturn caddytls.ParseCaddyfilePreferredChainsOptions(d)\n}\n\nfunc parseOptDNS(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\toptName := d.Val()\n\n\t// get DNS module name\n\tif !d.Next() {\n\t\t// this is allowed if this is the \"acme_dns\" option since it may refer to the globally-configured \"dns\" option's value\n\t\tif optName == \"acme_dns\" {\n\t\t\treturn nil, nil\n\t\t}\n\t\treturn nil, d.ArgErr()\n\t}\n\tmodID := \"dns.providers.\" + d.Val()\n\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\tif err != nil {\n\t\treturn nil, 
err\n\t}\n\tswitch unm.(type) {\n\tcase libdns.RecordGetter,\n\t\tlibdns.RecordSetter,\n\t\tlibdns.RecordAppender,\n\t\tlibdns.RecordDeleter:\n\tdefault:\n\t\treturn nil, d.Errf(\"module %s (%T) is not a libdns provider\", modID, unm)\n\t}\n\treturn unm, nil\n}\n\nfunc parseOptECH(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\n\tech := new(caddytls.ECH)\n\n\tpublicNames := d.RemainingArgs()\n\tfor _, publicName := range publicNames {\n\t\tech.Configs = append(ech.Configs, caddytls.ECHConfiguration{\n\t\t\tPublicName: publicName,\n\t\t})\n\t}\n\tif len(ech.Configs) == 0 {\n\t\treturn nil, d.ArgErr()\n\t}\n\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"dns\":\n\t\t\tif !d.Next() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tproviderName := d.Val()\n\t\t\tmodID := \"dns.providers.\" + providerName\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tech.Publication = append(ech.Publication, &caddytls.ECHPublication{\n\t\t\t\tConfigs: publicNames,\n\t\t\t\tPublishersRaw: caddy.ModuleMap{\n\t\t\t\t\t\"dns\": caddyconfig.JSON(caddytls.ECHDNSPublisher{\n\t\t\t\t\t\tProviderRaw: caddyconfig.JSONModuleObject(unm, \"name\", providerName, nil),\n\t\t\t\t\t}, nil),\n\t\t\t\t},\n\t\t\t})\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"ech: unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\n\treturn ech, nil\n}\n\nfunc parseOptRenewalWindowRatio(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tif !d.Next() {\n\t\treturn 0, d.ArgErr()\n\t}\n\tval := d.Val()\n\tratio, err := strconv.ParseFloat(val, 64)\n\tif err != nil {\n\t\treturn 0, d.Errf(\"parsing renewal_window_ratio: %v\", err)\n\t}\n\tif ratio <= 0 || ratio >= 1 {\n\t\treturn 0, d.Errf(\"renewal_window_ratio must be between 0 and 1 (exclusive)\")\n\t}\n\tif d.Next() {\n\t\treturn 0, d.ArgErr()\n\t}\n\treturn ratio, nil\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/options_test.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"encoding/json\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/logging\"\n)\n\nfunc TestGlobalLogOptionSyntax(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput       string\n\t\toutput      string\n\t\texpectError bool\n\t}{\n\t\t// NOTE: Additional test cases of successful Caddyfile parsing\n\t\t// are present in: caddytest/integration/caddyfile_adapt/\n\t\t{\n\t\t\tinput: `{\n\t\t\t\tlog default\n\t\t\t}\n\t\t\t`,\n\t\t\toutput:      `{}`,\n\t\t\texpectError: false,\n\t\t},\n\t\t{\n\t\t\tinput: `{\n\t\t\t\tlog example {\n\t\t\t\t\toutput file foo.log\n\t\t\t\t}\n\t\t\t\tlog example {\n\t\t\t\t\tformat json\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t\t{\n\t\t\tinput: `{\n\t\t\t\tlog example /foo {\n\t\t\t\t\toutput file foo.log\n\t\t\t\t}\n\t\t\t}\n\t\t\t`,\n\t\t\texpectError: true,\n\t\t},\n\t} {\n\n\t\tadapter := caddyfile.Adapter{\n\t\t\tServerType: ServerType{},\n\t\t}\n\n\t\tout, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\tif err != nil != tc.expectError {\n\t\t\tt.Errorf(\"Test %d error expectation failed. Expected: %v, got %v\", i, tc.expectError, err)\n\t\t\tcontinue\n\t\t}\n\n\t\tif string(out) != tc.output {\n\t\t\tt.Errorf(\"Test %d output mismatch. Expected: %s, got %s\", i, tc.output, out)\n\t\t}\n\t}\n}\n\nfunc TestGlobalResolversOption(t *testing.T) {\n\ttests := []struct {\n\t\tname            string\n\t\tinput           string\n\t\texpectResolvers []string\n\t\texpectError     bool\n\t}{\n\t\t{\n\t\t\tname: \"single resolver\",\n\t\t\tinput: `{\n\t\t\t\ttls_resolvers 1.1.1.1\n\t\t\t}\n\t\t\texample.com {\n\t\t\t}`,\n\t\t\texpectResolvers: []string{\"1.1.1.1\"},\n\t\t\texpectError:     false,\n\t\t},\n\t\t{\n\t\t\tname: \"two resolvers\",\n\t\t\tinput: `{\n\t\t\t\ttls_resolvers 1.1.1.1 
8.8.8.8\n\t\t\t}\n\t\t\texample.com {\n\t\t\t}`,\n\t\t\texpectResolvers: []string{\"1.1.1.1\", \"8.8.8.8\"},\n\t\t\texpectError:     false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple resolvers\",\n\t\t\tinput: `{\n\t\t\t\ttls_resolvers 1.1.1.1 8.8.8.8 9.9.9.9\n\t\t\t}\n\t\t\texample.com {\n\t\t\t}`,\n\t\t\texpectResolvers: []string{\"1.1.1.1\", \"8.8.8.8\", \"9.9.9.9\"},\n\t\t\texpectError:     false,\n\t\t},\n\t\t{\n\t\t\tname: \"no resolvers specified\",\n\t\t\tinput: `{\n\t\t\t}\n\t\t\texample.com {\n\t\t\t}`,\n\t\t\texpectResolvers: nil,\n\t\t\texpectError:     false,\n\t\t},\n\t}\n\n\tfor _, tc := range tests {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tadapter := caddyfile.Adapter{\n\t\t\t\tServerType: ServerType{},\n\t\t\t}\n\n\t\t\tout, _, err := adapter.Adapt([]byte(tc.input), nil)\n\n\t\t\tif (err != nil) != tc.expectError {\n\t\t\t\tt.Errorf(\"error expectation failed. Expected error: %v, got: %v\", tc.expectError, err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif tc.expectError {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// Parse the output JSON to check resolvers\n\t\t\tvar config struct {\n\t\t\t\tApps struct {\n\t\t\t\t\tTLS *caddytls.TLS `json:\"tls\"`\n\t\t\t\t} `json:\"apps\"`\n\t\t\t}\n\n\t\t\tif err := json.Unmarshal(out, &config); err != nil {\n\t\t\t\tt.Errorf(\"failed to unmarshal output: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// Check if resolvers match expected\n\t\t\tif config.Apps.TLS == nil {\n\t\t\t\tif tc.expectResolvers != nil {\n\t\t\t\t\tt.Errorf(\"Expected TLS config with resolvers %v, but TLS config is nil\", tc.expectResolvers)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tactualResolvers := config.Apps.TLS.Resolvers\n\t\t\tif len(tc.expectResolvers) == 0 && len(actualResolvers) == 0 {\n\t\t\t\treturn // Both empty, ok\n\t\t\t}\n\t\t\tif len(actualResolvers) != len(tc.expectResolvers) {\n\t\t\t\tt.Errorf(\"Expected %d resolvers, got %d. 
Expected: %v, got: %v\", len(tc.expectResolvers), len(actualResolvers), tc.expectResolvers, actualResolvers)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tfor j, expected := range tc.expectResolvers {\n\t\t\t\tif actualResolvers[j] != expected {\n\t\t\t\t\tt.Errorf(\"Resolver %d mismatch. Expected: %s, got: %s\", j, expected, actualResolvers[j])\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/pkiapp.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"slices\"\n\t\"strconv\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n)\n\nfunc init() {\n\tRegisterGlobalOption(\"pki\", parsePKIApp)\n}\n\n// parsePKIApp parses the global pki option. 
Syntax:\n//\n//\tpki {\n//\t    ca [<id>] {\n//\t        name                    <name>\n//\t        root_cn                 <name>\n//\t        intermediate_cn         <name>\n//\t        intermediate_lifetime   <duration>\n//\t        maintenance_interval    <duration>\n//\t        renewal_window_ratio    <ratio>\n//\t        root {\n//\t            cert   <path>\n//\t            key    <path>\n//\t            format <format>\n//\t        }\n//\t        intermediate {\n//\t            cert   <path>\n//\t            key    <path>\n//\t            format <format>\n//\t        }\n//\t    }\n//\t}\n//\n// When the CA ID is unspecified, 'local' is assumed.\nfunc parsePKIApp(d *caddyfile.Dispenser, existingVal any) (any, error) {\n\td.Next() // consume app name\n\n\tpki := &caddypki.PKI{\n\t\tCAs: make(map[string]*caddypki.CA),\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"ca\":\n\t\t\tpkiCa := new(caddypki.CA)\n\t\t\tif d.NextArg() {\n\t\t\t\tpkiCa.ID = d.Val()\n\t\t\t\tif d.NextArg() {\n\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif pkiCa.ID == \"\" {\n\t\t\t\tpkiCa.ID = caddypki.DefaultCAID\n\t\t\t}\n\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tswitch d.Val() {\n\t\t\t\tcase \"name\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tpkiCa.Name = d.Val()\n\n\t\t\t\tcase \"root_cn\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tpkiCa.RootCommonName = d.Val()\n\n\t\t\t\tcase \"intermediate_cn\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tpkiCa.IntermediateCommonName = d.Val()\n\n\t\t\t\tcase \"intermediate_lifetime\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, err\n\t\t\t\t\t}\n\t\t\t\t\tpkiCa.IntermediateLifetime = 
caddy.Duration(dur)\n\n\t\t\t\tcase \"maintenance_interval\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, err\n\t\t\t\t\t}\n\t\t\t\t\tpkiCa.MaintenanceInterval = caddy.Duration(dur)\n\n\t\t\t\tcase \"renewal_window_ratio\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tratio, err := strconv.ParseFloat(d.Val(), 64)\n\t\t\t\t\tif err != nil || ratio <= 0 || ratio > 1 {\n\t\t\t\t\t\treturn nil, d.Errf(\"renewal_window_ratio must be a number in (0, 1], got %s\", d.Val())\n\t\t\t\t\t}\n\t\t\t\t\tpkiCa.RenewalWindowRatio = ratio\n\n\t\t\t\tcase \"root\":\n\t\t\t\t\tif pkiCa.Root == nil {\n\t\t\t\t\t\tpkiCa.Root = new(caddypki.KeyPair)\n\t\t\t\t\t}\n\t\t\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\t\t\tswitch d.Val() {\n\t\t\t\t\t\tcase \"cert\":\n\t\t\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpkiCa.Root.Certificate = d.Val()\n\n\t\t\t\t\t\tcase \"key\":\n\t\t\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpkiCa.Root.PrivateKey = d.Val()\n\n\t\t\t\t\t\tcase \"format\":\n\t\t\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpkiCa.Root.Format = d.Val()\n\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\treturn nil, d.Errf(\"unrecognized pki ca root option '%s'\", d.Val())\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\tcase \"intermediate\":\n\t\t\t\t\tif pkiCa.Intermediate == nil {\n\t\t\t\t\t\tpkiCa.Intermediate = new(caddypki.KeyPair)\n\t\t\t\t\t}\n\t\t\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\t\t\tswitch d.Val() {\n\t\t\t\t\t\tcase \"cert\":\n\t\t\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpkiCa.Intermediate.Certificate = d.Val()\n\n\t\t\t\t\t\tcase 
\"key\":\n\t\t\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpkiCa.Intermediate.PrivateKey = d.Val()\n\n\t\t\t\t\t\tcase \"format\":\n\t\t\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpkiCa.Intermediate.Format = d.Val()\n\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\treturn nil, d.Errf(\"unrecognized pki ca intermediate option '%s'\", d.Val())\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, d.Errf(\"unrecognized pki ca option '%s'\", d.Val())\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tpki.CAs[pkiCa.ID] = pkiCa\n\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unrecognized pki option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn pki, nil\n}\n\nfunc (st ServerType) buildPKIApp(\n\tpairings []sbAddrAssociation,\n\toptions map[string]any,\n\twarnings []caddyconfig.Warning,\n) (*caddypki.PKI, []caddyconfig.Warning, error) {\n\tskipInstallTrust := false\n\tif _, ok := options[\"skip_install_trust\"]; ok {\n\t\tskipInstallTrust = true\n\t}\n\n\t// check if auto_https is off - in that case we should not create\n\t// any PKI infrastructure even with skip_install_trust directive\n\tautoHTTPS := []string{}\n\tif ah, ok := options[\"auto_https\"].([]string); ok {\n\t\tautoHTTPS = ah\n\t}\n\tautoHTTPSOff := slices.Contains(autoHTTPS, \"off\")\n\n\tfalseBool := false\n\n\t// Load the PKI app configured via global options\n\tvar pkiApp *caddypki.PKI\n\tunwrappedPki, ok := options[\"pki\"].(*caddypki.PKI)\n\tif ok {\n\t\tpkiApp = unwrappedPki\n\t} else {\n\t\tpkiApp = &caddypki.PKI{CAs: make(map[string]*caddypki.CA)}\n\t}\n\tfor _, ca := range pkiApp.CAs {\n\t\tif skipInstallTrust {\n\t\t\tca.InstallTrust = &falseBool\n\t\t}\n\t\tpkiApp.CAs[ca.ID] = ca\n\t}\n\n\t// Add in the CAs configured via directives\n\tfor _, p := range pairings {\n\t\tfor _, sblock := range p.serverBlocks {\n\t\t\t// find all the CAs that were defined and add them to the app config\n\t\t\t// i.e. 
from any \"acme_server\" directives\n\t\t\tfor _, caCfgValue := range sblock.pile[\"pki.ca\"] {\n\t\t\t\tca := caCfgValue.Value.(*caddypki.CA)\n\t\t\t\tif skipInstallTrust {\n\t\t\t\t\tca.InstallTrust = &falseBool\n\t\t\t\t}\n\n\t\t\t\t// the CA might already exist from global options, so\n\t\t\t\t// don't overwrite it in that case\n\t\t\t\tif _, ok := pkiApp.CAs[ca.ID]; !ok {\n\t\t\t\t\tpkiApp.CAs[ca.ID] = ca\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// if there were no CAs defined in any of the servers,\n\t// and we were requested to not install trust, then\n\t// add one for the default/local CA to do so,\n\t// but only if auto_https is not completely disabled\n\tif len(pkiApp.CAs) == 0 && skipInstallTrust && !autoHTTPSOff {\n\t\tca := new(caddypki.CA)\n\t\tca.ID = caddypki.DefaultCAID\n\t\tca.InstallTrust = &falseBool\n\t\tpkiApp.CAs[ca.ID] = ca\n\t}\n\n\treturn pkiApp, warnings, nil\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/pkiapp_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"encoding/json\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc TestParsePKIApp_maintenanceIntervalAndRenewalWindowRatio(t *testing.T) {\n\tinput := `{\n\t\tpki {\n\t\t\tca local {\n\t\t\t\tmaintenance_interval 5m\n\t\t\t\trenewal_window_ratio 0.15\n\t\t\t}\n\t\t}\n\t}\n\t:8080 {\n\t}\n\t`\n\tadapter := caddyfile.Adapter{ServerType: ServerType{}}\n\tout, _, err := adapter.Adapt([]byte(input), nil)\n\tif err != nil {\n\t\tt.Fatalf(\"Adapt failed: %v\", err)\n\t}\n\n\tvar cfg struct {\n\t\tApps struct {\n\t\t\tPKI struct {\n\t\t\t\tCertificateAuthorities map[string]struct {\n\t\t\t\t\tMaintenanceInterval int64   `json:\"maintenance_interval,omitempty\"`\n\t\t\t\t\tRenewalWindowRatio  float64 `json:\"renewal_window_ratio,omitempty\"`\n\t\t\t\t} `json:\"certificate_authorities,omitempty\"`\n\t\t\t} `json:\"pki,omitempty\"`\n\t\t} `json:\"apps\"`\n\t}\n\tif err := json.Unmarshal(out, &cfg); err != nil {\n\t\tt.Fatalf(\"unmarshal config: %v\", err)\n\t}\n\n\tca, ok := cfg.Apps.PKI.CertificateAuthorities[\"local\"]\n\tif !ok {\n\t\tt.Fatal(\"expected certificate_authorities.local to exist\")\n\t}\n\twantInterval := 5 * time.Minute.Nanoseconds()\n\tif ca.MaintenanceInterval != wantInterval {\n\t\tt.Errorf(\"maintenance_interval = %d, want %d (5m)\", 
ca.MaintenanceInterval, wantInterval)\n\t}\n\tif ca.RenewalWindowRatio != 0.15 {\n\t\tt.Errorf(\"renewal_window_ratio = %v, want 0.15\", ca.RenewalWindowRatio)\n\t}\n}\n\nfunc TestParsePKIApp_renewalWindowRatioInvalid(t *testing.T) {\n\tinput := `{\n\t\tpki {\n\t\t\tca local {\n\t\t\t\trenewal_window_ratio 1.5\n\t\t\t}\n\t\t}\n\t}\n\t:8080 {\n\t}\n\t`\n\tadapter := caddyfile.Adapter{ServerType: ServerType{}}\n\t_, _, err := adapter.Adapt([]byte(input), nil)\n\tif err == nil {\n\t\tt.Error(\"expected error for renewal_window_ratio > 1\")\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/serveroptions.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"slices\"\n\t\"strconv\"\n\n\t\"github.com/dustin/go-humanize\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// serverOptions collects server config overrides parsed from Caddyfile global options\ntype serverOptions struct {\n\t// If set, will only apply these options to servers that contain a\n\t// listener address that matches exactly. 
If empty, will apply to all\n\t// servers that were not already matched by another serverOptions.\n\tListenerAddress string\n\n\t// These will all map 1:1 to the caddyhttp.Server struct\n\tName                  string\n\tListenerWrappersRaw   []json.RawMessage\n\tPacketConnWrappersRaw []json.RawMessage\n\tReadTimeout           caddy.Duration\n\tReadHeaderTimeout     caddy.Duration\n\tWriteTimeout          caddy.Duration\n\tIdleTimeout           caddy.Duration\n\tKeepAliveInterval     caddy.Duration\n\tKeepAliveIdle         caddy.Duration\n\tKeepAliveCount        int\n\tMaxHeaderBytes        int\n\tEnableFullDuplex      bool\n\tProtocols             []string\n\tStrictSNIHost         *bool\n\tTrustedProxiesRaw     json.RawMessage\n\tTrustedProxiesStrict  int\n\tTrustedProxiesUnix    bool\n\tClientIPHeaders       []string\n\tShouldLogCredentials  bool\n\tMetrics               *caddyhttp.Metrics\n\tTrace                 bool // TODO: EXPERIMENTAL\n\t// If set, overrides whether QUIC listeners allow 0-RTT (early data).\n\t// If nil, the default behavior is used (currently allowed).\n\tAllow0RTT *bool\n}\n\nfunc unmarshalCaddyfileServerOptions(d *caddyfile.Dispenser) (any, error) {\n\td.Next() // consume option name\n\n\tserverOpts := serverOptions{}\n\tif d.NextArg() {\n\t\tserverOpts.ListenerAddress = d.Val()\n\t\tif d.NextArg() {\n\t\t\treturn nil, d.ArgErr()\n\t\t}\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"name\":\n\t\t\tif serverOpts.ListenerAddress == \"\" {\n\t\t\t\treturn nil, d.Errf(\"cannot set a name for a server without a listener address\")\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tserverOpts.Name = d.Val()\n\n\t\tcase \"listener_wrappers\":\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tmodID := \"caddy.listeners.\" + d.Val()\n\t\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tlistenerWrapper, ok := 
unm.(caddy.ListenerWrapper)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, fmt.Errorf(\"module %s (%T) is not a listener wrapper\", modID, unm)\n\t\t\t\t}\n\t\t\t\tjsonListenerWrapper := caddyconfig.JSONModuleObject(\n\t\t\t\t\tlistenerWrapper,\n\t\t\t\t\t\"wrapper\",\n\t\t\t\t\tlistenerWrapper.(caddy.Module).CaddyModule().ID.Name(),\n\t\t\t\t\tnil,\n\t\t\t\t)\n\t\t\t\tserverOpts.ListenerWrappersRaw = append(serverOpts.ListenerWrappersRaw, jsonListenerWrapper)\n\t\t\t}\n\n\t\tcase \"packet_conn_wrappers\":\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tmodID := \"caddy.packetconns.\" + d.Val()\n\t\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tpacketConnWrapper, ok := unm.(caddy.PacketConnWrapper)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, fmt.Errorf(\"module %s (%T) is not a packet conn wrapper\", modID, unm)\n\t\t\t\t}\n\t\t\t\tjsonPacketConnWrapper := caddyconfig.JSONModuleObject(\n\t\t\t\t\tpacketConnWrapper,\n\t\t\t\t\t\"wrapper\",\n\t\t\t\t\tpacketConnWrapper.(caddy.Module).CaddyModule().ID.Name(),\n\t\t\t\t\tnil,\n\t\t\t\t)\n\t\t\t\tserverOpts.PacketConnWrappersRaw = append(serverOpts.PacketConnWrappersRaw, jsonPacketConnWrapper)\n\t\t\t}\n\n\t\tcase \"timeouts\":\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tswitch d.Val() {\n\t\t\t\tcase \"read_body\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"parsing read_body timeout duration: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tserverOpts.ReadTimeout = caddy.Duration(dur)\n\n\t\t\t\tcase \"read_header\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"parsing read_header timeout duration: %v\", 
err)\n\t\t\t\t\t}\n\t\t\t\t\tserverOpts.ReadHeaderTimeout = caddy.Duration(dur)\n\n\t\t\t\tcase \"write\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"parsing write timeout duration: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tserverOpts.WriteTimeout = caddy.Duration(dur)\n\n\t\t\t\tcase \"idle\":\n\t\t\t\t\tif !d.NextArg() {\n\t\t\t\t\t\treturn nil, d.ArgErr()\n\t\t\t\t\t}\n\t\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, d.Errf(\"parsing idle timeout duration: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tserverOpts.IdleTimeout = caddy.Duration(dur)\n\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, d.Errf(\"unrecognized timeouts option '%s'\", d.Val())\n\t\t\t\t}\n\t\t\t}\n\n\t\tcase \"keepalive_interval\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, d.Errf(\"parsing keepalive interval duration: %v\", err)\n\t\t\t}\n\t\t\tserverOpts.KeepAliveInterval = caddy.Duration(dur)\n\n\t\tcase \"keepalive_idle\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, d.Errf(\"parsing keepalive idle duration: %v\", err)\n\t\t\t}\n\t\t\tserverOpts.KeepAliveIdle = caddy.Duration(dur)\n\n\t\tcase \"keepalive_count\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tcnt, err := strconv.ParseInt(d.Val(), 10, 32)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, d.Errf(\"parsing keepalive count int: %v\", err)\n\t\t\t}\n\t\t\tserverOpts.KeepAliveCount = int(cnt)\n\n\t\tcase \"max_header_size\":\n\t\t\tvar sizeStr string\n\t\t\tif !d.AllArgs(&sizeStr) {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tsize, err := humanize.ParseBytes(sizeStr)\n\t\t\tif err != nil {\n\t\t\t\treturn 
nil, d.Errf(\"parsing max_header_size: %v\", err)\n\t\t\t}\n\t\t\tserverOpts.MaxHeaderBytes = int(size)\n\n\t\tcase \"enable_full_duplex\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tserverOpts.EnableFullDuplex = true\n\n\t\tcase \"log_credentials\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tserverOpts.ShouldLogCredentials = true\n\n\t\tcase \"protocols\":\n\t\t\tprotos := d.RemainingArgs()\n\t\t\tfor _, proto := range protos {\n\t\t\t\tif proto != \"h1\" && proto != \"h2\" && proto != \"h2c\" && proto != \"h3\" {\n\t\t\t\t\treturn nil, d.Errf(\"unknown protocol '%s': expected h1, h2, h2c, or h3\", proto)\n\t\t\t\t}\n\t\t\t\tif slices.Contains(serverOpts.Protocols, proto) {\n\t\t\t\t\treturn nil, d.Errf(\"protocol %s specified more than once\", proto)\n\t\t\t\t}\n\t\t\t\tserverOpts.Protocols = append(serverOpts.Protocols, proto)\n\t\t\t}\n\t\t\tif nesting := d.Nesting(); d.NextBlock(nesting) {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\n\t\tcase \"strict_sni_host\":\n\t\t\tif d.NextArg() && d.Val() != \"insecure_off\" && d.Val() != \"on\" {\n\t\t\t\treturn nil, d.Errf(\"strict_sni_host only supports 'on' or 'insecure_off', got '%s'\", d.Val())\n\t\t\t}\n\t\t\tboolVal := true\n\t\t\tif d.Val() == \"insecure_off\" {\n\t\t\t\tboolVal = false\n\t\t\t}\n\t\t\tserverOpts.StrictSNIHost = &boolVal\n\n\t\tcase \"trusted_proxies\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.Err(\"trusted_proxies expects an IP range source module name as its first argument\")\n\t\t\t}\n\t\t\tmodID := \"http.ip_sources.\" + d.Val()\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tsource, ok := unm.(caddyhttp.IPRangeSource)\n\t\t\tif !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"module %s (%T) is not an IP range source\", modID, unm)\n\t\t\t}\n\t\t\tjsonSource := 
caddyconfig.JSONModuleObject(\n\t\t\t\tsource,\n\t\t\t\t\"source\",\n\t\t\t\tsource.(caddy.Module).CaddyModule().ID.Name(),\n\t\t\t\tnil,\n\t\t\t)\n\t\t\tserverOpts.TrustedProxiesRaw = jsonSource\n\n\t\tcase \"trusted_proxies_strict\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tserverOpts.TrustedProxiesStrict = 1\n\n\t\tcase \"trusted_proxies_unix\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tserverOpts.TrustedProxiesUnix = true\n\n\t\tcase \"client_ip_headers\":\n\t\t\theaders := d.RemainingArgs()\n\t\t\tfor _, header := range headers {\n\t\t\t\tif slices.Contains(serverOpts.ClientIPHeaders, header) {\n\t\t\t\t\treturn nil, d.Errf(\"client IP header %s specified more than once\", header)\n\t\t\t\t}\n\t\t\t\tserverOpts.ClientIPHeaders = append(serverOpts.ClientIPHeaders, header)\n\t\t\t}\n\t\t\tif nesting := d.Nesting(); d.NextBlock(nesting) {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\n\t\tcase \"metrics\":\n\t\t\tcaddy.Log().Warn(\"The nested 'metrics' option inside `servers` is deprecated and will be removed in the next major version. 
Use the global 'metrics' option instead.\")\n\t\t\tserverOpts.Metrics = new(caddyhttp.Metrics)\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tswitch d.Val() {\n\t\t\t\tcase \"per_host\":\n\t\t\t\t\tserverOpts.Metrics.PerHost = true\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, d.Errf(\"unrecognized metrics option '%s'\", d.Val())\n\t\t\t\t}\n\t\t\t}\n\n\t\tcase \"trace\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tserverOpts.Trace = true\n\n\t\tcase \"0rtt\":\n\t\t\t// only supports \"off\" for now\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tif d.Val() != \"off\" {\n\t\t\t\treturn nil, d.Errf(\"unsupported 0rtt argument '%s' (only 'off' is supported)\", d.Val())\n\t\t\t}\n\t\t\tboolVal := false\n\t\t\tserverOpts.Allow0RTT = &boolVal\n\n\t\tdefault:\n\t\t\treturn nil, d.Errf(\"unrecognized servers option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn serverOpts, nil\n}\n\n// applyServerOptions sets the server options on the appropriate servers\nfunc applyServerOptions(\n\tservers map[string]*caddyhttp.Server,\n\toptions map[string]any,\n\t_ *[]caddyconfig.Warning,\n) error {\n\tserverOpts, ok := options[\"servers\"].([]serverOptions)\n\tif !ok {\n\t\treturn nil\n\t}\n\n\t// check for duplicate names, which would clobber the config\n\texistingNames := map[string]bool{}\n\tfor _, opts := range serverOpts {\n\t\tif opts.Name == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tif existingNames[opts.Name] {\n\t\t\treturn fmt.Errorf(\"cannot use duplicate server name '%s'\", opts.Name)\n\t\t}\n\t\texistingNames[opts.Name] = true\n\t}\n\n\t// collect the server name overrides\n\tnameReplacements := map[string]string{}\n\n\tfor key, server := range servers {\n\t\t// find the options that apply to this server\n\t\toptsIndex := slices.IndexFunc(serverOpts, func(s serverOptions) bool {\n\t\t\treturn s.ListenerAddress == \"\" || slices.Contains(server.Listen, s.ListenerAddress)\n\t\t})\n\n\t\t// if none apply, then move to the 
next server\n\t\tif optsIndex == -1 {\n\t\t\tcontinue\n\t\t}\n\t\topts := serverOpts[optsIndex]\n\n\t\t// set all the options\n\t\tserver.ListenerWrappersRaw = opts.ListenerWrappersRaw\n\t\tserver.PacketConnWrappersRaw = opts.PacketConnWrappersRaw\n\t\tserver.ReadTimeout = opts.ReadTimeout\n\t\tserver.ReadHeaderTimeout = opts.ReadHeaderTimeout\n\t\tserver.WriteTimeout = opts.WriteTimeout\n\t\tserver.IdleTimeout = opts.IdleTimeout\n\t\tserver.KeepAliveInterval = opts.KeepAliveInterval\n\t\tserver.KeepAliveIdle = opts.KeepAliveIdle\n\t\tserver.KeepAliveCount = opts.KeepAliveCount\n\t\tserver.MaxHeaderBytes = opts.MaxHeaderBytes\n\t\tserver.EnableFullDuplex = opts.EnableFullDuplex\n\t\tserver.Protocols = opts.Protocols\n\t\tserver.StrictSNIHost = opts.StrictSNIHost\n\t\tserver.TrustedProxiesRaw = opts.TrustedProxiesRaw\n\t\tserver.ClientIPHeaders = opts.ClientIPHeaders\n\t\tserver.TrustedProxiesStrict = opts.TrustedProxiesStrict\n\t\tserver.TrustedProxiesUnix = opts.TrustedProxiesUnix\n\t\tserver.Metrics = opts.Metrics\n\t\tserver.Allow0RTT = opts.Allow0RTT\n\t\tif opts.ShouldLogCredentials {\n\t\t\tif server.Logs == nil {\n\t\t\t\tserver.Logs = new(caddyhttp.ServerLogConfig)\n\t\t\t}\n\t\t\tserver.Logs.ShouldLogCredentials = opts.ShouldLogCredentials\n\t\t}\n\t\tif opts.Trace {\n\t\t\t// TODO: THIS IS EXPERIMENTAL (MAY 2024)\n\t\t\tif server.Logs == nil {\n\t\t\t\tserver.Logs = new(caddyhttp.ServerLogConfig)\n\t\t\t}\n\t\t\tserver.Logs.Trace = opts.Trace\n\t\t}\n\n\t\tif opts.Name != \"\" {\n\t\t\tnameReplacements[key] = opts.Name\n\t\t}\n\t}\n\n\t// rename the servers if marked to do so\n\tfor old, new := range nameReplacements {\n\t\tservers[new] = servers[old]\n\t\tdelete(servers, old)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/shorthands.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"regexp\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\ntype ComplexShorthandReplacer struct {\n\tsearch  *regexp.Regexp\n\treplace string\n}\n\ntype ShorthandReplacer struct {\n\tcomplex []ComplexShorthandReplacer\n\tsimple  *strings.Replacer\n}\n\nfunc NewShorthandReplacer() ShorthandReplacer {\n\t// replace shorthand placeholders (which are convenient\n\t// when writing a Caddyfile) with their actual placeholder\n\t// identifiers or variable names\n\treplacer := strings.NewReplacer(placeholderShorthands()...)\n\n\t// these are placeholders that take a user-defined final\n\t// parameter, but we still want to provide a shorthand\n\t// for them, so we use regexp replacements\n\tregexpReplacements := []ComplexShorthandReplacer{\n\t\t{regexp.MustCompile(`{header\\.([\\w-]*)}`), \"{http.request.header.$1}\"},\n\t\t{regexp.MustCompile(`{cookie\\.([\\w-]*)}`), \"{http.request.cookie.$1}\"},\n\t\t{regexp.MustCompile(`{labels\\.([\\w-]*)}`), \"{http.request.host.labels.$1}\"},\n\t\t{regexp.MustCompile(`{path\\.([\\w-]*)}`), \"{http.request.uri.path.$1}\"},\n\t\t{regexp.MustCompile(`{file\\.([\\w-]*)}`), \"{http.request.uri.path.file.$1}\"},\n\t\t{regexp.MustCompile(`{query\\.([\\w-]*)}`), \"{http.request.uri.query.$1}\"},\n\t\t{regexp.MustCompile(`{re\\.([\\w-\\.]*)}`), \"{http.regexp.$1}\"},\n\t\t{regexp.MustCompile(`{vars\\.([\\w-]*)}`), \"{http.vars.$1}\"},\n\t\t{regexp.MustCompile(`{rp\\.([\\w-\\.]*)}`), \"{http.reverse_proxy.$1}\"},\n\t\t{regexp.MustCompile(`{resp\\.([\\w-\\.]*)}`), \"{http.intercept.$1}\"},\n\t\t{regexp.MustCompile(`{err\\.([\\w-\\.]*)}`), \"{http.error.$1}\"},\n\t\t{regexp.MustCompile(`{file_match\\.([\\w-]*)}`), \"{http.matchers.file.$1}\"},\n\t}\n\n\treturn ShorthandReplacer{\n\t\tcomplex: regexpReplacements,\n\t\tsimple:  replacer,\n\t}\n}\n\n// placeholderShorthands returns a slice of old-new string pairs,\n// where the left of the pair is a 
placeholder shorthand that may\n// be used in the Caddyfile, and the right is the replacement.\nfunc placeholderShorthands() []string {\n\treturn []string{\n\t\t\"{host}\", \"{http.request.host}\",\n\t\t\"{hostport}\", \"{http.request.hostport}\",\n\t\t\"{port}\", \"{http.request.port}\",\n\t\t\"{orig_method}\", \"{http.request.orig_method}\",\n\t\t\"{orig_uri}\", \"{http.request.orig_uri}\",\n\t\t\"{orig_path}\", \"{http.request.orig_uri.path}\",\n\t\t\"{orig_dir}\", \"{http.request.orig_uri.path.dir}\",\n\t\t\"{orig_file}\", \"{http.request.orig_uri.path.file}\",\n\t\t\"{orig_query}\", \"{http.request.orig_uri.query}\",\n\t\t\"{orig_?query}\", \"{http.request.orig_uri.prefixed_query}\",\n\t\t\"{method}\", \"{http.request.method}\",\n\t\t\"{uri}\", \"{http.request.uri}\",\n\t\t\"{%uri}\", \"{http.request.uri_escaped}\",\n\t\t\"{path}\", \"{http.request.uri.path}\",\n\t\t\"{%path}\", \"{http.request.uri.path_escaped}\",\n\t\t\"{dir}\", \"{http.request.uri.path.dir}\",\n\t\t\"{file}\", \"{http.request.uri.path.file}\",\n\t\t\"{query}\", \"{http.request.uri.query}\",\n\t\t\"{%query}\", \"{http.request.uri.query_escaped}\",\n\t\t\"{?query}\", \"{http.request.uri.prefixed_query}\",\n\t\t\"{remote}\", \"{http.request.remote}\",\n\t\t\"{remote_host}\", \"{http.request.remote.host}\",\n\t\t\"{remote_port}\", \"{http.request.remote.port}\",\n\t\t\"{scheme}\", \"{http.request.scheme}\",\n\t\t\"{uuid}\", \"{http.request.uuid}\",\n\t\t\"{tls_cipher}\", \"{http.request.tls.cipher_suite}\",\n\t\t\"{tls_version}\", \"{http.request.tls.version}\",\n\t\t\"{tls_client_fingerprint}\", \"{http.request.tls.client.fingerprint}\",\n\t\t\"{tls_client_issuer}\", \"{http.request.tls.client.issuer}\",\n\t\t\"{tls_client_serial}\", \"{http.request.tls.client.serial}\",\n\t\t\"{tls_client_subject}\", \"{http.request.tls.client.subject}\",\n\t\t\"{tls_client_certificate_pem}\", \"{http.request.tls.client.certificate_pem}\",\n\t\t\"{tls_client_certificate_der_base64}\", 
\"{http.request.tls.client.certificate_der_base64}\",\n\t\t\"{upstream_hostport}\", \"{http.reverse_proxy.upstream.hostport}\",\n\t\t\"{client_ip}\", \"{http.vars.client_ip}\",\n\t}\n}\n\n// ApplyToSegment replaces each shorthand placeholder in the segment\n// with its full placeholder form that Caddy understands.\nfunc (s ShorthandReplacer) ApplyToSegment(segment *caddyfile.Segment) {\n\tif segment != nil {\n\t\tfor i := 0; i < len(*segment); i++ {\n\t\t\t// simple string replacements\n\t\t\t(*segment)[i].Text = s.simple.Replace((*segment)[i].Text)\n\t\t\t// complex regexp replacements\n\t\t\tfor _, r := range s.complex {\n\t\t\t\t(*segment)[i].Text = r.search.ReplaceAllString((*segment)[i].Text, r.replace)\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httpcaddyfile/testdata/import_variadic.txt",
    "content": "(t2) {\n    respond 200 {\n        body {args[:]}\n    }\n}\n\n:8082 {\n    import t2 false\n}"
  },
  {
    "path": "caddyconfig/httpcaddyfile/testdata/import_variadic_snippet.txt",
    "content": "(t1) {\n    respond 200 {\n        body {args[:]}\n    }\n}\n\n:8081 {\n    import t1 false\n}"
  },
  {
    "path": "caddyconfig/httpcaddyfile/testdata/import_variadic_with_import.txt",
    "content": "(t1) {\n    respond 200 {\n        body {args[:]}\n    }\n}\n\n:8081 {\n    import t1 false\n}\n\nimport import_variadic.txt\n\n:8083 {\n    import t2 true\n}"
  },
  {
    "path": "caddyconfig/httpcaddyfile/tlsapp.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage httpcaddyfile\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/mholt/acmez/v3/acme\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc (st ServerType) buildTLSApp(\n\tpairings []sbAddrAssociation,\n\toptions map[string]any,\n\twarnings []caddyconfig.Warning,\n) (*caddytls.TLS, []caddyconfig.Warning, error) {\n\ttlsApp := &caddytls.TLS{CertificatesRaw: make(caddy.ModuleMap)}\n\tvar certLoaders []caddytls.CertificateLoader\n\n\thttpPort := strconv.Itoa(caddyhttp.DefaultHTTPPort)\n\tif hp, ok := options[\"http_port\"].(int); ok {\n\t\thttpPort = strconv.Itoa(hp)\n\t}\n\tautoHTTPS := []string{}\n\tif ah, ok := options[\"auto_https\"].([]string); ok {\n\t\tautoHTTPS = ah\n\t}\n\n\t// find all hosts that share a server block with a hostless\n\t// key, so that they don't get forgotten/omitted by auto-HTTPS\n\t// (since they won't appear in route matchers)\n\thttpsHostsSharedWithHostlessKey := make(map[string]struct{})\n\tif !slices.Contains(autoHTTPS, \"off\") {\n\t\tfor _, pair := range pairings {\n\t\t\tfor _, sb := range 
pair.serverBlocks {\n\t\t\t\tfor _, addr := range sb.parsedKeys {\n\t\t\t\t\tif addr.Host != \"\" {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\n\t\t\t\t\t// this server block has a hostless key, now\n\t\t\t\t\t// go through and add all the hosts to the set\n\t\t\t\t\tfor _, otherAddr := range sb.parsedKeys {\n\t\t\t\t\t\tif otherAddr.Original == addr.Original {\n\t\t\t\t\t\t\tcontinue\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif otherAddr.Host != \"\" && otherAddr.Scheme != \"http\" && otherAddr.Port != httpPort {\n\t\t\t\t\t\t\thttpsHostsSharedWithHostlessKey[otherAddr.Host] = struct{}{}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// a catch-all automation policy is used as a \"default\" for all subjects that\n\t// don't have custom configuration explicitly associated with them; this\n\t// is only to add if the global settings or defaults are non-empty\n\tcatchAllAP, err := newBaseAutomationPolicy(options, warnings, false)\n\tif err != nil {\n\t\treturn nil, warnings, err\n\t}\n\tif catchAllAP != nil {\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.Policies = append(tlsApp.Automation.Policies, catchAllAP)\n\t}\n\n\tforcedAutomatedNames := make(map[string]struct{}) // explicitly configured to be automated, even if covered by a wildcard\n\n\tfor _, p := range pairings {\n\t\t// avoid setting up TLS automation policies for a server that is HTTP-only\n\t\tvar addresses []string\n\t\tfor _, addressWithProtocols := range p.addressesWithProtocols {\n\t\t\taddresses = append(addresses, addressWithProtocols.address)\n\t\t}\n\t\tif !listenersUseAnyPortOtherThan(addresses, httpPort) {\n\t\t\tcontinue\n\t\t}\n\n\t\tfor _, sblock := range p.serverBlocks {\n\t\t\t// check the scheme of all the site addresses,\n\t\t\t// skip building AP if they all had http://\n\t\t\tif sblock.isAllHTTP() {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// get values that populate an automation policy for this 
block\n\t\t\tap, err := newBaseAutomationPolicy(options, warnings, true)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, warnings, err\n\t\t\t}\n\n\t\t\tsblockHosts := sblock.hostsFromKeys(false)\n\t\t\tif len(sblockHosts) == 0 && catchAllAP != nil {\n\t\t\t\tap = catchAllAP\n\t\t\t}\n\n\t\t\t// on-demand tls\n\t\t\tif _, ok := sblock.pile[\"tls.on_demand\"]; ok {\n\t\t\t\tap.OnDemand = true\n\t\t\t}\n\n\t\t\t// collect hosts that are forced to have certs automated for their specific name\n\t\t\tif _, ok := sblock.pile[\"tls.force_automate\"]; ok {\n\t\t\t\tfor _, host := range sblockHosts {\n\t\t\t\t\tforcedAutomatedNames[host] = struct{}{}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// reuse private keys tls\n\t\t\tif _, ok := sblock.pile[\"tls.reuse_private_keys\"]; ok {\n\t\t\t\tap.ReusePrivateKeys = true\n\t\t\t}\n\n\t\t\tif keyTypeVals, ok := sblock.pile[\"tls.key_type\"]; ok {\n\t\t\t\tap.KeyType = keyTypeVals[0].Value.(string)\n\t\t\t}\n\n\t\t\tif renewalWindowRatioVals, ok := sblock.pile[\"tls.renewal_window_ratio\"]; ok {\n\t\t\t\tap.RenewalWindowRatio = renewalWindowRatioVals[0].Value.(float64)\n\t\t\t} else if globalRenewalWindowRatio, ok := options[\"renewal_window_ratio\"]; ok {\n\t\t\t\tap.RenewalWindowRatio = globalRenewalWindowRatio.(float64)\n\t\t\t}\n\n\t\t\t// certificate issuers\n\t\t\tif issuerVals, ok := sblock.pile[\"tls.cert_issuer\"]; ok {\n\t\t\t\tvar issuers []certmagic.Issuer\n\t\t\t\tfor _, issuerVal := range issuerVals {\n\t\t\t\t\tissuers = append(issuers, issuerVal.Value.(certmagic.Issuer))\n\t\t\t\t}\n\t\t\t\tif ap == catchAllAP && !reflect.DeepEqual(ap.Issuers, issuers) {\n\t\t\t\t\t// this more correctly implements an error check that was removed\n\t\t\t\t\t// below; try it with this config:\n\t\t\t\t\t//\n\t\t\t\t\t// :443 {\n\t\t\t\t\t// \tbind 127.0.0.1\n\t\t\t\t\t// }\n\t\t\t\t\t//\n\t\t\t\t\t// :443 {\n\t\t\t\t\t// \tbind ::1\n\t\t\t\t\t// \ttls {\n\t\t\t\t\t// \t\tissuer acme\n\t\t\t\t\t// \t}\n\t\t\t\t\t// }\n\t\t\t\t\treturn nil, warnings, 
fmt.Errorf(\"automation policy from site block is also default/catch-all policy because of key without hostname, and the two are in conflict: %#v != %#v\", ap.Issuers, issuers)\n\t\t\t\t}\n\t\t\t\tap.Issuers = issuers\n\t\t\t}\n\n\t\t\t// certificate managers\n\t\t\tif certManagerVals, ok := sblock.pile[\"tls.cert_manager\"]; ok {\n\t\t\t\tfor _, certManager := range certManagerVals {\n\t\t\t\t\tcertGetterName := certManager.Value.(caddy.Module).CaddyModule().ID.Name()\n\t\t\t\t\tap.ManagersRaw = append(ap.ManagersRaw, caddyconfig.JSONModuleObject(certManager.Value, \"via\", certGetterName, &warnings))\n\t\t\t\t}\n\t\t\t}\n\t\t\t// custom bind host\n\t\t\tfor _, cfgVal := range sblock.pile[\"bind\"] {\n\t\t\t\tfor _, iss := range ap.Issuers {\n\t\t\t\t\t// if an issuer was already configured and it is NOT an ACME issuer,\n\t\t\t\t\t// skip, since we intend to adjust only ACME issuers; ensure we\n\t\t\t\t\t// include any issuer that embeds/wraps an underlying ACME issuer\n\t\t\t\t\tvar acmeIssuer *caddytls.ACMEIssuer\n\t\t\t\t\tif acmeWrapper, ok := iss.(acmeCapable); ok {\n\t\t\t\t\t\tacmeIssuer = acmeWrapper.GetACMEIssuer()\n\t\t\t\t\t}\n\t\t\t\t\tif acmeIssuer == nil {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\n\t\t\t\t\t// proceed to configure the ACME issuer's bind host, without\n\t\t\t\t\t// overwriting any existing settings\n\t\t\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t\t\t}\n\t\t\t\t\tif acmeIssuer.Challenges.BindHost == \"\" {\n\t\t\t\t\t\t// only binding to one host is supported\n\t\t\t\t\t\tvar bindHost string\n\t\t\t\t\t\tif asserted, ok := cfgVal.Value.(addressesWithProtocols); ok && len(asserted.addresses) > 0 {\n\t\t\t\t\t\t\tbindHost = asserted.addresses[0]\n\t\t\t\t\t\t}\n\t\t\t\t\t\tacmeIssuer.Challenges.BindHost = bindHost\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// we used to ensure this block is allowed to create an automation policy;\n\t\t\t// doing so was forbidden if it has 
a key with no host (i.e. \":443\")\n\t\t\t// and if there is a different server block that also has a key with no\n\t\t\t// host -- since a key with no host matches any host, we need its\n\t\t\t// associated automation policy to have an empty Subjects list, i.e. no\n\t\t\t// host filter, which is indistinguishable between the two server blocks\n\t\t\t// because automation is not done in the context of a particular server...\n\t\t\t// this is an example of a poor mapping from Caddyfile to JSON but that's\n\t\t\t// the least-leaky abstraction I could figure out -- however, this check\n\t\t\t// was preventing certain listeners, like those provided by plugins, from\n\t\t\t// being used as desired (see the Tailscale listener plugin), so I removed\n\t\t\t// the check: and I think since I originally wrote the check I added a new\n\t\t\t// check above which *properly* detects this ambiguity without breaking the\n\t\t\t// listener plugin; see the check above with a commented example config\n\t\t\tif len(sblockHosts) == 0 && catchAllAP == nil {\n\t\t\t\t// this server block has a key with no hosts, but there is not yet\n\t\t\t\t// a catch-all automation policy (probably because no global options\n\t\t\t\t// were set), so this one becomes it\n\t\t\t\tcatchAllAP = ap\n\t\t\t}\n\n\t\t\thostsNotHTTP := sblock.hostsFromKeysNotHTTP(httpPort)\n\t\t\tsort.Strings(hostsNotHTTP) // solely for deterministic test results\n\n\t\t\t// associate our new automation policy with this server block's hosts\n\t\t\tap.SubjectsRaw = hostsNotHTTP\n\n\t\t\t// if a combination of public and internal names were given\n\t\t\t// for this same server block and no issuer was specified, we\n\t\t\t// need to separate them out in the automation policies so\n\t\t\t// that the internal names can use the internal issuer and\n\t\t\t// the other names can use the default/public/ACME issuer\n\t\t\tvar ap2 *caddytls.AutomationPolicy\n\t\t\tif len(ap.Issuers) == 0 {\n\t\t\t\tvar internal, external 
[]string\n\t\t\t\tfor _, s := range ap.SubjectsRaw {\n\t\t\t\t\t// do not create Issuers for Tailscale domains; they will be given a Manager instead\n\t\t\t\t\tif isTailscaleDomain(s) {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tif !certmagic.SubjectQualifiesForCert(s) {\n\t\t\t\t\t\treturn nil, warnings, fmt.Errorf(\"subject does not qualify for certificate: '%s'\", s)\n\t\t\t\t\t}\n\t\t\t\t\t// we don't use certmagic.SubjectQualifiesForPublicCert() because of one nuance:\n\t\t\t\t\t// names like *.*.tld that may not qualify for a public certificate are actually\n\t\t\t\t\t// fine when used with OnDemand, since OnDemand (currently) does not obtain\n\t\t\t\t\t// wildcards (if it ever does, there will be a separate config option to enable\n\t\t\t\t\t// it that we would need to check here) since the hostname is known at handshake;\n\t\t\t\t\t// and it is unexpected to switch to internal issuer when the user wants to get\n\t\t\t\t\t// regular certificates on-demand for a class of certs like *.*.tld.\n\t\t\t\t\tif subjectQualifiesForPublicCert(ap, s) {\n\t\t\t\t\t\texternal = append(external, s)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tinternal = append(internal, s)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif len(external) > 0 && len(internal) > 0 {\n\t\t\t\t\tap.SubjectsRaw = external\n\t\t\t\t\tapCopy := *ap\n\t\t\t\t\tap2 = &apCopy\n\t\t\t\t\tap2.SubjectsRaw = internal\n\t\t\t\t\tap2.IssuersRaw = []json.RawMessage{caddyconfig.JSONModuleObject(caddytls.InternalIssuer{}, \"module\", \"internal\", &warnings)}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif tlsApp.Automation == nil {\n\t\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t\t}\n\t\t\ttlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap)\n\t\t\tif ap2 != nil {\n\t\t\t\ttlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap2)\n\t\t\t}\n\n\t\t\t// certificate loaders\n\t\t\tif clVals, ok := sblock.pile[\"tls.cert_loader\"]; ok {\n\t\t\t\tfor _, clVal := range clVals {\n\t\t\t\t\tcertLoaders = 
append(certLoaders, clVal.Value.(caddytls.CertificateLoader))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// group certificate loaders by module name, then add to config\n\tif len(certLoaders) > 0 {\n\t\tloadersByName := make(map[string]caddytls.CertificateLoader)\n\t\tfor _, cl := range certLoaders {\n\t\t\tname := caddy.GetModuleName(cl)\n\t\t\t// ugh... technically, we may have multiple FileLoader and FolderLoader\n\t\t\t// modules (because the tls directive returns one per occurrence), but\n\t\t\t// the config structure expects only one instance of each kind of loader\n\t\t\t// module, so we have to combine them... instead of enumerating each\n\t\t\t// possible cert loader module in a type switch, we can use reflection,\n\t\t\t// which works on any cert loaders that are slice types\n\t\t\tif reflect.TypeOf(cl).Kind() == reflect.Slice {\n\t\t\t\tcombined := reflect.ValueOf(loadersByName[name])\n\t\t\t\tif !combined.IsValid() {\n\t\t\t\t\tcombined = reflect.New(reflect.TypeOf(cl)).Elem()\n\t\t\t\t}\n\t\t\t\tclVal := reflect.ValueOf(cl)\n\t\t\t\tfor i := range clVal.Len() {\n\t\t\t\t\tcombined = reflect.Append(combined, clVal.Index(i))\n\t\t\t\t}\n\t\t\t\tloadersByName[name] = combined.Interface().(caddytls.CertificateLoader)\n\t\t\t}\n\t\t}\n\t\tfor certLoaderName, loaders := range loadersByName {\n\t\t\ttlsApp.CertificatesRaw[certLoaderName] = caddyconfig.JSON(loaders, &warnings)\n\t\t}\n\t}\n\n\t// set any of the on-demand options, for if/when on-demand TLS is enabled\n\tif onDemand, ok := options[\"on_demand_tls\"].(*caddytls.OnDemandConfig); ok {\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.OnDemand = onDemand\n\t}\n\n\t// set up \"global\" (to the TLS app) DNS provider config\n\tif globalDNS, ok := options[\"dns\"]; ok && globalDNS != nil {\n\t\ttlsApp.DNSRaw = caddyconfig.JSONModuleObject(globalDNS, \"name\", globalDNS.(caddy.Module).CaddyModule().ID.Name(), nil)\n\t}\n\n\t// set up 
\"global\" (to the TLS app) DNS resolvers config\n\tif globalResolvers, ok := options[\"tls_resolvers\"]; ok && globalResolvers != nil {\n\t\ttlsApp.Resolvers = globalResolvers.([]string)\n\t}\n\n\t// set up ECH from Caddyfile options\n\tif ech, ok := options[\"ech\"].(*caddytls.ECH); ok {\n\t\ttlsApp.EncryptedClientHello = ech\n\n\t\t// outer server names will need certificates, so make sure they're included\n\t\t// in an automation policy for them that applies any global options\n\t\tap, err := newBaseAutomationPolicy(options, warnings, true)\n\t\tif err != nil {\n\t\t\treturn nil, warnings, err\n\t\t}\n\t\tfor _, cfg := range ech.Configs {\n\t\t\tif cfg.PublicName != \"\" {\n\t\t\t\tap.SubjectsRaw = append(ap.SubjectsRaw, cfg.PublicName)\n\t\t\t}\n\t\t}\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap)\n\t}\n\n\t// if the storage check is set to \"off\", disable it\n\tif sc, ok := options[\"storage_check\"].(string); ok && sc == \"off\" {\n\t\ttlsApp.DisableStorageCheck = true\n\t}\n\n\t// if the storage clean interval is a boolean, it was set to \"off\" to disable cleaning\n\tif sci, ok := options[\"storage_clean_interval\"].(bool); ok && !sci {\n\t\ttlsApp.DisableStorageClean = true\n\t}\n\n\t// set the storage clean interval if configured\n\tif storageCleanInterval, ok := options[\"storage_clean_interval\"].(caddy.Duration); ok {\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.StorageCleanInterval = storageCleanInterval\n\t}\n\n\t// set the expired certificates renew interval if configured\n\tif renewCheckInterval, ok := options[\"renew_interval\"].(caddy.Duration); ok {\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.RenewCheckInterval = 
renewCheckInterval\n\t}\n\n\t// set the OCSP check interval if configured\n\tif ocspCheckInterval, ok := options[\"ocsp_interval\"].(caddy.Duration); ok {\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.OCSPCheckInterval = ocspCheckInterval\n\t}\n\n\t// set whether OCSP stapling should be disabled for manually-managed certificates\n\tif ocspConfig, ok := options[\"ocsp_stapling\"].(certmagic.OCSPConfig); ok {\n\t\ttlsApp.DisableOCSPStapling = ocspConfig.DisableStapling\n\t}\n\n\t// if any hostnames appear on the same server block as a key with\n\t// no host, they will not be used with route matchers because the\n\t// hostless key matches all hosts, therefore, it wouldn't be\n\t// considered for auto-HTTPS, so we need to make sure those hosts\n\t// are manually considered for managed certificates; we also need\n\t// to make sure that any of these names which are internal-only\n\t// get internal certificates by default rather than ACME\n\tvar al caddytls.AutomateLoader\n\tinternalAP := &caddytls.AutomationPolicy{\n\t\tIssuersRaw: []json.RawMessage{json.RawMessage(`{\"module\":\"internal\"}`)},\n\t}\n\tif !slices.Contains(autoHTTPS, \"off\") && !slices.Contains(autoHTTPS, \"disable_certs\") {\n\t\tfor h := range httpsHostsSharedWithHostlessKey {\n\t\t\tal = append(al, h)\n\t\t\tif !certmagic.SubjectQualifiesForPublicCert(h) {\n\t\t\t\tinternalAP.SubjectsRaw = append(internalAP.SubjectsRaw, h)\n\t\t\t}\n\t\t}\n\t}\n\tfor name := range forcedAutomatedNames {\n\t\tif slices.Contains(al, name) {\n\t\t\tcontinue\n\t\t}\n\t\tal = append(al, name)\n\t}\n\tslices.Sort(al) // to stabilize the adapt output\n\tif len(al) > 0 {\n\t\ttlsApp.CertificatesRaw[\"automate\"] = caddyconfig.JSON(al, &warnings)\n\t}\n\tif len(internalAP.SubjectsRaw) > 0 {\n\t\tif tlsApp.Automation == nil {\n\t\t\ttlsApp.Automation = new(caddytls.AutomationConfig)\n\t\t}\n\t\ttlsApp.Automation.Policies = 
append(tlsApp.Automation.Policies, internalAP)\n\t}\n\n\t// if there are any global options set for issuers (ACME ones in particular), make sure they\n\t// take effect in every automation policy that does not have any issuers\n\tif tlsApp.Automation != nil {\n\t\tglobalEmail := options[\"email\"]\n\t\tglobalACMECA := options[\"acme_ca\"]\n\t\tglobalACMECARoot := options[\"acme_ca_root\"]\n\t\t_, globalACMEDNS := options[\"acme_dns\"] // can be set to nil (to use globally-defined \"dns\" value instead), but it is still set\n\t\tglobalACMEEAB := options[\"acme_eab\"]\n\t\tglobalPreferredChains := options[\"preferred_chains\"]\n\t\thasGlobalACMEDefaults := globalEmail != nil || globalACMECA != nil || globalACMECARoot != nil || globalACMEDNS || globalACMEEAB != nil || globalPreferredChains != nil\n\t\tif hasGlobalACMEDefaults {\n\t\t\tfor i := range tlsApp.Automation.Policies {\n\t\t\t\tap := tlsApp.Automation.Policies[i]\n\t\t\t\tif len(ap.Issuers) == 0 && automationPolicyHasAllPublicNames(ap) {\n\t\t\t\t\t// for public names, create default issuers which will later be filled in with configured global defaults\n\t\t\t\t\t// (internal names will implicitly use the internal issuer at auto-https time)\n\t\t\t\t\temailStr, _ := globalEmail.(string)\n\t\t\t\t\tap.Issuers = caddytls.DefaultIssuers(emailStr)\n\n\t\t\t\t\t// if a specific endpoint is configured, can't use multiple default issuers\n\t\t\t\t\tif globalACMECA != nil {\n\t\t\t\t\t\tap.Issuers = []certmagic.Issuer{new(caddytls.ACMEIssuer)}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// finalize and verify policies; do cleanup\n\tif tlsApp.Automation != nil {\n\t\tfor i, ap := range tlsApp.Automation.Policies {\n\t\t\t// ensure all issuers have global defaults filled in\n\t\t\tfor j, issuer := range ap.Issuers {\n\t\t\t\terr := fillInGlobalACMEDefaults(issuer, options)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, warnings, fmt.Errorf(\"filling in global issuer defaults for AP %d, issuer %d: %v\", i, j, 
err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// encode all issuer values we created, so they will be rendered in the output\n\t\t\tif len(ap.Issuers) > 0 && ap.IssuersRaw == nil {\n\t\t\t\tfor _, iss := range ap.Issuers {\n\t\t\t\t\tissuerName := iss.(caddy.Module).CaddyModule().ID.Name()\n\t\t\t\t\tap.IssuersRaw = append(ap.IssuersRaw, caddyconfig.JSONModuleObject(iss, \"module\", issuerName, &warnings))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// consolidate automation policies that are the exact same\n\t\ttlsApp.Automation.Policies = consolidateAutomationPolicies(tlsApp.Automation.Policies)\n\n\t\t// ensure automation policies don't overlap subjects (this should be\n\t\t// an error at provision-time as well, but catch it in the adapt phase\n\t\t// for convenience)\n\t\tautomationHostSet := make(map[string]struct{})\n\t\tfor _, ap := range tlsApp.Automation.Policies {\n\t\t\tfor _, s := range ap.SubjectsRaw {\n\t\t\t\tif _, ok := automationHostSet[s]; ok {\n\t\t\t\t\treturn nil, warnings, fmt.Errorf(\"hostname appears in more than one automation policy, making certificate management ambiguous: %s\", s)\n\t\t\t\t}\n\t\t\t\tautomationHostSet[s] = struct{}{}\n\t\t\t}\n\t\t}\n\n\t\t// if nothing remains, remove any excess values to clean up the resulting config\n\t\tif len(tlsApp.Automation.Policies) == 0 {\n\t\t\ttlsApp.Automation.Policies = nil\n\t\t}\n\t\tif reflect.DeepEqual(tlsApp.Automation, new(caddytls.AutomationConfig)) {\n\t\t\ttlsApp.Automation = nil\n\t\t}\n\t}\n\n\treturn tlsApp, warnings, nil\n}\n\ntype acmeCapable interface{ GetACMEIssuer() *caddytls.ACMEIssuer }\n\nfunc fillInGlobalACMEDefaults(issuer certmagic.Issuer, options map[string]any) error {\n\tacmeWrapper, ok := issuer.(acmeCapable)\n\tif !ok {\n\t\treturn nil\n\t}\n\tacmeIssuer := acmeWrapper.GetACMEIssuer()\n\tif acmeIssuer == nil {\n\t\treturn nil\n\t}\n\n\tglobalEmail := options[\"email\"]\n\tglobalACMECA := options[\"acme_ca\"]\n\tglobalACMECARoot := options[\"acme_ca_root\"]\n\tglobalACMEDNS, 
globalACMEDNSok := options[\"acme_dns\"] // can be set to nil (to use globally-defined \"dns\" value instead), but it is still set\n\tglobalACMEEAB := options[\"acme_eab\"]\n\tglobalPreferredChains := options[\"preferred_chains\"]\n\tglobalCertLifetime := options[\"cert_lifetime\"]\n\tglobalHTTPPort, globalHTTPSPort := options[\"http_port\"], options[\"https_port\"]\n\tglobalDefaultBind := options[\"default_bind\"]\n\n\tif globalEmail != nil && acmeIssuer.Email == \"\" {\n\t\tacmeIssuer.Email = globalEmail.(string)\n\t}\n\tif globalACMECA != nil && acmeIssuer.CA == \"\" {\n\t\tacmeIssuer.CA = globalACMECA.(string)\n\t}\n\tif globalACMECARoot != nil && !slices.Contains(acmeIssuer.TrustedRootsPEMFiles, globalACMECARoot.(string)) {\n\t\tacmeIssuer.TrustedRootsPEMFiles = append(acmeIssuer.TrustedRootsPEMFiles, globalACMECARoot.(string))\n\t}\n\tif globalACMEDNSok && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.DNS == nil || acmeIssuer.Challenges.DNS.ProviderRaw == nil) {\n\t\tglobalDNS := options[\"dns\"]\n\t\tif globalDNS == nil && globalACMEDNS == nil {\n\t\t\treturn fmt.Errorf(\"acme_dns specified without DNS provider config, but no provider specified with 'dns' global option\")\n\t\t}\n\t\tif acmeIssuer.Challenges == nil {\n\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t}\n\t\tif acmeIssuer.Challenges.DNS == nil {\n\t\t\tacmeIssuer.Challenges.DNS = new(caddytls.DNSChallengeConfig)\n\t\t}\n\t\tif globalACMEDNS != nil && acmeIssuer.Challenges.DNS.ProviderRaw == nil {\n\t\t\t// Set a global DNS provider if `acme_dns` is set\n\t\t\tacmeIssuer.Challenges.DNS.ProviderRaw = caddyconfig.JSONModuleObject(globalACMEDNS, \"name\", globalACMEDNS.(caddy.Module).CaddyModule().ID.Name(), nil)\n\t\t}\n\t}\n\tif globalACMEEAB != nil && acmeIssuer.ExternalAccount == nil {\n\t\tacmeIssuer.ExternalAccount = globalACMEEAB.(*acme.EAB)\n\t}\n\tif globalPreferredChains != nil && acmeIssuer.PreferredChains == nil {\n\t\tacmeIssuer.PreferredChains = 
globalPreferredChains.(*caddytls.ChainPreference)\n\t}\n\t// only configure alt HTTP and TLS-ALPN ports if the DNS challenge is not enabled (wouldn't hurt, but isn't necessary since the DNS challenge is exclusive of others)\n\tif globalHTTPPort != nil && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.DNS == nil) && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.HTTP == nil || acmeIssuer.Challenges.HTTP.AlternatePort == 0) {\n\t\tif acmeIssuer.Challenges == nil {\n\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t}\n\t\tif acmeIssuer.Challenges.HTTP == nil {\n\t\t\tacmeIssuer.Challenges.HTTP = new(caddytls.HTTPChallengeConfig)\n\t\t}\n\t\tacmeIssuer.Challenges.HTTP.AlternatePort = globalHTTPPort.(int)\n\t}\n\tif globalHTTPSPort != nil && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.DNS == nil) && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.TLSALPN == nil || acmeIssuer.Challenges.TLSALPN.AlternatePort == 0) {\n\t\tif acmeIssuer.Challenges == nil {\n\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t}\n\t\tif acmeIssuer.Challenges.TLSALPN == nil {\n\t\t\tacmeIssuer.Challenges.TLSALPN = new(caddytls.TLSALPNChallengeConfig)\n\t\t}\n\t\tacmeIssuer.Challenges.TLSALPN.AlternatePort = globalHTTPSPort.(int)\n\t}\n\t// If BindHost is still unset, fall back to the first default_bind address if set\n\t// This avoids binding the automation policy to the wildcard socket, which is unexpected behavior when a more selective socket is specified via default_bind\n\t// In BSD it is valid to bind to the wildcard socket even though a more selective socket is already open (still unexpected behavior by the caller though)\n\t// In Linux the same call will error with EADDRINUSE whenever the listener for the automation policy is opened\n\tif acmeIssuer.Challenges == nil || (acmeIssuer.Challenges.DNS == nil && acmeIssuer.Challenges.BindHost == \"\") {\n\t\tif defBinds, ok := globalDefaultBind.([]ConfigValue); ok && 
len(defBinds) > 0 {\n\t\t\tif abp, ok := defBinds[0].Value.(addressesWithProtocols); ok && len(abp.addresses) > 0 {\n\t\t\t\tif acmeIssuer.Challenges == nil {\n\t\t\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t\t\t}\n\t\t\t\tacmeIssuer.Challenges.BindHost = abp.addresses[0]\n\t\t\t}\n\t\t}\n\t}\n\tif globalCertLifetime != nil && acmeIssuer.CertificateLifetime == 0 {\n\t\tacmeIssuer.CertificateLifetime = globalCertLifetime.(caddy.Duration)\n\t}\n\t// apply global resolvers if DNS challenge is configured and resolvers are not already set\n\tglobalResolvers := options[\"tls_resolvers\"]\n\tif globalResolvers != nil && acmeIssuer.Challenges != nil && acmeIssuer.Challenges.DNS != nil {\n\t\t// Check if DNS challenge is actually configured\n\t\thasDNSChallenge := globalACMEDNSok || acmeIssuer.Challenges.DNS.ProviderRaw != nil\n\t\tif hasDNSChallenge && len(acmeIssuer.Challenges.DNS.Resolvers) == 0 {\n\t\t\tacmeIssuer.Challenges.DNS.Resolvers = globalResolvers.([]string)\n\t\t}\n\t}\n\treturn nil\n}\n\n// newBaseAutomationPolicy returns a new TLS automation policy that gets\n// its values from the global options map. It should be used as the base\n// for any other automation policies. A nil policy (and no error) will be\n// returned if there are no default/global options. 
However, if always is\n// true, a non-nil value will always be returned (unless there is an error).\nfunc newBaseAutomationPolicy(\n\toptions map[string]any,\n\t_ []caddyconfig.Warning,\n\talways bool,\n) (*caddytls.AutomationPolicy, error) {\n\tissuers, hasIssuers := options[\"cert_issuer\"]\n\t_, hasLocalCerts := options[\"local_certs\"]\n\tkeyType, hasKeyType := options[\"key_type\"]\n\tocspStapling, hasOCSPStapling := options[\"ocsp_stapling\"]\n\trenewalWindowRatio, hasRenewalWindowRatio := options[\"renewal_window_ratio\"]\n\thasGlobalAutomationOpts := hasIssuers || hasLocalCerts || hasKeyType || hasOCSPStapling || hasRenewalWindowRatio\n\n\tglobalACMECA := options[\"acme_ca\"]\n\tglobalACMECARoot := options[\"acme_ca_root\"]\n\t_, globalACMEDNS := options[\"acme_dns\"] // can be set to nil (to use globally-defined \"dns\" value instead), but it is still set\n\tglobalACMEEAB := options[\"acme_eab\"]\n\tglobalPreferredChains := options[\"preferred_chains\"]\n\thasGlobalACMEDefaults := globalACMECA != nil || globalACMECARoot != nil || globalACMEDNS || globalACMEEAB != nil || globalPreferredChains != nil\n\n\t// if there are no global options related to automation policies\n\t// set, then we can just return right away\n\tif !hasGlobalAutomationOpts && !hasGlobalACMEDefaults {\n\t\tif always {\n\t\t\treturn new(caddytls.AutomationPolicy), nil\n\t\t}\n\t\treturn nil, nil\n\t}\n\n\tap := new(caddytls.AutomationPolicy)\n\tif hasKeyType {\n\t\tap.KeyType = keyType.(string)\n\t}\n\n\tif hasIssuers && hasLocalCerts {\n\t\treturn nil, fmt.Errorf(\"global options are ambiguous: local_certs is confusing when combined with cert_issuer, because local_certs is also a specific kind of issuer\")\n\t}\n\n\tif hasIssuers {\n\t\tap.Issuers = issuers.([]certmagic.Issuer)\n\t} else if hasLocalCerts {\n\t\tap.Issuers = []certmagic.Issuer{new(caddytls.InternalIssuer)}\n\t}\n\n\tif hasGlobalACMEDefaults {\n\t\tfor i := range ap.Issuers {\n\t\t\tif err := 
fillInGlobalACMEDefaults(ap.Issuers[i], options); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"filling in global issuer defaults for issuer %d: %v\", i, err)\n\t\t\t}\n\t\t}\n\t}\n\n\tif hasOCSPStapling {\n\t\tocspConfig := ocspStapling.(certmagic.OCSPConfig)\n\t\tap.DisableOCSPStapling = ocspConfig.DisableStapling\n\t\tap.OCSPOverrides = ocspConfig.ResponderOverrides\n\t}\n\n\tif hasRenewalWindowRatio {\n\t\tap.RenewalWindowRatio = renewalWindowRatio.(float64)\n\t}\n\n\treturn ap, nil\n}\n\n// consolidateAutomationPolicies combines automation policies that are the same,\n// for a cleaner overall output.\nfunc consolidateAutomationPolicies(aps []*caddytls.AutomationPolicy) []*caddytls.AutomationPolicy {\n\t// sort from most specific to least specific; we depend on this ordering\n\tsort.SliceStable(aps, func(i, j int) bool {\n\t\tif automationPolicyIsSubset(aps[i], aps[j]) {\n\t\t\treturn true\n\t\t}\n\t\tif automationPolicyIsSubset(aps[j], aps[i]) {\n\t\t\treturn false\n\t\t}\n\t\treturn len(aps[i].SubjectsRaw) > len(aps[j].SubjectsRaw)\n\t})\n\n\temptyAPCount := 0\n\torigLenAPs := len(aps)\n\t// compute the number of empty policies (disregarding subjects) - see #4128\n\temptyAP := new(caddytls.AutomationPolicy)\n\tfor i := 0; i < len(aps); i++ {\n\t\temptyAP.SubjectsRaw = aps[i].SubjectsRaw\n\t\tif reflect.DeepEqual(aps[i], emptyAP) {\n\t\t\temptyAPCount++\n\t\t\tif !automationPolicyHasAllPublicNames(aps[i]) {\n\t\t\t\t// if this automation policy has internal names, we might as well remove it\n\t\t\t\t// so auto-https can implicitly use the internal issuer\n\t\t\t\taps = slices.Delete(aps, i, i+1)\n\t\t\t\ti--\n\t\t\t}\n\t\t}\n\t}\n\t// If all policies are empty, we can return nil, as there is no need to set any policy\n\tif emptyAPCount == origLenAPs {\n\t\treturn nil\n\t}\n\n\t// remove or combine duplicate policies\nouter:\n\tfor i := 0; i < len(aps); i++ {\n\t\t// compare only with next policies; we sorted by specificity so we must not delete earlier 
policies\n\t\tfor j := i + 1; j < len(aps); j++ {\n\t\t\t// if they're exactly equal in every way, just keep one of them\n\t\t\tif reflect.DeepEqual(aps[i], aps[j]) {\n\t\t\t\taps = slices.Delete(aps, j, j+1)\n\t\t\t\t// must re-evaluate current i against next j; can't skip it!\n\t\t\t\t// even if i decrements to -1, will be incremented to 0 immediately\n\t\t\t\ti--\n\t\t\t\tcontinue outer\n\t\t\t}\n\n\t\t\t// if the policy is the same, we can keep just one, but we have\n\t\t\t// to be careful which one we keep; if only one has any hostnames\n\t\t\t// defined, then we need to keep the one without any hostnames,\n\t\t\t// otherwise the one without any subjects (a catch-all) would be\n\t\t\t// eaten up by the one with subjects; and if both have subjects, we\n\t\t\t// need to combine their lists\n\t\t\tif reflect.DeepEqual(aps[i].IssuersRaw, aps[j].IssuersRaw) &&\n\t\t\t\treflect.DeepEqual(aps[i].ManagersRaw, aps[j].ManagersRaw) &&\n\t\t\t\tbytes.Equal(aps[i].StorageRaw, aps[j].StorageRaw) &&\n\t\t\t\taps[i].MustStaple == aps[j].MustStaple &&\n\t\t\t\taps[i].KeyType == aps[j].KeyType &&\n\t\t\t\taps[i].OnDemand == aps[j].OnDemand &&\n\t\t\t\taps[i].ReusePrivateKeys == aps[j].ReusePrivateKeys &&\n\t\t\t\taps[i].RenewalWindowRatio == aps[j].RenewalWindowRatio {\n\t\t\t\tif len(aps[i].SubjectsRaw) > 0 && len(aps[j].SubjectsRaw) == 0 {\n\t\t\t\t\t// later policy (at j) has no subjects (\"catch-all\"), so we can\n\t\t\t\t\t// remove the identical-but-more-specific policy that comes first\n\t\t\t\t\t// AS LONG AS it is not shadowed by another policy before it; e.g.\n\t\t\t\t\t// if policy i is for example.com, policy i+1 is '*.com', and policy\n\t\t\t\t\t// j is catch-all, we cannot remove policy i because that would\n\t\t\t\t\t// cause example.com to be served by the less specific policy for\n\t\t\t\t\t// '*.com', which might be different (yes we've seen this happen)\n\t\t\t\t\tif automationPolicyShadows(i, aps) >= j {\n\t\t\t\t\t\taps = slices.Delete(aps, i, 
i+1)\n\t\t\t\t\t\ti--\n\t\t\t\t\t\tcontinue outer\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t// avoid repeated subjects\n\t\t\t\t\tfor _, subj := range aps[j].SubjectsRaw {\n\t\t\t\t\t\tif !slices.Contains(aps[i].SubjectsRaw, subj) {\n\t\t\t\t\t\t\taps[i].SubjectsRaw = append(aps[i].SubjectsRaw, subj)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\taps = slices.Delete(aps, j, j+1)\n\t\t\t\t\tj--\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn aps\n}\n\n// automationPolicyIsSubset returns true if a's subjects are a subset\n// of b's subjects.\nfunc automationPolicyIsSubset(a, b *caddytls.AutomationPolicy) bool {\n\tif len(b.SubjectsRaw) == 0 {\n\t\treturn true\n\t}\n\tif len(a.SubjectsRaw) == 0 {\n\t\treturn false\n\t}\n\tfor _, aSubj := range a.SubjectsRaw {\n\t\tinSuperset := slices.ContainsFunc(b.SubjectsRaw, func(bSubj string) bool {\n\t\t\treturn certmagic.MatchWildcard(aSubj, bSubj)\n\t\t})\n\t\tif !inSuperset {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// automationPolicyShadows returns the index of a policy that aps[i] shadows;\n// in other words, for all policies after position i, if that policy covers\n// the same subjects but is less specific, that policy's position is returned,\n// or -1 if no shadowing is found. For example, if policy i is for\n// \"foo.example.com\" and policy i+2 is for \"*.example.com\", then i+2 will be\n// returned, since that policy is shadowed by i, which is in front.\nfunc automationPolicyShadows(i int, aps []*caddytls.AutomationPolicy) int {\n\tfor j := i + 1; j < len(aps); j++ {\n\t\tif automationPolicyIsSubset(aps[i], aps[j]) {\n\t\t\treturn j\n\t\t}\n\t}\n\treturn -1\n}\n\n// subjectQualifiesForPublicCert is like certmagic.SubjectQualifiesForPublicCert() except\n// that this allows domains with multiple wildcard levels like '*.*.example.com' to qualify\n// if the automation policy has OnDemand enabled (i.e. this function is more lenient).\n//\n// IP subjects are considered as non-qualifying for public certs. 
Technically, there are\n// now public ACME CAs as well as non-ACME CAs that issue IP certificates. But this function\n// is used solely for implicit automation (defaults), where it gets really complicated to\n// keep track of which issuers support IP certificates in which circumstances. Currently,\n// issuers that support IP certificates are very few, and all require some sort of config\n// from the user anyway (such as an account credential). Since we cannot implicitly and\n// automatically get public IP certs without configuration from the user, we treat IPs as\n// not qualifying for public certificates. Users should expressly configure an issuer\n// that supports IP certs for that purpose.\nfunc subjectQualifiesForPublicCert(ap *caddytls.AutomationPolicy, subj string) bool {\n\treturn !certmagic.SubjectIsIP(subj) &&\n\t\t!certmagic.SubjectIsInternal(subj) &&\n\t\t(strings.Count(subj, \"*.\") < 2 || ap.OnDemand)\n}\n\n// automationPolicyHasAllPublicNames returns true if all the names on the policy\n// qualify for public certs and are NOT tailscale domains.\nfunc automationPolicyHasAllPublicNames(ap *caddytls.AutomationPolicy) bool {\n\treturn !slices.ContainsFunc(ap.SubjectsRaw, func(i string) bool {\n\t\treturn !subjectQualifiesForPublicCert(ap, i) || isTailscaleDomain(i)\n\t})\n}\n\nfunc isTailscaleDomain(name string) bool {\n\treturn strings.HasSuffix(strings.ToLower(name), \".ts.net\")\n}\n
  },
  {
    "path": "caddyconfig/httpcaddyfile/tlsapp_test.go",
    "content": "package httpcaddyfile\n\nimport (\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc TestAutomationPolicyIsSubset(t *testing.T) {\n\tfor i, test := range []struct {\n\t\ta, b   []string\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\ta:      []string{\"example.com\"},\n\t\t\tb:      []string{},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\ta:      []string{},\n\t\t\tb:      []string{\"example.com\"},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\ta:      []string{\"foo.example.com\"},\n\t\t\tb:      []string{\"*.example.com\"},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\ta:      []string{\"foo.example.com\"},\n\t\t\tb:      []string{\"foo.example.com\"},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\ta:      []string{\"foo.example.com\"},\n\t\t\tb:      []string{\"example.com\"},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\ta:      []string{\"example.com\", \"foo.example.com\"},\n\t\t\tb:      []string{\"*.com\", \"*.*.com\"},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\ta:      []string{\"example.com\", \"foo.example.com\"},\n\t\t\tb:      []string{\"*.com\"},\n\t\t\texpect: false,\n\t\t},\n\t} {\n\t\tapA := &caddytls.AutomationPolicy{SubjectsRaw: test.a}\n\t\tapB := &caddytls.AutomationPolicy{SubjectsRaw: test.b}\n\t\tif actual := automationPolicyIsSubset(apA, apB); actual != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %t but got %t (A: %v  B: %v)\", i, test.expect, actual, test.a, test.b)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddyconfig/httploader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyconfig\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(HTTPLoader{})\n}\n\n// HTTPLoader can load Caddy configs over HTTP(S).\n//\n// If the response is not a JSON config, a config adapter must be specified\n// either in the loader config (`adapter`), or in the Content-Type HTTP header\n// returned in the HTTP response from the server. The Content-Type header is\n// read just like the admin API's `/load` endpoint. If you don't have control\n// over the HTTP server (but can still trust its response), you can override\n// the Content-Type header by setting the `adapter` property in this config.\ntype HTTPLoader struct {\n\t// The method for the request. Default: GET\n\tMethod string `json:\"method,omitempty\"`\n\n\t// The URL of the request.\n\tURL string `json:\"url,omitempty\"`\n\n\t// HTTP headers to add to the request.\n\tHeaders http.Header `json:\"header,omitempty\"`\n\n\t// Maximum time allowed for a complete connection and request.\n\tTimeout caddy.Duration `json:\"timeout,omitempty\"`\n\n\t// The name of the config adapter to use, if any. 
Only needed\n\t// if the HTTP response is not a JSON config and if the server's\n\t// Content-Type header is missing or incorrect.\n\tAdapter string `json:\"adapter,omitempty\"`\n\n\tTLS *struct {\n\t\t// Present this instance's managed remote identity credentials to the server.\n\t\tUseServerIdentity bool `json:\"use_server_identity,omitempty\"`\n\n\t\t// PEM-encoded client certificate filename to present to the server.\n\t\tClientCertificateFile string `json:\"client_certificate_file,omitempty\"`\n\n\t\t// PEM-encoded key to use with the client certificate.\n\t\tClientCertificateKeyFile string `json:\"client_certificate_key_file,omitempty\"`\n\n\t\t// List of PEM-encoded CA certificate files to add to the same trust\n\t\t// store as RootCAPool (or root_ca_pool in the JSON).\n\t\tRootCAPEMFiles []string `json:\"root_ca_pem_files,omitempty\"`\n\t} `json:\"tls,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (HTTPLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.config_loaders.http\",\n\t\tNew: func() caddy.Module { return new(HTTPLoader) },\n\t}\n}\n\n// LoadConfig loads a Caddy config.\nfunc (hl HTTPLoader) LoadConfig(ctx caddy.Context) ([]byte, error) {\n\trepl := caddy.NewReplacer()\n\n\tclient, err := hl.makeClient(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tmethod := repl.ReplaceAll(hl.Method, \"\")\n\tif method == \"\" {\n\t\tmethod = http.MethodGet\n\t}\n\n\turl := repl.ReplaceAll(hl.URL, \"\")\n\treq, err := http.NewRequest(method, url, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor key, vals := range hl.Headers {\n\t\tfor _, val := range vals {\n\t\t\treq.Header.Add(repl.ReplaceAll(key, \"\"), repl.ReplaceKnown(val, \"\"))\n\t\t}\n\t}\n\n\tresp, err := doHttpCallWithRetries(ctx, client, req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer resp.Body.Close()\n\tif resp.StatusCode >= 400 {\n\t\treturn nil, fmt.Errorf(\"server responded with HTTP %d\", 
resp.StatusCode)\n\t}\n\n\tbody, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// adapt the config based on either manually-configured adapter or server's response header\n\tct := resp.Header.Get(\"Content-Type\")\n\tif hl.Adapter != \"\" {\n\t\tct = \"text/\" + hl.Adapter\n\t}\n\tresult, warnings, err := adaptByContentType(ct, body)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor _, warn := range warnings {\n\t\tctx.Logger().Warn(warn.String())\n\t}\n\n\treturn result, nil\n}\n\nfunc attemptHttpCall(client *http.Client, request *http.Request) (*http.Response, error) {\n\tresp, err := client.Do(request) //nolint:gosec // no SSRF; comes from trusted config\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"problem calling http loader url: %v\", err)\n\t} else if resp.StatusCode < 200 || resp.StatusCode > 499 {\n\t\tresp.Body.Close()\n\t\treturn nil, fmt.Errorf(\"bad response status code from http loader url: %v\", resp.StatusCode)\n\t}\n\treturn resp, nil\n}\n\nfunc doHttpCallWithRetries(ctx caddy.Context, client *http.Client, request *http.Request) (*http.Response, error) {\n\tvar resp *http.Response\n\tvar err error\n\tconst maxAttempts = 10\n\n\tfor i := range maxAttempts {\n\t\tresp, err = attemptHttpCall(client, request)\n\t\tif err != nil && i < maxAttempts-1 {\n\t\t\tselect {\n\t\t\tcase <-time.After(time.Millisecond * 500):\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn resp, ctx.Err()\n\t\t\t}\n\t\t} else {\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn resp, err\n}\n\nfunc (hl HTTPLoader) makeClient(ctx caddy.Context) (*http.Client, error) {\n\tclient := &http.Client{\n\t\tTimeout: time.Duration(hl.Timeout),\n\t}\n\n\tif hl.TLS != nil {\n\t\tvar tlsConfig *tls.Config\n\n\t\t// client authentication\n\t\tif hl.TLS.UseServerIdentity {\n\t\t\tcerts, err := ctx.IdentityCredentials(ctx.Logger())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"getting server identity credentials: %v\", err)\n\t\t\t}\n\t\t\t// See 
https://github.com/securego/gosec/issues/1054#issuecomment-2072235199\n\t\t\t//nolint:gosec\n\t\t\ttlsConfig = &tls.Config{Certificates: certs}\n\t\t} else if hl.TLS.ClientCertificateFile != \"\" && hl.TLS.ClientCertificateKeyFile != \"\" {\n\t\t\tcert, err := tls.LoadX509KeyPair(hl.TLS.ClientCertificateFile, hl.TLS.ClientCertificateKeyFile)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\t//nolint:gosec\n\t\t\ttlsConfig = &tls.Config{Certificates: []tls.Certificate{cert}}\n\t\t}\n\n\t\t// trusted server certs\n\t\tif len(hl.TLS.RootCAPEMFiles) > 0 {\n\t\t\trootPool := x509.NewCertPool()\n\t\t\tfor _, pemFile := range hl.TLS.RootCAPEMFiles {\n\t\t\t\tpemData, err := os.ReadFile(pemFile)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"failed reading ca cert: %v\", err)\n\t\t\t\t}\n\t\t\t\trootPool.AppendCertsFromPEM(pemData)\n\t\t\t}\n\t\t\tif tlsConfig == nil {\n\t\t\t\ttlsConfig = new(tls.Config)\n\t\t\t}\n\t\t\ttlsConfig.RootCAs = rootPool\n\t\t}\n\n\t\tclient.Transport = &http.Transport{TLSClientConfig: tlsConfig}\n\t}\n\n\treturn client, nil\n}\n\nvar _ caddy.ConfigLoader = (*HTTPLoader)(nil)\n"
  },
  {
    "path": "caddyconfig/load.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyconfig\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"mime\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(adminLoad{})\n}\n\n// adminLoad is a module that provides the /load endpoint\n// for the Caddy admin API. The only reason it's not baked\n// into the caddy package directly is because of the import\n// of the caddyconfig package for its GetAdapter function.\n// If the caddy package depends on the caddyconfig package,\n// then the caddyconfig package will not be able to import\n// the caddy package, and it can more easily cause backward\n// edges in the dependency tree (i.e. 
import cycle).\n// Fortunately, the admin API has first-class support for\n// adding endpoints from modules.\ntype adminLoad struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (adminLoad) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"admin.api.load\",\n\t\tNew: func() caddy.Module { return new(adminLoad) },\n\t}\n}\n\n// Routes returns routes for the /load and /adapt endpoints.\nfunc (al adminLoad) Routes() []caddy.AdminRoute {\n\treturn []caddy.AdminRoute{\n\t\t{\n\t\t\tPattern: \"/load\",\n\t\t\tHandler: caddy.AdminHandlerFunc(al.handleLoad),\n\t\t},\n\t\t{\n\t\t\tPattern: \"/adapt\",\n\t\t\tHandler: caddy.AdminHandlerFunc(al.handleAdapt),\n\t\t},\n\t}\n}\n\n// handleLoad replaces the entire current configuration with\n// a new one provided in the request body. It supports config\n// adapters through the use of the Content-Type header. A\n// config that is identical to the currently-running config\n// will be a no-op unless Cache-Control: must-revalidate is set.\nfunc (adminLoad) handleLoad(w http.ResponseWriter, r *http.Request) error {\n\tif r.Method != http.MethodPost {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method not allowed\"),\n\t\t}\n\t}\n\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\t_, err := io.Copy(buf, r.Body)\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"reading request body: %v\", err),\n\t\t}\n\t}\n\tbody := buf.Bytes()\n\n\t// if the config is formatted other than Caddy's native\n\t// JSON, we need to adapt it before loading it\n\tif ctHeader := r.Header.Get(\"Content-Type\"); ctHeader != \"\" {\n\t\tresult, warnings, err := adaptByContentType(ctHeader, body)\n\t\tif err != nil {\n\t\t\treturn caddy.APIError{\n\t\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\t\tErr:        err,\n\t\t\t}\n\t\t}\n\t\tif len(warnings) > 0 
{\n\t\t\trespBody, err := json.Marshal(warnings)\n\t\t\tif err != nil {\n\t\t\t\tcaddy.Log().Named(\"admin.api.load\").Error(err.Error())\n\t\t\t}\n\t\t\t_, _ = w.Write(respBody) //nolint:gosec // false positive: no XSS here\n\t\t}\n\t\tbody = result\n\t}\n\n\tforceReload := r.Header.Get(\"Cache-Control\") == \"must-revalidate\"\n\n\terr = caddy.Load(body, forceReload)\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"loading config: %v\", err),\n\t\t}\n\t}\n\n\t// If this request changed the config, clear the last\n\t// config info we have stored, if it is different from\n\t// the original source.\n\tcaddy.ClearLastConfigIfDifferent(\n\t\tr.Header.Get(\"Caddy-Config-Source-File\"),\n\t\tr.Header.Get(\"Caddy-Config-Source-Adapter\"))\n\n\tcaddy.Log().Named(\"admin.api\").Info(\"load complete\")\n\n\treturn nil\n}\n\n// handleAdapt adapts the given Caddy config to JSON and responds with the result.\nfunc (adminLoad) handleAdapt(w http.ResponseWriter, r *http.Request) error {\n\tif r.Method != http.MethodPost {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method not allowed\"),\n\t\t}\n\t}\n\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\t_, err := io.Copy(buf, r.Body)\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"reading request body: %v\", err),\n\t\t}\n\t}\n\n\tresult, warnings, err := adaptByContentType(r.Header.Get(\"Content-Type\"), buf.Bytes())\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        err,\n\t\t}\n\t}\n\n\tout := struct {\n\t\tWarnings []Warning       `json:\"warnings,omitempty\"`\n\t\tResult   json.RawMessage `json:\"result\"`\n\t}{\n\t\tWarnings: warnings,\n\t\tResult:   result,\n\t}\n\n\tw.Header().Set(\"Content-Type\", 
\"application/json\")\n\treturn json.NewEncoder(w).Encode(out)\n}\n\n// adaptByContentType adapts body to Caddy JSON using the adapter specified by contentType.\n// If contentType is empty or ends with \"/json\", the input will be returned, as a no-op.\nfunc adaptByContentType(contentType string, body []byte) ([]byte, []Warning, error) {\n\t// assume JSON as the default\n\tif contentType == \"\" {\n\t\treturn body, nil, nil\n\t}\n\n\tct, _, err := mime.ParseMediaType(contentType)\n\tif err != nil {\n\t\treturn nil, nil, caddy.APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"invalid Content-Type: %v\", err),\n\t\t}\n\t}\n\n\t// if already JSON, no need to adapt\n\tif strings.HasSuffix(ct, \"/json\") {\n\t\treturn body, nil, nil\n\t}\n\n\t// adapter name should be suffix of MIME type\n\t_, adapterName, slashFound := strings.Cut(ct, \"/\")\n\tif !slashFound {\n\t\treturn nil, nil, fmt.Errorf(\"malformed Content-Type\")\n\t}\n\n\tcfgAdapter := GetAdapter(adapterName)\n\tif cfgAdapter == nil {\n\t\treturn nil, nil, fmt.Errorf(\"unrecognized config adapter '%s'\", adapterName)\n\t}\n\n\tresult, warnings, err := cfgAdapter.Adapt(body, nil)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"adapting config using %s adapter: %v\", adapterName, err)\n\t}\n\n\treturn result, warnings, nil\n}\n\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n"
  },
  {
    "path": "caddytest/a.caddy.localhost.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIID5zCCAs8CFG4+w/pqR5AZQ+aVB330uRRRKMF0MA0GCSqGSIb3DQEBCwUAMIGv\nMQswCQYDVQQGEwJVUzELMAkGA1UECAwCTlkxGzAZBgNVBAoMEkxvY2FsIERldmVs\nb3BlbWVudDEbMBkGA1UEBwwSTG9jYWwgRGV2ZWxvcGVtZW50MRowGAYDVQQDDBFh\nLmNhZGR5LmxvY2FsaG9zdDEbMBkGA1UECwwSTG9jYWwgRGV2ZWxvcGVtZW50MSAw\nHgYJKoZIhvcNAQkBFhFhZG1pbkBjYWRkeS5sb2NhbDAeFw0yMDAzMTMxODUwMTda\nFw0zMDAzMTExODUwMTdaMIGvMQswCQYDVQQGEwJVUzELMAkGA1UECAwCTlkxGzAZ\nBgNVBAoMEkxvY2FsIERldmVsb3BlbWVudDEbMBkGA1UEBwwSTG9jYWwgRGV2ZWxv\ncGVtZW50MRowGAYDVQQDDBFhLmNhZGR5LmxvY2FsaG9zdDEbMBkGA1UECwwSTG9j\nYWwgRGV2ZWxvcGVtZW50MSAwHgYJKoZIhvcNAQkBFhFhZG1pbkBjYWRkeS5sb2Nh\nbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMd9pC9wF7j0459FndPs\nDeud/rq41jEZFsVOVtjQgjS1A5ct6NfeMmSlq8i1F7uaTMPZjbOHzY6y6hzLc9+y\n/VWNgyUC543HjXnNTnp9Xug6tBBxOxvRMw5mv2nAyzjBGDePPgN84xKhOXG2Wj3u\nfOZ+VPVISefRNvjKfN87WLJ0B0HI9wplG5ASVdPQsWDY1cndrZgt2sxQ/3fjIno4\nVvrgRWC9Penizgps/a0ZcFZMD/6HJoX/mSZVa1LjopwbMTXvyHCpXkth21E+rBt6\nI9DMHerdioVQcX25CqPmAwePxPZSNGEQo/Qu32kzcmscmYxTtYBhDa+yLuHgGggI\nj7ECAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAP/94KPtkpYtkWADnhtzDmgQ6Q1pH\nSubTUZdCwQtm6/LrvpT+uFNsOj4L3Mv3TVUnIQDmKd5VvR42W2MRBiTN2LQptgEn\nC7g9BB+UA9kjL3DPk1pJMjzxLHohh0uNLi7eh4mAj8eNvjz9Z4qMWPQoVS0y7/ZK\ncCBRKh2GkIqKm34ih6pX7xmMpPEQsFoTVPRHYJfYD1SZ8Iui+EN+7WqLuJWPsPXw\nJM1HuZKn7pZmJU2MZZBsrupHGUvNMbBg2mFJcxt4D1VvU+p+a67PSjpFQ6dJG2re\npZoF+N1vMGAFkxe6UqhcC/bXDX+ILVQHJ+RNhzDO6DcWf8dRrC2LaJk3WA==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "caddytest/a.caddy.localhost.key",
    "content": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEAx32kL3AXuPTjn0Wd0+wN653+urjWMRkWxU5W2NCCNLUDly3o\n194yZKWryLUXu5pMw9mNs4fNjrLqHMtz37L9VY2DJQLnjceNec1Oen1e6Dq0EHE7\nG9EzDma/acDLOMEYN48+A3zjEqE5cbZaPe585n5U9UhJ59E2+Mp83ztYsnQHQcj3\nCmUbkBJV09CxYNjVyd2tmC3azFD/d+MiejhW+uBFYL096eLOCmz9rRlwVkwP/ocm\nhf+ZJlVrUuOinBsxNe/IcKleS2HbUT6sG3oj0Mwd6t2KhVBxfbkKo+YDB4/E9lI0\nYRCj9C7faTNyaxyZjFO1gGENr7Iu4eAaCAiPsQIDAQABAoIBAQDD/YFIBeWYlifn\ne9risQDAIrp3sk7lb9O6Rwv1+Wxi4hBEABvJsYhq74VFK/3EF4UhyWR5JIvkjYyK\ne6w887oGyoA05ZSe65XoO7fFidSrbbkoikZbPv3dQT7/ZCWEfdkQBNAVVyY0UGeC\ne3hPbjYRsb5AOSQ694X9idqC6uhqcOrBDjITFrctUoP4S6l9A6a+mLSUIwiICcuh\nmrNl+j0lzy7DMXRp/Z5Hyo5kuUlrC0dCLa1UHqtrrK7MR55AVEOihSNp1w+OC+vw\nf0VjE4JUtO7RQEQUmD1tDfLXwNfMFeWaobB2W0WMvRg0IqoitiqPxsPHRm56OxfM\nSRo/Q7QBAoGBAP8DapzBMuaIcJ7cE8Yl07ZGndWWf8buIKIItGF8rkEO3BXhrIke\nEmpOi+ELtpbMOG0APhORZyQ58f4ZOVrqZfneNKtDiEZV4mJZaYUESm1pU+2Y6+y5\ng4bpQSVKN0ow0xR+MH7qDYtSlsmBU7qAOz775L7BmMA1Bnu72aN/H1JBAoGBAMhD\nOzqCSakHOjUbEd22rPwqWmcIyVyo04gaSmcVVT2dHbqR4/t0gX5a9D9U2qwyO6xi\n/R+PXyMd32xIeVR2D/7SQ0x6dK68HXICLV8ofHZ5UQcHbxy5og4v/YxSZVTkN374\ncEsUeyB0s/UPOHLktFU5hpIlON72/Rp7b+pNIwFxAoGAczpq+Qu/YTWzlcSh1r4O\n7OT5uqI3eH7vFehTAV3iKxl4zxZa7NY+wfRd9kFhrr/2myIp6pOgBFl+hC+HoBIc\nJAyIxf5M3GNAWOpH6MfojYmzV7/qktu8l8BcJGplk0t+hVsDtMUze4nFAqZCXBpH\nKw2M7bjyuZ78H/rgu6TcVUECgYEAo1M5ldE2U/VCApeuLX1TfWDpU8i1uK0zv3d5\noLKkT1i5KzTak3SEO9HgC1qf8PoS8tfUio26UICHe99rnHehOfivzEq+qNdgyF+A\nM3BoeZMdgzcL5oh640k+Zte4LtDlddcWdhUhCepD7iPYrNNbQ3pkBwL2a9lRuOxc\n7OC2IPECgYBH8f3OrwXjDltIG1dDvuDPNljxLZbFEFbQyVzMePYNftgZknAyGEdh\nNW/LuWeTzstnmz/s6RE3jN5ZrrMa4sW77VA9+yU9QW2dkHqFyukQ4sfuNg6kDDNZ\n+lqZYMCLw0M5P9fIbmnIYwey7tXkHfmzoCpnYHGQDN6hL0Bh0zGwmg==\n-----END RSA PRIVATE KEY-----\n"
  },
  {
    "path": "caddytest/caddy.ca.cer",
    "content": "-----BEGIN CERTIFICATE-----\nMIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkw\nODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU\n7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl0\n3WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45t\nwOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNx\ntdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTU\nApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAd\nBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS\n2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5u\nNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkq\nhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfK\nD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEO\nfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnk\noNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZ\nks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdle\nIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n-----END CERTIFICATE-----"
  },
  {
    "path": "caddytest/caddy.localhost.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIID5zCCAs8CFFmAAFKV79uhzxc5qXbUw3oBNsYXMA0GCSqGSIb3DQEBCwUAMIGv\nMQswCQYDVQQGEwJVUzELMAkGA1UECAwCTlkxGzAZBgNVBAoMEkxvY2FsIERldmVs\nb3BlbWVudDEbMBkGA1UEBwwSTG9jYWwgRGV2ZWxvcGVtZW50MRowGAYDVQQDDBEq\nLmNhZGR5LmxvY2FsaG9zdDEbMBkGA1UECwwSTG9jYWwgRGV2ZWxvcGVtZW50MSAw\nHgYJKoZIhvcNAQkBFhFhZG1pbkBjYWRkeS5sb2NhbDAeFw0yMDAzMDIwODAxMTZa\nFw0zMDAyMjgwODAxMTZaMIGvMQswCQYDVQQGEwJVUzELMAkGA1UECAwCTlkxGzAZ\nBgNVBAoMEkxvY2FsIERldmVsb3BlbWVudDEbMBkGA1UEBwwSTG9jYWwgRGV2ZWxv\ncGVtZW50MRowGAYDVQQDDBEqLmNhZGR5LmxvY2FsaG9zdDEbMBkGA1UECwwSTG9j\nYWwgRGV2ZWxvcGVtZW50MSAwHgYJKoZIhvcNAQkBFhFhZG1pbkBjYWRkeS5sb2Nh\nbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJngfeirQkWaU8ihgIC5\nSKpRQX/3koRjljDK/oCbhLs+wg592kIwVv06l7+mn7NSaNBloabjuA1GqyLRsNLL\nptrv0HvXa5qLx28+icsb2Ny3dJnQaj9w9PwjxQ1qZqEJfWRH1D8Vz9AmB+QSV/Gu\n8e8alGFewlYZVfH1kbxoTT6QorF37TeA3bh1fgKFtzsGYKswcaZNdDBBHzLunCKZ\nHU6U6L45hm+yLADj3mmDLafUeiVOt6MRLLoSD1eLRVSXGrNo+brJ87zkZntI9+W1\nJxOBoXtZCwka7k2DlAtLihsrmBZA2ZC9yVeu/SQy3qb3iCNnTFTCyAnWeTCr6Tcq\n6w8CAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAOWfXqpAmD4C3wGiMeZAeaaS4hDAR\n+JmN+avPDA6F6Bq7DB4NJuIwVUlaDL2s07w5VJJtW52aZVKoBlgHR5yG/XUli6J7\nYUJRmdQJvHUSu26cmKvyoOaTrEYbmvtGICWtZc8uTlMf9wQZbJA4KyxTgEQJDXsZ\nB2XFe+wVdhAgEpobYDROi+l/p8TL5z3U24LpwVTcJy5sEZVv7Wfs886IyxU8ORt8\nVZNcDiH6V53OIGeiufIhia/mPe6jbLntfGZfIFxtCcow4IA/lTy1ned7K5fmvNNb\nZilxOQUk+wVK8genjdrZVAnAxsYLHJIb5yf9O7rr6fWciVMF3a0k5uNK1w==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "caddytest/caddy.localhost.key",
    "content": "-----BEGIN RSA PRIVATE KEY-----\nMIIEogIBAAKCAQEAmeB96KtCRZpTyKGAgLlIqlFBf/eShGOWMMr+gJuEuz7CDn3a\nQjBW/TqXv6afs1Jo0GWhpuO4DUarItGw0sum2u/Qe9drmovHbz6JyxvY3Ld0mdBq\nP3D0/CPFDWpmoQl9ZEfUPxXP0CYH5BJX8a7x7xqUYV7CVhlV8fWRvGhNPpCisXft\nN4DduHV+AoW3OwZgqzBxpk10MEEfMu6cIpkdTpTovjmGb7IsAOPeaYMtp9R6JU63\noxEsuhIPV4tFVJcas2j5usnzvORme0j35bUnE4Ghe1kLCRruTYOUC0uKGyuYFkDZ\nkL3JV679JDLepveII2dMVMLICdZ5MKvpNyrrDwIDAQABAoIBAFcPK01zb6hfm12c\n+k5aBiHOnUdgc/YRPg1XHEz5MEycQkDetZjTLrRQ7UBSbnKPgpu9lIsOtbhVLkgh\n6XAqJroiCou2oruqr+hhsqZGmBiwdvj7cNF6ADGTr05az7v22YneFdinZ481pStF\nsZocx+bm2+KHMV5zMSwXKyA0xtdJLxs2yklniDBxSZRppgppq1pDPprP5DkgKPfe\n3ekUmbQd5bHmivhW8ItbJLuf82XSsMBZ9ZhKiKIlWlbKAgiSV3SqnUQb5fi7l8hG\nyYZxbuCUIGFwKmEpUBBt/nyxrOlMiNtDh9JhrPmijTV3slq70pCLwLL/Ai2aeear\nEVA5VhkCgYEAyAmxfPqc2P7BsDAp67/sA7OEPso9qM4WyuWiVdlX2gb9TLNLYbPX\nKk/UmpAIVzpoTAGY5Zp3wkvdD/ou8uUQsE8ioNn4S1a4G9XURH1wVhcEbUiAKI1S\nQVBH9B/Pj3eIp5OTKwob0Wj7DNdxoH7ed/Eok0EaTWzOA8pCWADKv/MCgYEAxOzY\nYsX7Nl+eyZr2+9unKyeAK/D1DCT/o99UUAHx72/xaBVP/06cfzpvKBNcF9iYc+fq\nR1yIUIrDRoSmYKBq+Kb3+nOg1nrqih/NBTokbTiI4Q+/30OQt0Al1e7y9iNKqV8H\njYZItzluGNrWKedZbATwBwbVCY2jnNl6RMDnS3UCgYBxj3cwQUHLuoyQjjcuO80r\nqLzZvIxWiXDNDKIk5HcIMlGYOmz/8U2kGp/SgxQJGQJeq8V2C0QTjGfaCyieAcaA\noNxCvptDgd6RBsoze5bLeNOtiqwe2WOp6n5+q5R0mOJ+Z7vzghCayGNFPgWmnH+F\nTeW/+wSIkc0+v5L8TK7NWwKBgBrlWlyLO9deUfqpHqihhICBYaEexOlGuF+yZfqT\neW7BdFBJ8OYm33sFCR+JHV/oZlIWT8o1Wizd9vPPtEWoQ1P4wg/D8Si6GwSIeWEI\nYudD/HX4x7T/rmlI6qIAg9CYW18sqoRq3c2gm2fro6qPfYgiWIItLbWjUcBfd7Ki\nQjTtAoGARKdRv3jMWL84rlEx1nBRgL3pe9Dt+Uxzde2xT3ZeF+5Hp9NfU01qE6M6\n1I6H64smqpetlsXmCEVKwBemP3pJa6avLKgIYiQvHAD/v4rs9mqgy1RTqtYyGNhR\n1A/6dKkbiZ6wzePLLPasXVZxSKEviXf5gJooqumQVSVhCswyCZ0=\n-----END RSA PRIVATE KEY-----\n"
  },
  {
    "path": "caddytest/caddytest.go",
    "content": "package caddytest\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"log\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/cookiejar\"\n\t\"os\"\n\t\"path\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/aryann/difflib\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t// plug in Caddy modules here\n\t_ \"github.com/caddyserver/caddy/v2/modules/standard\"\n)\n\n// Config store any configuration required to make the tests run\ntype Config struct {\n\t// Port we expect caddy to listening on\n\tAdminPort int\n\t// Certificates we expect to be loaded before attempting to run the tests\n\tCertificates []string\n\t// TestRequestTimeout is the time to wait for a http request to\n\tTestRequestTimeout time.Duration\n\t// LoadRequestTimeout is the time to wait for the config to be loaded against the caddy server\n\tLoadRequestTimeout time.Duration\n}\n\n// Default testing values\nvar Default = Config{\n\tAdminPort:          2999, // different from what a real server also running on a developer's machine might be\n\tCertificates:       []string{\"/caddy.localhost.crt\", \"/caddy.localhost.key\"},\n\tTestRequestTimeout: 5 * time.Second,\n\tLoadRequestTimeout: 5 * time.Second,\n}\n\nvar (\n\tmatchKey  = regexp.MustCompile(`(/[\\w\\d\\.]+\\.key)`)\n\tmatchCert = regexp.MustCompile(`(/[\\w\\d\\.]+\\.crt)`)\n)\n\n// Tester represents an instance of a test client.\ntype Tester struct {\n\tClient       *http.Client\n\tconfigLoaded bool\n\tt            testing.TB\n\tconfig       Config\n}\n\n// NewTester will create a new testing client with an attached cookie jar\nfunc NewTester(t testing.TB) *Tester {\n\tjar, err := cookiejar.New(nil)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create cookiejar: %s\", err)\n\t}\n\n\treturn &Tester{\n\t\tClient: &http.Client{\n\t\t\tTransport: 
CreateTestingTransport(),\n\t\t\tJar:       jar,\n\t\t\tTimeout:   Default.TestRequestTimeout,\n\t\t},\n\t\tconfigLoaded: false,\n\t\tt:            t,\n\t\tconfig:       Default,\n\t}\n}\n\n// WithDefaultOverrides overrides the default test configuration with the provided values.\nfunc (tc *Tester) WithDefaultOverrides(overrides Config) *Tester {\n\tif overrides.AdminPort != 0 {\n\t\ttc.config.AdminPort = overrides.AdminPort\n\t}\n\tif len(overrides.Certificates) > 0 {\n\t\ttc.config.Certificates = overrides.Certificates\n\t}\n\tif overrides.TestRequestTimeout != 0 {\n\t\ttc.config.TestRequestTimeout = overrides.TestRequestTimeout\n\t\ttc.Client.Timeout = overrides.TestRequestTimeout\n\t}\n\tif overrides.LoadRequestTimeout != 0 {\n\t\ttc.config.LoadRequestTimeout = overrides.LoadRequestTimeout\n\t}\n\n\treturn tc\n}\n\ntype configLoadError struct {\n\tResponse string\n}\n\nfunc (e configLoadError) Error() string { return e.Response }\n\nfunc timeElapsed(start time.Time, name string) {\n\telapsed := time.Since(start)\n\tlog.Printf(\"%s took %s\", name, elapsed)\n}\n\n// InitServer configures the server with a configuration of a specific\n// type. The configType must be either \"json\" or the adapter type.\nfunc (tc *Tester) InitServer(rawConfig string, configType string) {\n\tif err := tc.initServer(rawConfig, configType); err != nil {\n\t\ttc.t.Logf(\"failed to load config: %s\", err)\n\t\ttc.t.Fail()\n\t}\n\tif err := tc.ensureConfigRunning(rawConfig, configType); err != nil {\n\t\ttc.t.Logf(\"failed ensuring config is running: %s\", err)\n\t\ttc.t.Fail()\n\t}\n}\n\n// initServer configures the server with a configuration of a specific\n// type. 
The configType must be either \"json\" or the adapter type.\nfunc (tc *Tester) initServer(rawConfig string, configType string) error {\n\tif testing.Short() {\n\t\ttc.t.SkipNow()\n\t\treturn nil\n\t}\n\n\terr := validateTestPrerequisites(tc)\n\tif err != nil {\n\t\ttc.t.Skipf(\"skipping tests because integration prerequisites failed: %s\", err)\n\t\treturn nil\n\t}\n\n\ttc.t.Cleanup(func() {\n\t\tif tc.t.Failed() && tc.configLoaded {\n\t\t\tres, err := http.Get(fmt.Sprintf(\"http://localhost:%d/config/\", tc.config.AdminPort))\n\t\t\tif err != nil {\n\t\t\t\ttc.t.Log(\"unable to read the current config\")\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdefer res.Body.Close()\n\t\t\tbody, _ := io.ReadAll(res.Body)\n\n\t\t\tvar out bytes.Buffer\n\t\t\t_ = json.Indent(&out, body, \"\", \"  \")\n\t\t\ttc.t.Logf(\"----------- failed with config -----------\\n%s\", out.String())\n\t\t}\n\t})\n\n\trawConfig = prependCaddyFilePath(rawConfig)\n\t// normalize JSON config\n\tif configType == \"json\" {\n\t\ttc.t.Logf(\"Before: %s\", rawConfig)\n\t\tvar conf any\n\t\tif err := json.Unmarshal([]byte(rawConfig), &conf); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tc, err := json.Marshal(conf)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trawConfig = string(c)\n\t\ttc.t.Logf(\"After: %s\", rawConfig)\n\t}\n\tclient := &http.Client{\n\t\tTimeout: tc.config.LoadRequestTimeout,\n\t}\n\tstart := time.Now()\n\treq, err := http.NewRequest(\"POST\", fmt.Sprintf(\"http://localhost:%d/load\", tc.config.AdminPort), strings.NewReader(rawConfig))\n\tif err != nil {\n\t\ttc.t.Errorf(\"failed to create request. %s\", err)\n\t\treturn err\n\t}\n\n\tif configType == \"json\" {\n\t\treq.Header.Add(\"Content-Type\", \"application/json\")\n\t} else {\n\t\treq.Header.Add(\"Content-Type\", \"text/\"+configType)\n\t}\n\n\tres, err := client.Do(req) //nolint:gosec // no SSRF because URL is hard-coded to localhost, and port comes from config\n\tif err != nil {\n\t\ttc.t.Errorf(\"unable to contact caddy server. 
%s\", err)\n\t\treturn err\n\t}\n\ttimeElapsed(start, \"caddytest: config load time\")\n\n\tdefer res.Body.Close()\n\tbody, err := io.ReadAll(res.Body)\n\tif err != nil {\n\t\ttc.t.Errorf(\"unable to read response. %s\", err)\n\t\treturn err\n\t}\n\n\tif res.StatusCode != 200 {\n\t\treturn configLoadError{Response: string(body)}\n\t}\n\n\ttc.configLoaded = true\n\treturn nil\n}\n\nfunc (tc *Tester) ensureConfigRunning(rawConfig string, configType string) error {\n\texpectedBytes := []byte(prependCaddyFilePath(rawConfig))\n\tif configType != \"json\" {\n\t\tadapter := caddyconfig.GetAdapter(configType)\n\t\tif adapter == nil {\n\t\t\treturn fmt.Errorf(\"adapter of config type is missing: %s\", configType)\n\t\t}\n\t\texpectedBytes, _, _ = adapter.Adapt([]byte(rawConfig), nil)\n\t}\n\n\tvar expected any\n\terr := json.Unmarshal(expectedBytes, &expected)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tclient := &http.Client{\n\t\tTimeout: tc.config.LoadRequestTimeout,\n\t}\n\n\tfetchConfig := func(client *http.Client) any {\n\t\tresp, err := client.Get(fmt.Sprintf(\"http://localhost:%d/config/\", tc.config.AdminPort))\n\t\tif err != nil {\n\t\t\treturn nil\n\t\t}\n\t\tdefer resp.Body.Close()\n\t\tactualBytes, err := io.ReadAll(resp.Body)\n\t\tif err != nil {\n\t\t\treturn nil\n\t\t}\n\t\tvar actual any\n\t\terr = json.Unmarshal(actualBytes, &actual)\n\t\tif err != nil {\n\t\t\treturn nil\n\t\t}\n\t\treturn actual\n\t}\n\n\tfor retries := 10; retries > 0; retries-- {\n\t\tif reflect.DeepEqual(expected, fetchConfig(client)) {\n\t\t\treturn nil\n\t\t}\n\t\ttime.Sleep(1 * time.Second)\n\t}\n\ttc.t.Errorf(\"POSTed configuration isn't active\")\n\treturn errors.New(\"EnsureConfigRunning: POSTed configuration isn't active\")\n}\n\nconst initConfig = `{\n\tadmin localhost:%d\n}\n`\n\n// validateTestPrerequisites ensures the certificates are available in the\n// designated path and Caddy sub-process is running.\nfunc validateTestPrerequisites(tc *Tester) error {\n\t// check 
certificates are found\n\tfor _, certName := range tc.config.Certificates {\n\t\tif _, err := os.Stat(getIntegrationDir() + certName); errors.Is(err, fs.ErrNotExist) {\n\t\t\treturn fmt.Errorf(\"caddy integration test certificates (%s) not found\", certName)\n\t\t}\n\t}\n\n\tif isCaddyAdminRunning(tc) != nil {\n\t\t// setup the init config file, and set the cleanup afterwards\n\t\tf, err := os.CreateTemp(\"\", \"\")\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ttc.t.Cleanup(func() {\n\t\t\tos.Remove(f.Name()) //nolint:gosec // false positive, filename comes from std lib, no path traversal\n\t\t})\n\t\tif _, err := fmt.Fprintf(f, initConfig, tc.config.AdminPort); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// start inprocess caddy server\n\t\tos.Args = []string{\"caddy\", \"run\", \"--config\", f.Name(), \"--adapter\", \"caddyfile\"}\n\t\tgo func() {\n\t\t\tcaddycmd.Main()\n\t\t}()\n\n\t\t// wait for caddy to start serving the initial config\n\t\tfor retries := 10; retries > 0 && isCaddyAdminRunning(tc) != nil; retries-- {\n\t\t\ttime.Sleep(1 * time.Second)\n\t\t}\n\t}\n\n\t// one more time to return the error\n\treturn isCaddyAdminRunning(tc)\n}\n\nfunc isCaddyAdminRunning(tc *Tester) error {\n\t// assert that caddy is running\n\tclient := &http.Client{\n\t\tTimeout: tc.config.LoadRequestTimeout,\n\t}\n\tresp, err := client.Get(fmt.Sprintf(\"http://localhost:%d/config/\", tc.config.AdminPort))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"caddy integration test caddy server not running. 
Expected to be listening on localhost:%d\", tc.config.AdminPort)\n\t}\n\tresp.Body.Close()\n\n\treturn nil\n}\n\nfunc getIntegrationDir() string {\n\t_, filename, _, ok := runtime.Caller(1)\n\tif !ok {\n\t\tpanic(\"unable to determine the current file path\")\n\t}\n\n\treturn path.Dir(filename)\n}\n\n// use the convention to replace /[certificatename].[crt|key] with the full path\n// this helps reduce the noise in test configurations and also allows this\n// to run in any path\nfunc prependCaddyFilePath(rawConfig string) string {\n\tr := matchKey.ReplaceAllString(rawConfig, getIntegrationDir()+\"$1\")\n\tr = matchCert.ReplaceAllString(r, getIntegrationDir()+\"$1\")\n\treturn r\n}\n\n// CreateTestingTransport creates a testing transport that forces all dialed connections to happen locally\nfunc CreateTestingTransport() *http.Transport {\n\tdialer := net.Dialer{\n\t\tTimeout:   5 * time.Second,\n\t\tKeepAlive: 5 * time.Second,\n\t\tDualStack: true,\n\t}\n\n\tdialContext := func(ctx context.Context, network, addr string) (net.Conn, error) {\n\t\tparts := strings.Split(addr, \":\")\n\t\tdestAddr := fmt.Sprintf(\"127.0.0.1:%s\", parts[1])\n\t\tlog.Printf(\"caddytest: redirecting the dialer from %s to %s\", addr, destAddr)\n\t\treturn dialer.DialContext(ctx, network, destAddr)\n\t}\n\n\treturn &http.Transport{\n\t\tProxy:                 http.ProxyFromEnvironment,\n\t\tDialContext:           dialContext,\n\t\tForceAttemptHTTP2:     true,\n\t\tMaxIdleConns:          100,\n\t\tIdleConnTimeout:       90 * time.Second,\n\t\tTLSHandshakeTimeout:   5 * time.Second,\n\t\tExpectContinueTimeout: 1 * time.Second,\n\t\tTLSClientConfig:       &tls.Config{InsecureSkipVerify: true}, //nolint:gosec\n\t}\n}\n\n// AssertLoadError will load a config and expect an error\nfunc AssertLoadError(t *testing.T, rawConfig string, configType string, expectedError string) {\n\tt.Helper()\n\n\ttc := NewTester(t)\n\n\terr := tc.initServer(rawConfig, configType)\n\tif !strings.Contains(err.Error(), 
expectedError) {\n\t\tt.Errorf(\"expected error \\\"%s\\\" but got \\\"%s\\\"\", expectedError, err.Error())\n\t}\n}\n\n// AssertRedirect makes a request and asserts the redirection happens\nfunc (tc *Tester) AssertRedirect(requestURI string, expectedToLocation string, expectedStatusCode int) *http.Response {\n\ttc.t.Helper()\n\n\tredirectPolicyFunc := func(req *http.Request, via []*http.Request) error {\n\t\treturn http.ErrUseLastResponse\n\t}\n\n\t// using the existing client, we override the check redirect policy for this test\n\told := tc.Client.CheckRedirect\n\ttc.Client.CheckRedirect = redirectPolicyFunc\n\tdefer func() { tc.Client.CheckRedirect = old }()\n\n\tresp, err := tc.Client.Get(requestURI)\n\tif err != nil {\n\t\ttc.t.Errorf(\"failed to call server %s\", err)\n\t\treturn nil\n\t}\n\n\tif expectedStatusCode != resp.StatusCode {\n\t\ttc.t.Errorf(\"requesting \\\"%s\\\" expected status code: %d but got %d\", requestURI, expectedStatusCode, resp.StatusCode)\n\t}\n\n\tloc, err := resp.Location()\n\tif err != nil {\n\t\ttc.t.Errorf(\"requesting \\\"%s\\\" expected location: \\\"%s\\\" but got error: %s\", requestURI, expectedToLocation, err)\n\t}\n\tif loc == nil && expectedToLocation != \"\" {\n\t\ttc.t.Errorf(\"requesting \\\"%s\\\" expected a Location header, but didn't get one\", requestURI)\n\t}\n\tif loc != nil {\n\t\tif expectedToLocation != loc.String() {\n\t\t\ttc.t.Errorf(\"requesting \\\"%s\\\" expected location: \\\"%s\\\" but got \\\"%s\\\"\", requestURI, expectedToLocation, loc.String())\n\t\t}\n\t}\n\n\treturn resp\n}\n\n// CompareAdapt adapts a config and then compares it against an expected result\nfunc CompareAdapt(t testing.TB, filename, rawConfig string, adapterName string, expectedResponse string) bool {\n\tt.Helper()\n\n\tcfgAdapter := caddyconfig.GetAdapter(adapterName)\n\tif cfgAdapter == nil {\n\t\tt.Logf(\"unrecognized config adapter '%s'\", adapterName)\n\t\treturn false\n\t}\n\n\toptions := make(map[string]any)\n\n\tresult, 
warnings, err := cfgAdapter.Adapt([]byte(rawConfig), options)\n\tif err != nil {\n\t\tt.Logf(\"adapting config using %s adapter: %v\", adapterName, err)\n\t\treturn false\n\t}\n\n\t// prettify results to keep tests human-manageable\n\tvar prettyBuf bytes.Buffer\n\terr = json.Indent(&prettyBuf, result, \"\", \"\\t\")\n\tif err != nil {\n\t\treturn false\n\t}\n\tresult = prettyBuf.Bytes()\n\n\tif len(warnings) > 0 {\n\t\tfor _, w := range warnings {\n\t\t\tt.Logf(\"warning: %s:%d: %s: %s\", filename, w.Line, w.Directive, w.Message)\n\t\t}\n\t}\n\n\tdiff := difflib.Diff(\n\t\tstrings.Split(expectedResponse, \"\\n\"),\n\t\tstrings.Split(string(result), \"\\n\"))\n\n\t// scan for failure\n\tfailed := false\n\tfor _, d := range diff {\n\t\tif d.Delta != difflib.Common {\n\t\t\tfailed = true\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif failed {\n\t\tfor _, d := range diff {\n\t\t\tswitch d.Delta {\n\t\t\tcase difflib.Common:\n\t\t\t\tfmt.Printf(\"  %s\\n\", d.Payload)\n\t\t\tcase difflib.LeftOnly:\n\t\t\t\tfmt.Printf(\" - %s\\n\", d.Payload)\n\t\t\tcase difflib.RightOnly:\n\t\t\t\tfmt.Printf(\" + %s\\n\", d.Payload)\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\treturn true\n}\n\n// AssertAdapt adapts a config and then tests it against an expected result\nfunc AssertAdapt(t testing.TB, rawConfig string, adapterName string, expectedResponse string) {\n\tt.Helper()\n\n\tok := CompareAdapt(t, \"Caddyfile\", rawConfig, adapterName, expectedResponse)\n\tif !ok {\n\t\tt.Fail()\n\t}\n}\n\n// Generic request functions\n\nfunc applyHeaders(t testing.TB, req *http.Request, requestHeaders []string) {\n\trequestContentType := \"\"\n\tfor _, requestHeader := range requestHeaders {\n\t\tarr := strings.SplitAfterN(requestHeader, \":\", 2)\n\t\tk := strings.TrimRight(arr[0], \":\")\n\t\tv := strings.TrimSpace(arr[1])\n\t\tif k == \"Content-Type\" {\n\t\t\trequestContentType = v\n\t\t}\n\t\tt.Logf(\"Request header: %s => %s\", k, v)\n\t\treq.Header.Set(k, v)\n\t}\n\n\tif requestContentType == \"\" 
{\n\t\tt.Logf(\"Content-Type header not provided\")\n\t}\n}\n\n// AssertResponseCode will execute the request and verify the status code, returning the response for additional assertions\nfunc (tc *Tester) AssertResponseCode(req *http.Request, expectedStatusCode int) *http.Response {\n\ttc.t.Helper()\n\n\tresp, err := tc.Client.Do(req) //nolint:gosec // no SSRFs demonstrated\n\tif err != nil {\n\t\ttc.t.Fatalf(\"failed to call server %s\", err)\n\t}\n\n\tif expectedStatusCode != resp.StatusCode {\n\t\ttc.t.Errorf(\"requesting \\\"%s\\\" expected status code: %d but got %d\", req.URL.RequestURI(), expectedStatusCode, resp.StatusCode)\n\t}\n\n\treturn resp\n}\n\n// AssertResponse executes a request and asserts the status code and that the body matches the expected string\nfunc (tc *Tester) AssertResponse(req *http.Request, expectedStatusCode int, expectedBody string) (*http.Response, string) {\n\ttc.t.Helper()\n\n\tresp := tc.AssertResponseCode(req, expectedStatusCode)\n\n\tdefer resp.Body.Close()\n\tbytes, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\ttc.t.Fatalf(\"unable to read the response body %s\", err)\n\t}\n\n\tbody := string(bytes)\n\n\tif body != expectedBody {\n\t\ttc.t.Errorf(\"requesting \\\"%s\\\" expected response body \\\"%s\\\" but got \\\"%s\\\"\", req.RequestURI, expectedBody, body)\n\t}\n\n\treturn resp, body\n}\n\n// Verb specific test functions\n\n// AssertGetResponse GETs a URI and expects a statusCode and body text\nfunc (tc *Tester) AssertGetResponse(requestURI string, expectedStatusCode int, expectedBody string) (*http.Response, string) {\n\ttc.t.Helper()\n\n\treq, err := http.NewRequest(\"GET\", requestURI, nil)\n\tif err != nil {\n\t\ttc.t.Fatalf(\"unable to create request %s\", err)\n\t}\n\n\treturn tc.AssertResponse(req, expectedStatusCode, expectedBody)\n}\n\n// AssertDeleteResponse DELETEs a URI and expects a statusCode and body text\nfunc (tc *Tester) AssertDeleteResponse(requestURI string, expectedStatusCode int, expectedBody string) (*http.Response, 
string) {\n\ttc.t.Helper()\n\n\treq, err := http.NewRequest(\"DELETE\", requestURI, nil)\n\tif err != nil {\n\t\ttc.t.Fatalf(\"unable to create request %s\", err)\n\t}\n\n\treturn tc.AssertResponse(req, expectedStatusCode, expectedBody)\n}\n\n// AssertPostResponseBody POST to a URI and assert the response code and body\nfunc (tc *Tester) AssertPostResponseBody(requestURI string, requestHeaders []string, requestBody *bytes.Buffer, expectedStatusCode int, expectedBody string) (*http.Response, string) {\n\ttc.t.Helper()\n\n\treq, err := http.NewRequest(\"POST\", requestURI, requestBody)\n\tif err != nil {\n\t\ttc.t.Errorf(\"failed to create request %s\", err)\n\t\treturn nil, \"\"\n\t}\n\n\tapplyHeaders(tc.t, req, requestHeaders)\n\n\treturn tc.AssertResponse(req, expectedStatusCode, expectedBody)\n}\n\n// AssertPutResponseBody PUT to a URI and assert the response code and body\nfunc (tc *Tester) AssertPutResponseBody(requestURI string, requestHeaders []string, requestBody *bytes.Buffer, expectedStatusCode int, expectedBody string) (*http.Response, string) {\n\ttc.t.Helper()\n\n\treq, err := http.NewRequest(\"PUT\", requestURI, requestBody)\n\tif err != nil {\n\t\ttc.t.Errorf(\"failed to create request %s\", err)\n\t\treturn nil, \"\"\n\t}\n\n\tapplyHeaders(tc.t, req, requestHeaders)\n\n\treturn tc.AssertResponse(req, expectedStatusCode, expectedBody)\n}\n\n// AssertPatchResponseBody PATCH to a URI and assert the response code and body\nfunc (tc *Tester) AssertPatchResponseBody(requestURI string, requestHeaders []string, requestBody *bytes.Buffer, expectedStatusCode int, expectedBody string) (*http.Response, string) {\n\ttc.t.Helper()\n\n\treq, err := http.NewRequest(\"PATCH\", requestURI, requestBody)\n\tif err != nil {\n\t\ttc.t.Errorf(\"failed to create request %s\", err)\n\t\treturn nil, \"\"\n\t}\n\n\tapplyHeaders(tc.t, req, requestHeaders)\n\n\treturn tc.AssertResponse(req, expectedStatusCode, expectedBody)\n}\n"
  },
  {
    "path": "caddytest/caddytest_test.go",
    "content": "package caddytest\n\nimport (\n\t\"bytes\"\n\t\"net/http\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestReplaceCertificatePaths(t *testing.T) {\n\trawConfig := `a.caddy.localhost:9443 {\n\t\ttls /caddy.localhost.crt /caddy.localhost.key {\n\t\t}\n\n\t\tredir / https://b.caddy.localhost:9443/version 301\n    \n\t\trespond /version 200 {\n\t\t  body \"hello from a.caddy.localhost\"\n\t\t}\t\n\t  }`\n\n\tr := prependCaddyFilePath(rawConfig)\n\n\tif !strings.Contains(r, getIntegrationDir()+\"/caddy.localhost.crt\") {\n\t\tt.Error(\"expected the /caddy.localhost.crt to be expanded to include the full path\")\n\t}\n\n\tif !strings.Contains(r, getIntegrationDir()+\"/caddy.localhost.key\") {\n\t\tt.Error(\"expected the /caddy.localhost.crt to be expanded to include the full path\")\n\t}\n\n\tif !strings.Contains(r, \"https://b.caddy.localhost:9443/version\") {\n\t\tt.Error(\"expected redirect uri to be unchanged\")\n\t}\n}\n\nfunc TestLoadUnorderedJSON(t *testing.T) {\n\ttester := NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"logging\": {\n\t\t\t\"logs\": {\n\t\t\t\t\"default\": {\n\t\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\t\"writer\": {\n\t\t\t\t\t\t\"output\": \"stdout\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"sStdOutLogs\": {\n\t\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\t\"writer\": {\n\t\t\t\t\t\t\"output\": \"stdout\"\n\t\t\t\t\t},\n\t\t\t\t\t\"include\": [\n\t\t\t\t\t\t\"http.*\",\n\t\t\t\t\t\t\"admin.*\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"sFileLogs\": {\n\t\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\t\"writer\": {\n\t\t\t\t\t\t\"output\": \"stdout\"\n\t\t\t\t\t},\n\t\t\t\t\t\"include\": [\n\t\t\t\t\t\t\"http.*\",\n\t\t\t\t\t\t\"admin.*\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\" : {\n\t\t\t\t  \"local\" : {\n\t\t\t\t\t\"install_trust\": false\n\t\t\t\t  }\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 
9080,\n\t\t\t\t\"https_port\": 9443,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"s_server\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\t\"default_logger_name\": \"sStdOutLogs\",\n\t\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\t\"localhost\": \"sStdOutLogs\",\n\t\t\t\t\t\t\t\t\"127.0.0.1\": \"sFileLogs\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n  `, \"json\")\n\treq, err := http.NewRequest(http.MethodGet, \"http://localhost:9080/\", nil)\n\tif err != nil {\n\t\tt.Fail()\n\t\treturn\n\t}\n\ttester.AssertResponseCode(req, 200)\n}\n\nfunc TestCheckID(t *testing.T) {\n\ttester := NewTester(t)\n\ttester.InitServer(`{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 9080,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"s_server\": {\n\t\t\t\t\t\t\"@id\": \"s_server\",\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\theaders := []string{\"Content-Type:application/json\"}\n\tsServer1 := 
[]byte(`{\"@id\":\"s_server\",\"listen\":[\":9080\"],\"routes\":[{\"@id\":\"route1\",\"handle\":[{\"handler\":\"static_response\",\"body\":\"Hello 2\"}]}]}`)\n\n\t// PUT to an existing ID should fail with a 409 conflict\n\ttester.AssertPutResponseBody(\n\t\t\"http://localhost:2999/id/s_server\",\n\t\theaders,\n\t\tbytes.NewBuffer(sServer1),\n\t\t409,\n\t\t`{\"error\":\"[/config/apps/http/servers/s_server] key already exists: s_server\"}`+\"\\n\")\n\n\t// POST replaces the object fully\n\ttester.AssertPostResponseBody(\n\t\t\"http://localhost:2999/id/s_server\",\n\t\theaders,\n\t\tbytes.NewBuffer(sServer1),\n\t\t200,\n\t\t\"\")\n\n\t// Verify the server is running the new route\n\ttester.AssertGetResponse(\n\t\t\"http://localhost:9080/\",\n\t\t200,\n\t\t\"Hello 2\")\n\n\t// Update the existing route to ensure IDs are handled correctly when replaced\n\ttester.AssertPostResponseBody(\n\t\t\"http://localhost:2999/id/s_server\",\n\t\theaders,\n\t\tbytes.NewBuffer([]byte(`{\"@id\":\"s_server\",\"listen\":[\":9080\"],\"routes\":[{\"@id\":\"route1\",\"handle\":[{\"handler\":\"static_response\",\"body\":\"Hello2\"}],\"match\":[{\"path\":[\"/route_1/*\"]}]}]}`)),\n\t\t200,\n\t\t\"\")\n\n\tsServer2 := []byte(`{\"@id\":\"s_server\",\"listen\":[\":9080\"],\"routes\":[{\"@id\":\"route1\",\"handle\":[{\"handler\":\"static_response\",\"body\":\"Hello2\"}],\"match\":[{\"path\":[\"/route_1/*\"]}]}]}`)\n\n\t// Identical patch should succeed and return 200 (config is unchanged branch)\n\ttester.AssertPatchResponseBody(\n\t\t\"http://localhost:2999/id/s_server\",\n\t\theaders,\n\t\tbytes.NewBuffer(sServer2),\n\t\t200,\n\t\t\"\")\n\n\troute2 := []byte(`{\"@id\":\"route2\",\"handle\": [{\"handler\": \"static_response\",\"body\": \"route2\"}],\"match\":[{\"path\":[\"/route_2/*\"]}]}`)\n\n\t// Put a new route2 object before the route1 object due to the path of /id/route1\n\t// Being translated to: 
/config/apps/http/servers/s_server/routes/0\n\ttester.AssertPutResponseBody(\n\t\t\"http://localhost:2999/id/route1\",\n\t\theaders,\n\t\tbytes.NewBuffer(route2),\n\t\t200,\n\t\t\"\")\n\n\t// Verify that the whole config looks correct, now containing both route1 and route2\n\ttester.AssertGetResponse(\n\t\t\"http://localhost:2999/config/\",\n\t\t200,\n\t\t`{\"admin\":{\"listen\":\"localhost:2999\"},\"apps\":{\"http\":{\"http_port\":9080,\"servers\":{\"s_server\":{\"@id\":\"s_server\",\"listen\":[\":9080\"],\"routes\":[{\"@id\":\"route2\",\"handle\":[{\"body\":\"route2\",\"handler\":\"static_response\"}],\"match\":[{\"path\":[\"/route_2/*\"]}]},{\"@id\":\"route1\",\"handle\":[{\"body\":\"Hello2\",\"handler\":\"static_response\"}],\"match\":[{\"path\":[\"/route_1/*\"]}]}]}}}}}`+\"\\n\")\n\n\t// Try to add another copy of route2 using POST to test duplicate ID handling\n\t// Since the first route2 ended up at array index 0, and we are appending to the array, the index for the new element would be 2\n\ttester.AssertPostResponseBody(\n\t\t\"http://localhost:2999/id/route2\",\n\t\theaders,\n\t\tbytes.NewBuffer(route2),\n\t\t400,\n\t\t`{\"error\":\"indexing config: duplicate ID 'route2' found at /config/apps/http/servers/s_server/routes/0 and /config/apps/http/servers/s_server/routes/2\"}`+\"\\n\")\n\n\t// Use PATCH to modify an existing object successfully\n\ttester.AssertPatchResponseBody(\n\t\t\"http://localhost:2999/id/route1\",\n\t\theaders,\n\t\tbytes.NewBuffer([]byte(`{\"@id\":\"route1\",\"handle\":[{\"handler\":\"static_response\",\"body\":\"route1\"}],\"match\":[{\"path\":[\"/route_1/*\"]}]}`)),\n\t\t200,\n\t\t\"\")\n\n\t// Verify the PATCH updated the server state\n\ttester.AssertGetResponse(\n\t\t\"http://localhost:9080/route_1/\",\n\t\t200,\n\t\t\"route1\")\n}\n"
  },
  {
    "path": "caddytest/integration/acme_test.go",
    "content": "package integration\n\nimport (\n\t\"context\"\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/rand\"\n\t\"fmt\"\n\t\"log/slog\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/mholt/acmez/v3\"\n\t\"github.com/mholt/acmez/v3/acme\"\n\tsmallstepacme \"github.com/smallstep/certificates/acme\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/exp/zapslog\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nconst acmeChallengePort = 9081\n\n// Test the basic functionality of Caddy's ACME server\nfunc TestACMEServerWithDefaults(t *testing.T) {\n\tctx := context.Background()\n\tlogger, err := zap.NewDevelopment()\n\tif err != nil {\n\t\tt.Error(err)\n\t\treturn\n\t}\n\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tlocal_certs\n\t}\n\tacme.localhost {\n\t\tacme_server\n\t}\n  `, \"caddyfile\")\n\n\tclient := acmez.Client{\n\t\tClient: &acme.Client{\n\t\t\tDirectory:  \"https://acme.localhost:9443/acme/local/directory\",\n\t\t\tHTTPClient: tester.Client,\n\t\t\tLogger:     slog.New(zapslog.NewHandler(logger.Core(), zapslog.WithName(\"acmez\"))),\n\t\t},\n\t\tChallengeSolvers: map[string]acmez.Solver{\n\t\t\tacme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},\n\t\t},\n\t}\n\n\taccountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating account key: %v\", err)\n\t}\n\taccount := acme.Account{\n\t\tContact:              []string{\"mailto:you@example.com\"},\n\t\tTermsOfServiceAgreed: true,\n\t\tPrivateKey:           accountPrivateKey,\n\t}\n\taccount, err = client.NewAccount(ctx, account)\n\tif err != nil {\n\t\tt.Errorf(\"new account: %v\", err)\n\t\treturn\n\t}\n\n\t// Every certificate needs a key.\n\tcertPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil 
{\n\t\tt.Errorf(\"generating certificate key: %v\", err)\n\t\treturn\n\t}\n\n\tcerts, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{\"localhost\"})\n\tif err != nil {\n\t\tt.Errorf(\"obtaining certificate: %v\", err)\n\t\treturn\n\t}\n\n\t// ACME servers should usually give you the entire certificate chain\n\t// in PEM format, and sometimes even alternate chains! It's up to you\n\t// which one(s) to store and use, but whatever you do, be sure to\n\t// store the certificate and key somewhere safe and secure, i.e. don't\n\t// lose them!\n\tfor _, cert := range certs {\n\t\tt.Logf(\"Certificate %q:\\n%s\\n\\n\", cert.URL, cert.ChainPEM)\n\t}\n}\n\nfunc TestACMEServerWithMismatchedChallenges(t *testing.T) {\n\tctx := context.Background()\n\tlogger := caddy.Log().Named(\"acmez\")\n\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tlocal_certs\n\t}\n\tacme.localhost {\n\t\tacme_server {\n\t\t\tchallenges tls-alpn-01\n\t\t}\n\t}\n  `, \"caddyfile\")\n\n\tclient := acmez.Client{\n\t\tClient: &acme.Client{\n\t\t\tDirectory:  \"https://acme.localhost:9443/acme/local/directory\",\n\t\t\tHTTPClient: tester.Client,\n\t\t\tLogger:     slog.New(zapslog.NewHandler(logger.Core(), zapslog.WithName(\"acmez\"))),\n\t\t},\n\t\tChallengeSolvers: map[string]acmez.Solver{\n\t\t\tacme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},\n\t\t},\n\t}\n\n\taccountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating account key: %v\", err)\n\t}\n\taccount := acme.Account{\n\t\tContact:              []string{\"mailto:you@example.com\"},\n\t\tTermsOfServiceAgreed: true,\n\t\tPrivateKey:           accountPrivateKey,\n\t}\n\taccount, err = client.NewAccount(ctx, account)\n\tif err != nil {\n\t\tt.Errorf(\"new account: %v\", err)\n\t\treturn\n\t}\n\n\t// Every certificate needs a 
key.\n\tcertPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating certificate key: %v\", err)\n\t\treturn\n\t}\n\n\tcerts, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{\"localhost\"})\n\tif len(certs) > 0 {\n\t\tt.Errorf(\"expected '0' certificates, but received '%d'\", len(certs))\n\t}\n\tif err == nil {\n\t\tt.Error(\"expected errors, but received none\")\n\t\treturn\n\t}\n\tconst expectedErrMsg = \"no solvers available for remaining challenges (configured=[http-01] offered=[tls-alpn-01] remaining=[tls-alpn-01])\"\n\tif !strings.Contains(err.Error(), expectedErrMsg) {\n\t\tt.Errorf(`received error message does not match expectation: expected=\"%s\" received=\"%s\"`, expectedErrMsg, err.Error())\n\t}\n}\n\n// naiveHTTPSolver is a no-op acmez.Solver for example purposes only.\ntype naiveHTTPSolver struct {\n\tsrv    *http.Server\n\tlogger *zap.Logger\n}\n\nfunc (s *naiveHTTPSolver) Present(ctx context.Context, challenge acme.Challenge) error {\n\tsmallstepacme.InsecurePortHTTP01 = acmeChallengePort\n\ts.srv = &http.Server{\n\t\tAddr: fmt.Sprintf(\":%d\", acmeChallengePort),\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\thost, _, err := net.SplitHostPort(r.Host)\n\t\t\tif err != nil {\n\t\t\t\thost = r.Host\n\t\t\t}\n\t\t\ts.logger.Info(\"received request on challenge server\", zap.String(\"path\", r.URL.Path))\n\t\t\tif r.Method == \"GET\" && r.URL.Path == challenge.HTTP01ResourcePath() && strings.EqualFold(host, challenge.Identifier.Value) {\n\t\t\t\tw.Header().Add(\"Content-Type\", \"text/plain\")\n\t\t\t\tw.Write([]byte(challenge.KeyAuthorization))\n\t\t\t\tr.Close = true\n\t\t\t\ts.logger.Info(\"served key authentication\",\n\t\t\t\t\tzap.String(\"identifier\", challenge.Identifier.Value),\n\t\t\t\t\tzap.String(\"challenge\", \"http-01\"),\n\t\t\t\t\tzap.String(\"remote\", r.RemoteAddr),\n\t\t\t\t)\n\t\t\t}\n\t\t}),\n\t}\n\tl, err := 
net.Listen(\"tcp\", fmt.Sprintf(\":%d\", acmeChallengePort))\n\tif err != nil {\n\t\treturn err\n\t}\n\ts.logger.Info(\"present challenge\", zap.Any(\"challenge\", challenge))\n\tgo s.srv.Serve(l)\n\treturn nil\n}\n\nfunc (s naiveHTTPSolver) CleanUp(ctx context.Context, challenge acme.Challenge) error {\n\tsmallstepacme.InsecurePortHTTP01 = 0\n\ts.logger.Info(\"cleanup\", zap.Any(\"challenge\", challenge))\n\tif s.srv != nil {\n\t\ts.srv.Close()\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "caddytest/integration/acmeserver_test.go",
    "content": "package integration\n\nimport (\n\t\"context\"\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/rand\"\n\t\"log/slog\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/mholt/acmez/v3\"\n\t\"github.com/mholt/acmez/v3/acme\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/exp/zapslog\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestACMEServerDirectory(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tlocal_certs\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tpki {\n\t\t\tca local {\n\t\t\t\tname \"Caddy Local Authority\"\n\t\t\t}\n\t\t}\n\t}\n\tacme.localhost:9443 {\n\t\tacme_server\n\t}\n  `, \"caddyfile\")\n\ttester.AssertGetResponse(\n\t\t\"https://acme.localhost:9443/acme/local/directory\",\n\t\t200,\n\t\t`{\"newNonce\":\"https://acme.localhost:9443/acme/local/new-nonce\",\"newAccount\":\"https://acme.localhost:9443/acme/local/new-account\",\"newOrder\":\"https://acme.localhost:9443/acme/local/new-order\",\"revokeCert\":\"https://acme.localhost:9443/acme/local/revoke-cert\",\"keyChange\":\"https://acme.localhost:9443/acme/local/key-change\"}\n`)\n}\n\nfunc TestACMEServerAllowPolicy(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tlocal_certs\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tpki {\n\t\t\tca local {\n\t\t\t\tname \"Caddy Local Authority\"\n\t\t\t}\n\t\t}\n\t}\n\tacme.localhost {\n\t\tacme_server {\n\t\t\tchallenges http-01\n\t\t\tallow {\n\t\t\t\tdomains localhost\n\t\t\t}\n\t\t}\n\t}\n  `, \"caddyfile\")\n\n\tctx := context.Background()\n\tlogger, err := zap.NewDevelopment()\n\tif err != nil {\n\t\tt.Error(err)\n\t\treturn\n\t}\n\n\tclient := acmez.Client{\n\t\tClient: &acme.Client{\n\t\t\tDirectory:  \"https://acme.localhost:9443/acme/local/directory\",\n\t\t\tHTTPClient: tester.Client,\n\t\t\tLogger:     
slog.New(zapslog.NewHandler(logger.Core())),\n\t\t},\n\t\tChallengeSolvers: map[string]acmez.Solver{\n\t\t\tacme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},\n\t\t},\n\t}\n\n\taccountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating account key: %v\", err)\n\t}\n\taccount := acme.Account{\n\t\tContact:              []string{\"mailto:you@example.com\"},\n\t\tTermsOfServiceAgreed: true,\n\t\tPrivateKey:           accountPrivateKey,\n\t}\n\taccount, err = client.NewAccount(ctx, account)\n\tif err != nil {\n\t\tt.Errorf(\"new account: %v\", err)\n\t\treturn\n\t}\n\n\t// Every certificate needs a key.\n\tcertPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating certificate key: %v\", err)\n\t\treturn\n\t}\n\t{\n\t\tcerts, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{\"localhost\"})\n\t\tif err != nil {\n\t\t\tt.Errorf(\"obtaining certificate for allowed domain: %v\", err)\n\t\t\treturn\n\t\t}\n\n\t\t// ACME servers should usually give you the entire certificate chain\n\t\t// in PEM format, and sometimes even alternate chains! It's up to you\n\t\t// which one(s) to store and use, but whatever you do, be sure to\n\t\t// store the certificate and key somewhere safe and secure, i.e. 
don't\n\t\t// lose them!\n\t\tfor _, cert := range certs {\n\t\t\tt.Logf(\"Certificate %q:\\n%s\\n\\n\", cert.URL, cert.ChainPEM)\n\t\t}\n\t}\n\t{\n\t\t_, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{\"not-matching.localhost\"})\n\t\tif err == nil {\n\t\t\tt.Errorf(\"obtaining certificate for 'not-matching.localhost' domain\")\n\t\t} else if !strings.Contains(err.Error(), \"urn:ietf:params:acme:error:rejectedIdentifier\") {\n\t\t\tt.Logf(\"unexpected error: %v\", err)\n\t\t}\n\t}\n}\n\nfunc TestACMEServerDenyPolicy(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tlocal_certs\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tpki {\n\t\t\tca local {\n\t\t\t\tname \"Caddy Local Authority\"\n\t\t\t}\n\t\t}\n\t}\n\tacme.localhost {\n\t\tacme_server {\n\t\t\tdeny {\n\t\t\t\tdomains deny.localhost\n\t\t\t}\n\t\t}\n\t}\n  `, \"caddyfile\")\n\n\tctx := context.Background()\n\tlogger, err := zap.NewDevelopment()\n\tif err != nil {\n\t\tt.Error(err)\n\t\treturn\n\t}\n\n\tclient := acmez.Client{\n\t\tClient: &acme.Client{\n\t\t\tDirectory:  \"https://acme.localhost:9443/acme/local/directory\",\n\t\t\tHTTPClient: tester.Client,\n\t\t\tLogger:     slog.New(zapslog.NewHandler(logger.Core())),\n\t\t},\n\t\tChallengeSolvers: map[string]acmez.Solver{\n\t\t\tacme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},\n\t\t},\n\t}\n\n\taccountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating account key: %v\", err)\n\t}\n\taccount := acme.Account{\n\t\tContact:              []string{\"mailto:you@example.com\"},\n\t\tTermsOfServiceAgreed: true,\n\t\tPrivateKey:           accountPrivateKey,\n\t}\n\taccount, err = client.NewAccount(ctx, account)\n\tif err != nil {\n\t\tt.Errorf(\"new account: %v\", err)\n\t\treturn\n\t}\n\n\t// Every certificate needs a key.\n\tcertPrivateKey, err := 
ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\tt.Errorf(\"generating certificate key: %v\", err)\n\t\treturn\n\t}\n\t{\n\t\t_, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{\"deny.localhost\"})\n\t\tif err == nil {\n\t\t\tt.Errorf(\"obtaining certificate for 'deny.localhost' domain\")\n\t\t} else if !strings.Contains(err.Error(), \"urn:ietf:params:acme:error:rejectedIdentifier\") {\n\t\t\tt.Logf(\"unexpected error: %v\", err)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/autohttps_test.go",
    "content": "package integration\n\nimport (\n\t\"net/http\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestAutoHTTPtoHTTPSRedirectsImplicitPort(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\tskip_install_trust\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t}\n\tlocalhost\n\trespond \"Yahaha! You found me!\"\n  `, \"caddyfile\")\n\n\ttester.AssertRedirect(\"http://localhost:9080/\", \"https://localhost/\", http.StatusPermanentRedirect)\n}\n\nfunc TestAutoHTTPtoHTTPSRedirectsExplicitPortSameAsHTTPSPort(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t}\n\tlocalhost:9443\n\trespond \"Yahaha! You found me!\"\n  `, \"caddyfile\")\n\n\ttester.AssertRedirect(\"http://localhost:9080/\", \"https://localhost/\", http.StatusPermanentRedirect)\n}\n\nfunc TestAutoHTTPtoHTTPSRedirectsExplicitPortDifferentFromHTTPSPort(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t}\n\tlocalhost:1234\n\trespond \"Yahaha! 
You found me!\"\n  `, \"caddyfile\")\n\n\ttester.AssertRedirect(\"http://localhost:9080/\", \"https://localhost:1234/\", http.StatusPermanentRedirect)\n}\n\nfunc TestAutoHTTPRedirectsWithHTTPListenerFirstInAddresses(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n{\n  \"admin\": {\n\t\"listen\": \"localhost:2999\"\n  },\n  \"apps\": {\n    \"http\": {\n      \"http_port\": 9080,\n      \"https_port\": 9443,\n      \"servers\": {\n        \"ingress_server\": {\n          \"listen\": [\n            \":9080\",\n            \":9443\"\n          ],\n          \"routes\": [\n            {\n              \"match\": [\n                {\n\t\t\t\t  \"host\": [\"localhost\"]\n                }\n              ]\n            }\n          ]\n        }\n      }\n    },\n\t\"pki\": {\n\t\t\"certificate_authorities\": {\n\t\t\t\"local\": {\n\t\t\t\t\"install_trust\": false\n\t\t\t}\n\t\t}\n\t}\n  }\n}\n`, \"json\")\n\ttester.AssertRedirect(\"http://localhost:9080/\", \"https://localhost/\", http.StatusPermanentRedirect)\n}\n\nfunc TestAutoHTTPRedirectsInsertedBeforeUserDefinedCatchAll(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tlocal_certs\n\t}\n\thttp://:9080 {\n\t\trespond \"Foo\"\n\t}\n\thttp://baz.localhost:9080 {\n\t\trespond \"Baz\"\n\t}\n\tbar.localhost {\n\t\trespond \"Bar\"\n\t}\n  `, \"caddyfile\")\n\ttester.AssertRedirect(\"http://bar.localhost:9080/\", \"https://bar.localhost/\", http.StatusPermanentRedirect)\n\ttester.AssertGetResponse(\"http://foo.localhost:9080/\", 200, \"Foo\")\n\ttester.AssertGetResponse(\"http://baz.localhost:9080/\", 200, \"Baz\")\n}\n\nfunc TestAutoHTTPRedirectsInsertedBeforeUserDefinedCatchAllWithNoExplicitHTTPSite(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     
9080\n\t\thttps_port    9443\n\t\tlocal_certs\n\t}\n\thttp://:9080 {\n\t\trespond \"Foo\"\n\t}\n\tbar.localhost {\n\t\trespond \"Bar\"\n\t}\n  `, \"caddyfile\")\n\ttester.AssertRedirect(\"http://bar.localhost:9080/\", \"https://bar.localhost/\", http.StatusPermanentRedirect)\n\ttester.AssertGetResponse(\"http://foo.localhost:9080/\", 200, \"Foo\")\n\ttester.AssertGetResponse(\"http://baz.localhost:9080/\", 200, \"Foo\")\n}\n\nfunc TestAutoHTTPSRedirectSortingExactMatchOverWildcard(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n    {\n        skip_install_trust\n        admin localhost:2999\n        http_port     9080\n        https_port    9443\n        local_certs\n    }\n    *.localhost:10443 {\n        respond \"Wildcard\"\n    }\n    dev.localhost {\n        respond \"Exact\"\n    }\n  `, \"caddyfile\")\n\n\ttester.AssertRedirect(\"http://dev.localhost:9080/\", \"https://dev.localhost/\", http.StatusPermanentRedirect)\n\n\ttester.AssertRedirect(\"http://foo.localhost:9080/\", \"https://foo.localhost:10443/\", http.StatusPermanentRedirect)\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_dns_configured.caddyfiletest",
    "content": "{\n\tacme_dns mock foo\n}\n\nexample.com {\n\trespond \"Hello World\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello World\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"argument\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_dns_naked_use_dns_defaults.caddyfiletest",
    "content": "{\n\tdns mock\n\tacme_dns\n}\n\nexample.com {\n\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"name\": \"mock\"\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_dns_naked_without_dns.caddyfiletest",
    "content": "{\n\tacme_dns\n}\n\nexample.com {\n\trespond \"Hello World\"\n}\n----------\nacme_dns specified without DNS provider config, but no provider specified with 'dns' global option"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_custom_challenges.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca custom-ca {\n\t\t\tname \"Custom CA\"\n\t\t}\n\t}\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca custom-ca\n\t\tchallenges dns-01\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"custom-ca\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"challenges\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dns-01\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"custom-ca\": {\n\t\t\t\t\t\"name\": \"Custom CA\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_default_challenges.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca custom-ca {\n\t\t\tname \"Custom CA\"\n\t\t}\n\t}\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca custom-ca\n\t\tchallenges\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"custom-ca\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"custom-ca\": {\n\t\t\t\t\t\"name\": \"Custom CA\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_lifetime.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca internal {\n\t\t\tname \"Internal\"\n\t\t\troot_cn \"Internal Root Cert\"\n\t\t\tintermediate_cn \"Internal Intermediate Cert\"\n\t\t}\n\t\tca internal-long-lived {\n\t\t\tname \"Long-lived\"\n\t\t\troot_cn \"Internal Root Cert 2\"\n\t\t\tintermediate_cn \"Internal Intermediate Cert 2\"\n\t\t}\n\t}\n}\n\nacme-internal.example.com {\n\tacme_server {\n\t\tca internal\n\t}\n}\n\nacme-long-lived.example.com {\n\tacme_server {\n\t\tca internal-long-lived\n\t\tlifetime 7d\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme-long-lived.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"internal-long-lived\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"lifetime\": 604800000000000\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme-internal.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"internal\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"acme_server\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"internal\": {\n\t\t\t\t\t\"name\": \"Internal\",\n\t\t\t\t\t\"root_common_name\": \"Internal Root Cert\",\n\t\t\t\t\t\"intermediate_common_name\": \"Internal Intermediate Cert\"\n\t\t\t\t},\n\t\t\t\t\"internal-long-lived\": {\n\t\t\t\t\t\"name\": \"Long-lived\",\n\t\t\t\t\t\"root_common_name\": \"Internal Root Cert 2\",\n\t\t\t\t\t\"intermediate_common_name\": \"Internal Intermediate Cert 2\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_multi_custom_challenges.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca custom-ca {\n\t\t\tname \"Custom CA\"\n\t\t}\n\t}\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca custom-ca\n\t\tchallenges dns-01 http-01\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"custom-ca\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"challenges\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dns-01\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"http-01\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"custom-ca\": {\n\t\t\t\t\t\"name\": \"Custom CA\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_policy-allow.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca custom-ca {\n\t\t\tname \"Custom CA\"\n\t\t}\n\t}\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca custom-ca\n\t\tallow {\n\t\t\tdomains host-1.internal.example.com host-2.internal.example.com\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"custom-ca\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"allow\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"domains\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"host-1.internal.example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"host-2.internal.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"custom-ca\": {\n\t\t\t\t\t\"name\": \"Custom CA\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_policy-both.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca custom-ca {\n\t\t\tname \"Custom CA\"\n\t\t}\n\t}\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca custom-ca\n\t\tallow {\n\t\t\tdomains host-1.internal.example.com host-2.internal.example.com\n\t\t}\n\t\tdeny {\n\t\t\tdomains dc.internal.example.com\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"custom-ca\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"allow\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"domains\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"host-1.internal.example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"host-2.internal.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"deny\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"domains\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dc.internal.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"custom-ca\": {\n\t\t\t\t\t\"name\": \"Custom CA\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_policy-deny.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca custom-ca {\n\t\t\tname \"Custom CA\"\n\t\t}\n\t}\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca custom-ca\n\t\tdeny {\n\t\t\tdomains dc.internal.example.com\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"custom-ca\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"deny\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"domains\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dc.internal.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"custom-ca\": {\n\t\t\t\t\t\"name\": \"Custom CA\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/acme_server_sign_with_root.caddyfiletest",
    "content": "{\n\tpki {\n\t\tca internal {\n\t\t\tname \"Internal\"\n\t\t\troot_cn \"Internal Root Cert\"\n\t\t\tintermediate_cn \"Internal Intermediate Cert\"\n\t\t}\n\t}\n}\nacme.example.com {\n\tacme_server {\n\t\tca internal\n\t\tsign_with_root\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"internal\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"sign_with_root\": true\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"internal\": {\n\t\t\t\t\t\"name\": \"Internal\",\n\t\t\t\t\t\"root_common_name\": \"Internal Root Cert\",\n\t\t\t\t\t\"intermediate_common_name\": \"Internal Intermediate Cert\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/ambiguous_site_definition.caddyfiletest",
    "content": "example.com\nhandle {\n\trespond \"one\"\n}\n\nexample.com\nhandle {\n\trespond \"two\"\n}\n----------\nCaddyfile:6: unrecognized directive: example.com\nDid you mean to define a second site? If so, you must use curly braces around each site to separate their configurations."
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/ambiguous_site_definition_duplicate_key.caddyfiletest",
    "content": ":8080 {\n\trespond \"one\"\n}\n\n:8080 {\n\trespond \"two\"\n}\n----------\nambiguous site definition: :8080"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/auto_https_disable_redirects.caddyfiletest",
    "content": "{\n\tauto_https disable_redirects\n}\n\nlocalhost\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"disable_redirects\": true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/auto_https_ignore_loaded_certs.caddyfiletest",
    "content": "{\n\tauto_https ignore_loaded_certs\n}\n\nlocalhost\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"ignore_loaded_certificates\": true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/auto_https_off.caddyfiletest",
    "content": "{\n\tauto_https off\n}\n\nlocalhost\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"disable\": true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/bind_fd_fdgram_h123.caddyfiletest",
    "content": "{\n\tauto_https disable_redirects\n\tadmin off\n}\n\nhttp://localhost {\n\tbind fd/{env.CADDY_HTTP_FD} {\n\t\tprotocols h1\n\t}\n\tlog\n\trespond \"Hello, HTTP!\"\n}\n\nhttps://localhost {\n\tbind fd/{env.CADDY_HTTPS_FD} {\n\t\tprotocols h1 h2\n\t}\n\tbind fdgram/{env.CADDY_HTTP3_FD} {\n\t\tprotocols h3\n\t}\n\tlog\n\trespond \"Hello, HTTPS!\"\n}\n----------\n{\n\t\"admin\": {\n\t\t\"disabled\": true\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\"fd/{env.CADDY_HTTPS_FD}\",\n\t\t\t\t\t\t\"fdgram/{env.CADDY_HTTP3_FD}\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello, HTTPS!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"disable_redirects\": true\n\t\t\t\t\t},\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"localhost\": [\n\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"listen_protocols\": [\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\"h1\",\n\t\t\t\t\t\t\t\"h2\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\"h3\"\n\t\t\t\t\t\t]\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"disable_redirects\": true\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": 
[\n\t\t\t\t\t\t\"fd/{env.CADDY_HTTP_FD}\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello, HTTP!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"disable_redirects\": true,\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"localhost\": [\n\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"listen_protocols\": [\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\"h1\"\n\t\t\t\t\t\t]\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/bind_ipv6.caddyfiletest",
    "content": "example.com {\n\tbind tcp6/[::]\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\"tcp6/[::]:443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/directive_as_site_address.caddyfiletest",
    "content": "handle\n\nrespond \"should not work\"\n----------\nCaddyfile:1: parsed 'handle' as a site address, but it is a known directive; directives must appear in a site block"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/duplicate_listener_address_global.caddyfiletest",
    "content": "{\n\tservers {\n\t\tsrv0 {\n\t\t\tlisten :8080\n\t\t}\n\t\tsrv1 {\n\t\t\tlisten :8080\n\t\t}\n\t}\n}\n----------\nparsing caddyfile tokens for 'servers': unrecognized servers option 'srv0', at Caddyfile:3"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/enable_tls_for_catch_all_site.caddyfiletest",
    "content": ":8443 {\n\ttls internal {\n\t\ton_demand\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"on_demand\": true\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/encode_options.caddyfiletest",
    "content": ":80\n\n# All the options\nencode gzip zstd {\n\tminimum_length 256\n\tmatch {\n\t\tstatus 2xx 4xx 500\n\t\theader Content-Type text/*\n\t\theader Content-Type application/json*\n\t\theader Content-Type application/javascript*\n\t\theader Content-Type application/xhtml+xml*\n\t\theader Content-Type application/atom+xml*\n\t\theader Content-Type application/rss+xml*\n\t\theader Content-Type application/wasm*\n\t\theader Content-Type image/svg+xml*\n\t}\n}\n\n# Long way with a block for each encoding\nencode {\n\tzstd\n\tgzip 5\n}\n\nencode\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"encodings\": {\n\t\t\t\t\t\t\t\t\t\t\"gzip\": {},\n\t\t\t\t\t\t\t\t\t\t\"zstd\": {}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"encode\",\n\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Content-Type\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"text/*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"application/json*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"application/javascript*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"application/xhtml+xml*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"application/atom+xml*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"application/rss+xml*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"application/wasm*\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"image/svg+xml*\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t2,\n\t\t\t\t\t\t\t\t\t\t\t4,\n\t\t\t\t\t\t\t\t\t\t\t500\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"minimum_length\": 256,\n\t\t\t\t\t\t\t\t\t\"prefer\": [\n\t\t\t\t\t\t\t\t\t\t\"gzip\",\n\t\t\t\t\t\t\t\t\t\t\"zstd\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"encodings\": {\n\t\t\t\t\t\t\t\t\t\t\"gzip\": {\n\t\t\t\t\t\t\t\t\t\t\t\"level\": 
5\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"zstd\": {}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"encode\",\n\t\t\t\t\t\t\t\t\t\"prefer\": [\n\t\t\t\t\t\t\t\t\t\t\"zstd\",\n\t\t\t\t\t\t\t\t\t\t\"gzip\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"encodings\": {\n\t\t\t\t\t\t\t\t\t\t\"gzip\": {},\n\t\t\t\t\t\t\t\t\t\t\"zstd\": {}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"encode\",\n\t\t\t\t\t\t\t\t\t\"prefer\": [\n\t\t\t\t\t\t\t\t\t\t\"zstd\",\n\t\t\t\t\t\t\t\t\t\t\"gzip\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_example.caddyfiletest",
    "content": "example.com {\n\troot * /srv\n\n\t# Trigger errors for certain paths\n\terror /private* \"Unauthorized\" 403\n\terror /hidden* \"Not found\" 404\n\n\t# Handle the error by serving an HTML page \n\thandle_errors {\n\t\trewrite * /{http.error.status_code}.html\n\t\tfile_server\n\t}\n\n\tfile_server\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 403\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Not found\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/hidden*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group0\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/{http.error.status_code}.html\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_multi_site_blocks.caddyfiletest",
    "content": "foo.localhost {\n\troot * /srv\n\terror /private* \"Unauthorized\" 410\n\terror /fivehundred* \"Internal Server Error\" 500\n\n\thandle_errors 5xx {\n\t\trespond \"Error In range [500 .. 599]\"\n\t}\n\thandle_errors 410 {\n\t\trespond \"404 or 410 error\"\n\t}\n}\n\nbar.localhost {\n\troot * /srv\n\terror /private* \"Unauthorized\" 410\n\terror /fivehundred* \"Internal Server Error\" 500\n\n\thandle_errors 5xx {\n\t\trespond \"Error In range [500 .. 599] from second site\"\n\t}\n\thandle_errors 410 {\n\t\trespond \"404 or 410 error from second site\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"foo.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Internal Server Error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 500\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/fivehundred*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 410\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"bar.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Internal Server Error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 500\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/fivehundred*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 
410\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"foo.localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"404 or 410 error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} in [410]\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Error In range [500 .. 599]\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} \\u003e= 500 \\u0026\\u0026 {http.error.status_code} \\u003c= 599\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"bar.localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"404 or 410 error from second site\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} in 
[410]\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Error In range [500 .. 599] from second site\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} \\u003e= 500 \\u0026\\u0026 {http.error.status_code} \\u003c= 599\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_range_codes.caddyfiletest",
    "content": "{\n\thttp_port 3010\n}\nlocalhost:3010 {\n\troot * /srv\n\terror /private* \"Unauthorized\" 410\n\terror /hidden* \"Not found\" 404\n\n\thandle_errors 4xx {\n\t\trespond \"Error in the [400 .. 499] range\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 3010,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":3010\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 410\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Not found\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/hidden*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Error in the [400 .. 499] range\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} \\u003e= 400 \\u0026\\u0026 {http.error.status_code} \\u003c= 499\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_range_simple_codes.caddyfiletest",
    "content": "{\n\thttp_port 2099\n}\nlocalhost:2099 {\n\troot * /srv\n\terror /private* \"Unauthorized\" 410\n\terror /threehundred* \"Moved Permanently\" 301\n\terror /internalerr* \"Internal Server Error\" 500\n\n\thandle_errors 500 3xx {\n\t\trespond \"Error code is equal to 500 or in the [300..399] range\"\n\t}\n\thandle_errors 4xx {\n\t\trespond \"Error in the [400 .. 499] range\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 2099,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":2099\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Moved Permanently\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 301\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/threehundred*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Internal Server Error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 
500\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/internalerr*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 410\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Error in the [400 .. 
499] range\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} \\u003e= 400 \\u0026\\u0026 {http.error.status_code} \\u003c= 499\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Error code is equal to 500 or in the [300..399] range\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} \\u003e= 300 \\u0026\\u0026 {http.error.status_code} \\u003c= 399 || {http.error.status_code} in [500]\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_simple_codes.caddyfiletest",
    "content": "{\n\thttp_port 3010\n}\nlocalhost:3010 {\n\troot * /srv\n\terror /private* \"Unauthorized\" 410\n\terror /hidden* \"Not found\" 404\n\n\thandle_errors 404 410 {\n\t\trespond \"404 or 410 error\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 3010,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":3010\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 410\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Not found\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/hidden*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"404 or 410 error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} in [404, 410]\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_sort.caddyfiletest",
    "content": "{\n\thttp_port 2099\n}\nlocalhost:2099 {\n\troot * /srv\n\terror /private* \"Unauthorized\" 410\n\terror /hidden* \"Not found\" 404\n\terror /internalerr* \"Internal Server Error\" 500\n\n\thandle_errors {\n\t\trespond \"Fallback route: code outside the [400..499] range\"\n\t}\n\thandle_errors 4xx {\n\t\trespond \"Error in the [400 .. 499] range\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 2099,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":2099\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Internal Server Error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 500\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/internalerr*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Unauthorized\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 
410\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/private*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"error\": \"Not found\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/hidden*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Error in the [400 .. 
499] range\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} \\u003e= 400 \\u0026\\u0026 {http.error.status_code} \\u003c= 499\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Fallback route: code outside the [400..499] range\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/error_subhandlers.caddyfiletest",
    "content": "{\n\thttp_port 2099\n}\nlocalhost:2099 {\n\troot * /var/www/\n\tfile_server\n\n\thandle_errors 404 {\n\t\thandle /en/* {\n\t\t\trespond \"not found\" 404\n\t\t}\n\t\thandle /es/* {\n\t\t\trespond \"no encontrado\"\n\t\t}\n\t\thandle {\n\t\t\trespond \"default not found\"\n\t\t}\n\t}\n\thandle_errors {\n\t\thandle /en/* {\n\t\t\trespond \"English error\"\n\t\t}\n\t\thandle /es/* {\n\t\t\trespond \"Spanish error\"\n\t\t}\n\t\thandle {\n\t\t\trespond \"Default error\"\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 2099,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":2099\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/var/www/\"\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"errors\": {\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"not found\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/en/*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"no 
encontrado\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/es/*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"default not found\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} in [404]\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group8\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"English error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/en/*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group8\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Spanish error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/es/*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group8\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Default error\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/expression_quotes.caddyfiletest",
    "content": "(snippet) {\n\t@g `{http.error.status_code} == 404`\n}\n\nexample.com\n\n@a expression {http.error.status_code} == 400\nabort @a\n\n@b expression {http.error.status_code} == \"401\"\nabort @b\n\n@c expression {http.error.status_code} == `402`\nabort @c\n\n@d expression \"{http.error.status_code} == 403\"\nabort @d\n\n@e expression `{http.error.status_code} == 404`\nabort @e\n\n@f `{http.error.status_code} == 404`\nabort @f\n\nimport snippet\nabort @g\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} == 400\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} == \\\"401\\\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": \"{http.error.status_code} == `402`\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expr\": \"{http.error.status_code} == 403\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"d\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expr\": \"{http.error.status_code} == 404\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"e\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expr\": \"{http.error.status_code} == 
404\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"f\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expr\": \"{http.error.status_code} == 404\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"g\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_disable_canonical_uris.caddyfiletest",
    "content": ":80\n\nfile_server {\n\tdisable_canonical_uris\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"canonical_uris\": false,\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_etag_file_extensions.caddyfiletest",
    "content": ":8080 {\n\troot * ./\n\tfile_server {\n\t\tetag_file_extensions .b3sum .sha256\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\"root\": \"./\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"etag_file_extensions\": [\n\t\t\t\t\t\t\t\t\t\t\".b3sum\",\n\t\t\t\t\t\t\t\t\t\t\".sha256\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_file_limit.caddyfiletest",
    "content": ":80\n\nfile_server {\n\tbrowse {\n\t\tfile_limit 4000\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"browse\": {\n\t\t\t\t\t\t\t\t\t\t\"file_limit\": 4000\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_pass_thru.caddyfiletest",
    "content": ":80\n\nfile_server {\n\tpass_thru\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"pass_thru\": true\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_precompressed.caddyfiletest",
    "content": ":80\n\nfile_server {\n\tprecompressed zstd br gzip\n}\n\nfile_server {\n\tprecompressed\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"precompressed\": {\n\t\t\t\t\t\t\t\t\t\t\"br\": {},\n\t\t\t\t\t\t\t\t\t\t\"gzip\": {},\n\t\t\t\t\t\t\t\t\t\t\"zstd\": {}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"precompressed_order\": [\n\t\t\t\t\t\t\t\t\t\t\"zstd\",\n\t\t\t\t\t\t\t\t\t\t\"br\",\n\t\t\t\t\t\t\t\t\t\t\"gzip\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"precompressed\": {\n\t\t\t\t\t\t\t\t\t\t\"br\": {},\n\t\t\t\t\t\t\t\t\t\t\"gzip\": {},\n\t\t\t\t\t\t\t\t\t\t\"zstd\": {}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"precompressed_order\": [\n\t\t\t\t\t\t\t\t\t\t\"br\",\n\t\t\t\t\t\t\t\t\t\t\"zstd\",\n\t\t\t\t\t\t\t\t\t\t\"gzip\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_sort.caddyfiletest",
    "content": ":80\n\nfile_server {\n\tbrowse {\n\t\tsort size desc\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"browse\": {\n\t\t\t\t\t\t\t\t\t\t\"sort\": [\n\t\t\t\t\t\t\t\t\t\t\t\"size\",\n\t\t\t\t\t\t\t\t\t\t\t\"desc\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/file_server_status.caddyfiletest",
    "content": "localhost\n\nroot * /srv\n\nhandle /nope* {\n\tfile_server {\n\t\tstatus 403\n\t}\n}\n\nhandle /custom-status* {\n\tfile_server {\n\t\tstatus {env.CUSTOM_STATUS}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/srv\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": \"{env.CUSTOM_STATUS}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/custom-status*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 403\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/nope*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/forward_auth_authelia.caddyfiletest",
    "content": "app.example.com {\n\tforward_auth authelia:9091 {\n\t\turi /api/authz/forward-auth\n\t\tcopy_headers Remote-User Remote-Groups Remote-Name Remote-Email\n\t}\n\n\treverse_proxy backend:8080\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"app.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle_response\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t2\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-Email\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-Email\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-Email}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-Email}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-Groups\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-Groups\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-Groups}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-Groups}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-Name\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-Name\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-Name}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-Name}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-User\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Remote-User\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-User}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.Remote-User}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-Forwarded-Method\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.method}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-Forwarded-Uri\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"rewrite\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"method\": \"GET\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/api/authz/forward-auth\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"authelia:9091\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"backend:8080\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/forward_auth_copy_headers_strip.caddyfiletest",
    "content": ":8080\n\nforward_auth 127.0.0.1:9091 {\n\turi /\n\tcopy_headers X-User-Id X-User-Role\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handle_response\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t2\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-User-Id\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-User-Id\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.X-User-Id}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.X-User-Id}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-User-Role\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-User-Role\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.X-User-Role}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.X-User-Role}\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Forwarded-Method\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.method}\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Forwarded-Uri\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"rewrite\": {\n\t\t\t\t\t\t\t\t\t\t\"method\": \"GET\",\n\t\t\t\t\t\t\t\t\t\t\"uri\": \"/\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:9091\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/forward_auth_rename_headers.caddyfiletest",
    "content": ":8881\n\nforward_auth localhost:9000 {\n\turi /auth\n\tcopy_headers A>1 B C>3 {\n\t\tD\n\t\tE>5\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handle_response\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t2\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"1\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"1\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.A}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.A}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"B\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"B\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.B}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.B}\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"3\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"3\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.C}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.C}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": 
{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"D\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"D\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.D}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.D}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"5\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": 
{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"5\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.E}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.header.E}\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Forwarded-Method\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.method}\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Forwarded-Uri\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"rewrite\": {\n\t\t\t\t\t\t\t\t\t\t\"method\": \"GET\",\n\t\t\t\t\t\t\t\t\t\t\"uri\": \"/auth\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:9000\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options.caddyfiletest",
    "content": "{\n\tdebug\n\thttp_port 8080\n\thttps_port 8443\n\tgrace_period 5s\n\tshutdown_delay 10s\n\tdefault_sni localhost\n\torder root first\n\tstorage file_system {\n\t\troot /data\n\t}\n\tstorage_check off\n\tstorage_clean_interval off\n\tacme_ca https://example.com\n\tacme_ca_root /path/to/ca.crt\n\tocsp_stapling off\n\n\temail test@example.com\n\tadmin off\n\ton_demand_tls {\n\t\task https://example.com\n\t}\n\tlocal_certs\n\tkey_type ed25519\n}\n\n:80\n----------\n{\n\t\"admin\": {\n\t\t\"disabled\": true\n\t},\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"level\": \"DEBUG\"\n\t\t\t}\n\t\t}\n\t},\n\t\"storage\": {\n\t\t\"module\": \"file_system\",\n\t\t\"root\": \"/data\"\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 8080,\n\t\t\t\"https_port\": 8443,\n\t\t\t\"grace_period\": 5000000000,\n\t\t\t\"shutdown_delay\": 10000000000,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"key_type\": \"ed25519\",\n\t\t\t\t\t\t\"disable_ocsp_stapling\": true\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"on_demand\": {\n\t\t\t\t\t\"permission\": {\n\t\t\t\t\t\t\"endpoint\": \"https://example.com\",\n\t\t\t\t\t\t\"module\": \"http\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"disable_ocsp_stapling\": true,\n\t\t\t\"disable_storage_check\": true,\n\t\t\t\"disable_storage_clean\": true\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_acme.caddyfiletest",
    "content": "{\n\tdebug\n\thttp_port 8080\n\thttps_port 8443\n\tdefault_sni localhost\n\torder root first\n\tstorage file_system {\n\t\troot /data\n\t}\n\tacme_ca https://example.com\n\tacme_eab {\n\t\tkey_id 4K2scIVbBpNd-78scadB2g\n\t\tmac_key abcdefghijklmnopqrstuvwx-abcdefghijklnopqrstuvwxyz12ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefgh\n\t}\n\tacme_ca_root /path/to/ca.crt\n\temail test@example.com\n\tadmin off\n\ton_demand_tls {\n\t\task https://example.com\n\t}\n\tstorage_clean_interval 7d\n\trenew_interval 1d\n\tocsp_interval 2d\n\n\tkey_type ed25519\n}\n\n:80\n----------\n{\n\t\"admin\": {\n\t\t\"disabled\": true\n\t},\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"level\": \"DEBUG\"\n\t\t\t}\n\t\t}\n\t},\n\t\"storage\": {\n\t\t\"module\": \"file_system\",\n\t\t\"root\": \"/data\"\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 8080,\n\t\t\t\"https_port\": 8443,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://example.com\",\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"http\": {\n\t\t\t\t\t\t\t\t\t\t\"alternate_port\": 8080\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"tls-alpn\": {\n\t\t\t\t\t\t\t\t\t\t\"alternate_port\": 8443\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"external_account\": {\n\t\t\t\t\t\t\t\t\t\"key_id\": \"4K2scIVbBpNd-78scadB2g\",\n\t\t\t\t\t\t\t\t\t\"mac_key\": \"abcdefghijklmnopqrstuvwx-abcdefghijklnopqrstuvwxyz12ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefgh\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\",\n\t\t\t\t\t\t\t\t\"trusted_roots_pem_files\": [\n\t\t\t\t\t\t\t\t\t\"/path/to/ca.crt\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"key_type\": 
\"ed25519\"\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"on_demand\": {\n\t\t\t\t\t\"permission\": {\n\t\t\t\t\t\t\"endpoint\": \"https://example.com\",\n\t\t\t\t\t\t\"module\": \"http\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"ocsp_interval\": 172800000000000,\n\t\t\t\t\"renew_interval\": 86400000000000,\n\t\t\t\t\"storage_clean_interval\": 604800000000000\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_admin.caddyfiletest",
    "content": "{\n\tdebug\n\thttp_port 8080\n\thttps_port 8443\n\tdefault_sni localhost\n\torder root first\n\tstorage file_system {\n\t\troot /data\n\t}\n\tacme_ca https://example.com\n\tacme_ca_root /path/to/ca.crt\n\n\temail test@example.com\n\tadmin {\n\t\torigins localhost:2019 [::1]:2019 127.0.0.1:2019 192.168.10.128\n\t}\n\ton_demand_tls {\n\t\task https://example.com\n\t}\n\tlocal_certs\n\tkey_type ed25519\n}\n\n:80\n----------\n{\n\t\"admin\": {\n\t\t\"listen\": \"localhost:2019\",\n\t\t\"origins\": [\n\t\t\t\"localhost:2019\",\n\t\t\t\"[::1]:2019\",\n\t\t\t\"127.0.0.1:2019\",\n\t\t\t\"192.168.10.128\"\n\t\t]\n\t},\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"level\": \"DEBUG\"\n\t\t\t}\n\t\t}\n\t},\n\t\"storage\": {\n\t\t\"module\": \"file_system\",\n\t\t\"root\": \"/data\"\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 8080,\n\t\t\t\"https_port\": 8443,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"key_type\": \"ed25519\"\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"on_demand\": {\n\t\t\t\t\t\"permission\": {\n\t\t\t\t\t\t\"endpoint\": \"https://example.com\",\n\t\t\t\t\t\t\"module\": \"http\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_admin_with_persist_config_off.caddyfiletest",
    "content": "{\n\thttp_port 8080\n\tpersist_config off\n\tadmin {\n\t\torigins localhost:2019 [::1]:2019 127.0.0.1:2019 192.168.10.128\n\t}\n}\n\n:80\n----------\n{\n\t\"admin\": {\n\t\t\"listen\": \"localhost:2019\",\n\t\t\"origins\": [\n\t\t\t\"localhost:2019\",\n\t\t\t\"[::1]:2019\",\n\t\t\t\"127.0.0.1:2019\",\n\t\t\t\"192.168.10.128\"\n\t\t],\n\t\t\"config\": {\n\t\t\t\"persist\": false\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"http_port\": 8080,\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_debug_with_access_log.caddyfiletest",
    "content": "{\n\tdebug\n}\n\n:8881 {\n\tlog {\n\t\tformat console\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t},\n\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"log0\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_default_bind.caddyfiletest",
    "content": "{\n\tdefault_bind tcp4/0.0.0.0 tcp6/[::]\n}\n\nexample.com {\n}\n\nexample.org:12345 {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\"tcp4/0.0.0.0:12345\",\n\t\t\t\t\t\t\"tcp6/[::]:12345\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.org\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\"tcp4/0.0.0.0:443\",\n\t\t\t\t\t\t\"tcp6/[::]:443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_log_and_site.caddyfiletest",
    "content": "{\n\tlog {\n\t\toutput file caddy.log\n\t\tinclude some-log-source\n\t\texclude admin.api admin2.api\n\t}\n\tlog custom-logger {\n\t\toutput file caddy.log\n\t\tlevel WARN\n\t\tinclude custom-log-source\n\t}\n}\n\n:8884 {\n\tlog {\n\t\tformat json\n\t\toutput file access.log\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"custom-logger\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"caddy.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"level\": \"WARN\",\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"custom-log-source\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"default\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"caddy.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"some-log-source\"\n\t\t\t\t],\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"admin.api\",\n\t\t\t\t\t\"admin2.api\",\n\t\t\t\t\t\"custom-log-source\",\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"access.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"json\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"log0\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_log_basic.caddyfiletest",
    "content": "{\n\tlog {\n\t\toutput file foo.log\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"foo.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_log_custom.caddyfiletest",
    "content": "{\n\tlog custom-logger {\n\t\tformat filter {\n\t\t\twrap console\n\t\t\tfields {\n\t\t\t\trequest>remote_ip ip_mask {\n\t\t\t\t\tipv4 24\n\t\t\t\t\tipv6 32\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"custom-logger\": {\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"fields\": {\n\t\t\t\t\t\t\"request\\u003eremote_ip\": {\n\t\t\t\t\t\t\t\"filter\": \"ip_mask\",\n\t\t\t\t\t\t\t\"ipv4_cidr\": 24,\n\t\t\t\t\t\t\t\"ipv6_cidr\": 32\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"format\": \"filter\",\n\t\t\t\t\t\"wrap\": {\n\t\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_log_multi.caddyfiletest",
    "content": "{\n\tlog first {\n\t\toutput file foo.log\n\t}\n\tlog second {\n\t\tformat json\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"first\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"foo.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"second\": {\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"json\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_log_sampling.caddyfiletest",
    "content": "{\n\tlog {\n\t\tsampling {\n\t\t\tinterval 300\n\t\t\tfirst 50\n\t\t\tthereafter 40\n\t\t}\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"sampling\": {\n\t\t\t\t\t\"interval\": 300,\n\t\t\t\t\t\"first\": 50,\n\t\t\t\t\t\"thereafter\": 40\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_persist_config.caddyfiletest",
    "content": "{\n\tpersist_config off\n}\n\n:8881 {\n}\n----------\n{\n\t\"admin\": {\n\t\t\"config\": {\n\t\t\t\"persist\": false\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_preferred_chains.caddyfiletest",
    "content": "{\n\tpreferred_chains smallest\n}\n\nexample.com\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"acme\",\n\t\t\t\t\t\t\t\t\"preferred_chains\": {\n\t\t\t\t\t\t\t\t\t\"smallest\": true\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_resolvers.caddyfiletest",
    "content": "{\n\temail test@example.com\n\tdns mock\n\ttls_resolvers 1.1.1.1 8.8.8.8\n\tacme_dns\n}\n\nexample.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"name\": \"mock\"\n\t\t\t},\n\t\t\t\"resolvers\": [\n\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\"8.8.8.8\"\n\t\t\t]\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_resolvers_http_challenge.caddyfiletest",
    "content": "{\n\ttls_resolvers 1.1.1.1 8.8.8.8\n}\n\nexample.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"resolvers\": [\n\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\"8.8.8.8\"\n\t\t\t]\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_resolvers_local_dns_inherit.caddyfiletest",
    "content": "{\n\temail test@example.com\n\tdns mock\n\ttls_resolvers 1.1.1.1 8.8.8.8\n}\n\nexample.com {\n\ttls {\n\t\tdns mock\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"name\": \"mock\"\n\t\t\t},\n\t\t\t\"resolvers\": [\n\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\"8.8.8.8\"\n\t\t\t]\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_resolvers_local_override.caddyfiletest",
    "content": "{\n\temail test@example.com\n\tdns mock\n\ttls_resolvers 1.1.1.1 8.8.8.8\n\tacme_dns\n}\n\nexample.com {\n\ttls {\n\t\tresolvers 9.9.9.9\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"9.9.9.9\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": 
\"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"name\": \"mock\"\n\t\t\t},\n\t\t\t\"resolvers\": [\n\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\"8.8.8.8\"\n\t\t\t]\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_resolvers_mixed.caddyfiletest",
    "content": "{\n\temail test@example.com\n\tdns mock\n\ttls_resolvers 1.1.1.1 8.8.8.8\n\tacme_dns\n}\n\nsite1.example.com {\n}\n\nsite2.example.com {\n\ttls {\n\t\tresolvers 9.9.9.9 8.8.4.4\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"site1.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"site2.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"site2.example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"9.9.9.9\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"challenges\": 
{\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"test@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"name\": \"mock\"\n\t\t\t},\n\t\t\t\"resolvers\": [\n\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\"8.8.8.8\"\n\t\t\t]\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_options_skip_install_trust.caddyfiletest",
    "content": "{\n\tskip_install_trust\n\tpki {\n\t\tca {\n\t\t\tname \"Local\"\n\t\t\troot_cn \"Custom Local Root Name\"\n\t\t\tintermediate_cn \"Custom Local Intermediate Name\"\n\t\t\troot {\n\t\t\t\tcert /path/to/cert.pem\n\t\t\t\tkey /path/to/key.pem\n\t\t\t\tformat pem_file\n\t\t\t}\n\t\t\tintermediate {\n\t\t\t\tcert /path/to/cert.pem\n\t\t\t\tkey /path/to/key.pem\n\t\t\t\tformat pem_file\n\t\t\t}\n\t\t}\n\t\tca foo {\n\t\t\tname \"Foo\"\n\t\t\troot_cn \"Custom Foo Root Name\"\n\t\t\tintermediate_cn \"Custom Foo Intermediate Name\"\n\t\t}\n\t}\n}\n\na.example.com {\n\ttls internal\n}\n\nacme.example.com {\n\tacme_server {\n\t\tca foo\n\t}\n}\n\nacme-bar.example.com {\n\tacme_server {\n\t\tca bar\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme-bar.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ca\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"acme_server\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"bar\": {\n\t\t\t\t\t\"install_trust\": false\n\t\t\t\t},\n\t\t\t\t\"foo\": {\n\t\t\t\t\t\"name\": \"Foo\",\n\t\t\t\t\t\"root_common_name\": \"Custom Foo Root Name\",\n\t\t\t\t\t\"intermediate_common_name\": \"Custom Foo Intermediate Name\",\n\t\t\t\t\t\"install_trust\": false\n\t\t\t\t},\n\t\t\t\t\"local\": {\n\t\t\t\t\t\"name\": \"Local\",\n\t\t\t\t\t\"root_common_name\": \"Custom Local Root Name\",\n\t\t\t\t\t\"intermediate_common_name\": \"Custom Local Intermediate Name\",\n\t\t\t\t\t\"install_trust\": false,\n\t\t\t\t\t\"root\": {\n\t\t\t\t\t\t\"certificate\": \"/path/to/cert.pem\",\n\t\t\t\t\t\t\"private_key\": \"/path/to/key.pem\",\n\t\t\t\t\t\t\"format\": \"pem_file\"\n\t\t\t\t\t},\n\t\t\t\t\t\"intermediate\": {\n\t\t\t\t\t\t\"certificate\": \"/path/to/cert.pem\",\n\t\t\t\t\t\t\"private_key\": \"/path/to/key.pem\",\n\t\t\t\t\t\t\"format\": \"pem_file\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"acme-bar.example.com\",\n\t\t\t\t\t\t\t\"acme.example.com\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": 
\"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_server_options_multi.caddyfiletest",
    "content": "{\n\tservers {\n\t\ttimeouts {\n\t\t\tidle 90s\n\t\t}\n\t\tstrict_sni_host insecure_off\n\t}\n\tservers :80 {\n\t\ttimeouts {\n\t\t\tidle 60s\n\t\t}\n\t}\n\tservers :443 {\n\t\ttimeouts {\n\t\t\tidle 30s\n\t\t}\n\t\tstrict_sni_host\n\t}\n}\n\nfoo.com {\n}\n\nhttp://bar.com {\n}\n\n:8080 {\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"idle_timeout\": 30000000000,\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"foo.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"strict_sni_host\": true\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"idle_timeout\": 60000000000,\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"bar.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"idle_timeout\": 90000000000,\n\t\t\t\t\t\"strict_sni_host\": false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/global_server_options_single.caddyfiletest",
    "content": "{\n\tservers {\n\t\tlistener_wrappers {\n\t\t\thttp_redirect\n\t\t\ttls\n\t\t}\n\t\ttimeouts {\n\t\t\tread_body 30s\n\t\t\tread_header 30s\n\t\t\twrite 30s\n\t\t\tidle 30s\n\t\t}\n\t\tmax_header_size 100MB\n\t\tenable_full_duplex\n\t\tlog_credentials\n\t\tprotocols h1 h2 h2c h3\n\t\tstrict_sni_host\n\t\ttrusted_proxies static private_ranges\n\t\tclient_ip_headers Custom-Real-Client-IP X-Forwarded-For\n\t\tclient_ip_headers A-Third-One\n\t\tkeepalive_interval 20s\n\t\tkeepalive_idle 20s\n\t\tkeepalive_count 10\n\t\t0rtt off\n\t}\n}\n\nfoo.com {\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"listener_wrappers\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"wrapper\": \"http_redirect\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"wrapper\": \"tls\"\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"read_timeout\": 30000000000,\n\t\t\t\t\t\"read_header_timeout\": 30000000000,\n\t\t\t\t\t\"write_timeout\": 30000000000,\n\t\t\t\t\t\"idle_timeout\": 30000000000,\n\t\t\t\t\t\"keepalive_interval\": 20000000000,\n\t\t\t\t\t\"keepalive_idle\": 20000000000,\n\t\t\t\t\t\"keepalive_count\": 10,\n\t\t\t\t\t\"max_header_bytes\": 100000000,\n\t\t\t\t\t\"enable_full_duplex\": true,\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"foo.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"strict_sni_host\": true,\n\t\t\t\t\t\"trusted_proxies\": {\n\t\t\t\t\t\t\"ranges\": [\n\t\t\t\t\t\t\t\"192.168.0.0/16\",\n\t\t\t\t\t\t\t\"172.16.0.0/12\",\n\t\t\t\t\t\t\t\"10.0.0.0/8\",\n\t\t\t\t\t\t\t\"127.0.0.1/8\",\n\t\t\t\t\t\t\t\"fd00::/8\",\n\t\t\t\t\t\t\t\"::1\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"source\": \"static\"\n\t\t\t\t\t},\n\t\t\t\t\t\"client_ip_headers\": 
[\n\t\t\t\t\t\t\"Custom-Real-Client-IP\",\n\t\t\t\t\t\t\"X-Forwarded-For\",\n\t\t\t\t\t\t\"A-Third-One\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"should_log_credentials\": true\n\t\t\t\t\t},\n\t\t\t\t\t\"protocols\": [\n\t\t\t\t\t\t\"h1\",\n\t\t\t\t\t\t\"h2\",\n\t\t\t\t\t\t\"h2c\",\n\t\t\t\t\t\t\"h3\"\n\t\t\t\t\t],\n\t\t\t\t\t\"allow_0rtt\": false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/handle_nested_in_route.caddyfiletest",
    "content": ":8881 {\n\troute {\n\t\thandle /foo/* {\n\t\t\trespond \"Foo\"\n\t\t}\n\t\thandle {\n\t\t\trespond \"Bar\"\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Foo\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/foo/*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Bar\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/handle_path.caddyfiletest",
    "content": ":80\nhandle_path /api/v1/* {\n\trespond \"API v1\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/api/v1/*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"strip_path_prefix\": \"/api/v1\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"API v1\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/handle_path_sorting.caddyfiletest",
    "content": ":80 {\n\thandle /api/* {\n\t\trespond \"api\"\n\t}\n\n\thandle_path /static/* {\n\t\trespond \"static\"\n\t}\n\n\thandle {\n\t\trespond \"handle\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/static/*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"strip_path_prefix\": \"/static\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"static\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/api/*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"api\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"handle\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/header.caddyfiletest",
    "content": ":80 {\n\theader Denis \"Ritchie\"\n\theader +Edsger \"Dijkstra\"\n\theader ?John \"von Neumann\"\n\theader -Wolfram\n\theader {\n\t\tGrace: \"Hopper\" # some users habitually suffix field names with a colon\n\t\t+Ray \"Solomonoff\"\n\t\t?Tim \"Berners-Lee\"\n\t\tdefer\n\t}\n\t@images path /images/*\n\theader @images {\n\t\tCache-Control \"public, max-age=3600, stale-while-revalidate=86400\"\n\t\tmatch {\n\t\t\tstatus 200\n\t\t}\n\t}\n\theader {\n\t\t+Link \"Foo\"\n\t\t+Link \"Bar\"\n\t\tmatch status 200\n\t}\n\theader >Set Defer\n\theader >Replace Deferred Replacement\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/images/*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"require\": {\n\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t200\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Cache-Control\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"public, max-age=3600, stale-while-revalidate=86400\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Denis\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Ritchie\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"add\": 
{\n\t\t\t\t\t\t\t\t\t\t\t\"Edsger\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Dijkstra\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"require\": {\n\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"John\": null\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"John\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"von Neumann\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"deferred\": true,\n\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\"Wolfram\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"add\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Ray\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Solomonoff\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"deferred\": true,\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Grace\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Hopper\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"require\": {\n\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"Tim\": null\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Tim\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Berners-Lee\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"add\": 
{\n\t\t\t\t\t\t\t\t\t\t\t\"Link\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"Bar\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"require\": {\n\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t200\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"deferred\": true,\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Set\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Defer\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"deferred\": true,\n\t\t\t\t\t\t\t\t\t\t\"replace\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Replace\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"Replacement\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"search_regexp\": \"Deferred\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/header_placeholder_search.caddyfiletest",
    "content": ":80 {\n\theader Test-Static \":443\" \"STATIC-WORKS\"\n\theader Test-Dynamic \":{http.request.local.port}\" \"DYNAMIC-WORKS\"\n\theader Test-Complex \"port-{http.request.local.port}-end\" \"COMPLEX-{http.request.method}\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"replace\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Test-Static\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"STATIC-WORKS\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"search_regexp\": \":443\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"replace\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Test-Dynamic\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"DYNAMIC-WORKS\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"search_regexp\": \":{http.request.local.port}\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\"replace\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Test-Complex\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"COMPLEX-{http.request.method}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"search_regexp\": \"port-{http.request.local.port}-end\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc.caddyfiletest",
    "content": "example.com {\n\trespond <<EOF\n    <html>\n      <head><title>Foo</title>\n      <body>Foo</body>\n    </html>\n    EOF 200\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"\\u003chtml\\u003e\\n  \\u003chead\\u003e\\u003ctitle\\u003eFoo\\u003c/title\\u003e\\n  \\u003cbody\\u003eFoo\\u003c/body\\u003e\\n\\u003c/html\\u003e\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc_extra_indentation.caddyfiletest",
    "content": ":80\n\nhandle {\n\trespond <<END\n        line1\n        line2\n  END\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"      line1\\n      line2\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc_incomplete.caddyfiletest",
    "content": ":80\n\nhandle {\n    respond <<EOF\n    Hello\n# missing EOF marker\n}\n----------\nmismatched leading whitespace in heredoc <<EOF on line #5 [    Hello], expected whitespace [# missing ] to match the closing marker"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc_invalid_marker.caddyfiletest",
    "content": ":80\n\nhandle {\n    respond <<END!\n    Hello\n    END!\n}\n----------\nheredoc marker on line #4 must contain only alpha-numeric characters, dashes and underscores; got 'END!'"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc_mismatched_whitespace.caddyfiletest",
    "content": ":80\n\nhandle {\n\trespond <<END\n\tline1\n\tline2\n  END\n}\n----------\nmismatched leading whitespace in heredoc <<END on line #5 [\tline1], expected whitespace [  ] to match the closing marker"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc_missing_marker.caddyfiletest",
    "content": ":80\n\nhandle {\n    respond << \n    Hello\n    END\n}\n----------\nparsing caddyfile tokens for 'handle': unrecognized directive: Hello - are you sure your Caddyfile structure (nesting and braces) is correct?, at Caddyfile:7"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/heredoc_too_many_angle_brackets.caddyfiletest",
    "content": ":80\n\nhandle {\n    respond <<<END\n    Hello\n    END\n}\n----------\ntoo many '<' for heredoc on line #4; only use two, for example <<END"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_only_hostnames.caddyfiletest",
    "content": "# https://github.com/caddyserver/caddy/issues/3977\nhttp://* {\n\trespond \"Hello, world!\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello, world!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_only_on_any_address.caddyfiletest",
    "content": ":80 {\n\trespond /version 200 {\n\t\tbody \"hello from localhost\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_only_on_domain.caddyfiletest",
    "content": "http://a.caddy.localhost {\n\trespond /version 200 {\n\t\tbody \"hello from localhost\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.caddy.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_only_on_hostless_block.caddyfiletest",
    "content": "# Issue #4113\n:80, http://example.com {\n\trespond \"foo\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"foo\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_only_on_localhost.caddyfiletest",
    "content": "localhost:80 {\n\trespond /version 200 {\n\t\tbody \"hello from localhost\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_only_on_non_standard_port.caddyfiletest",
    "content": "http://a.caddy.localhost:81 {\n\trespond /version 200 {\n\t\tbody \"hello from localhost\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":81\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.caddy.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"a.caddy.localhost\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/http_valid_directive_like_site_address.caddyfiletest",
    "content": "http://handle {\n\tfile_server\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"handle\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/https_on_domain.caddyfiletest",
    "content": "a.caddy.localhost {\n\trespond /version 200 {\n\t\tbody \"hello from localhost\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.caddy.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_args_file.caddyfiletest",
    "content": "example.com\n\nimport testdata/import_respond.txt Groot Rocket\nimport testdata/import_respond.txt you \"the confused man\"\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"'I am Groot', hears Rocket\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"'I am you', hears the confused man\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_args_snippet.caddyfiletest",
    "content": "(logging) {\n\tlog {\n\t\toutput file /var/log/caddy/{args[0]}.access.log\n\t}\n}\n\na.example.com {\n\timport logging a.example.com\n}\n\nb.example.com {\n\timport logging b.example.com\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\",\n\t\t\t\t\t\"http.log.access.log1\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/var/log/caddy/a.example.com.access.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log1\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/var/log/caddy/b.example.com.access.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log1\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"a.example.com\": [\n\t\t\t\t\t\t\t\t\"log0\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"b.example.com\": [\n\t\t\t\t\t\t\t\t\"log1\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_args_snippet_env_placeholder.caddyfiletest",
    "content": "(foo) {\n\trespond {env.FOO}\n}\n\n:80 {\n\timport foo\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"{env.FOO}\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_anonymous.caddyfiletest",
    "content": "(site) {\n    http://{args[0]} https://{args[0]} {\n        {block}\n    }\n}\nimport site test.domain {\n    { \n        header_up Host {host}\n        header_up X-Real-IP {remote_host}\n    }\n}\n----------\nanonymous blocks are not supported"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_snippet.caddyfiletest",
    "content": "(snippet) {\n\theader {\n\t\t{block}\n\t}\n}\n\nexample.com {\n\timport snippet {\n\t\tfoo bar\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"bar\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_snippet_args.caddyfiletest",
    "content": "(snippet) {\n\t{block}\n}\n\nexample.com {\n\timport snippet {\n\t\theader foo bar\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"bar\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_snippet_non_replaced_block.caddyfiletest",
    "content": "(snippet) {\n\theader {\n\t\treverse_proxy localhost:3000\n\t\t{block}\n\t}\n}\n\nexample.com {\n\timport snippet\n}\n---------- \n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Reverse_proxy\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_snippet_non_replaced_block_from_separate_file.caddyfiletest",
    "content": "import testdata/issue_7518_unused_block_panic_snippets.conf\n\nexample.com {\n\timport snippet\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Reverse_proxy\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_snippet_non_replaced_key_block.caddyfiletest",
    "content": "(snippet) {\n\theader {\n\t\treverse_proxy localhost:3000\n\t\t{blocks.content_type}\n\t}\n}\n\nexample.com {\n\timport snippet\n}\n---------- \n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Reverse_proxy\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_block_with_site_block.caddyfiletest",
    "content": "(site) {\n\thttps://{args[0]} {\n\t\t{block}\n\t}\n}\n\nimport site test.domain {\n\treverse_proxy http://192.168.1.1:8080 {\n\t\theader_up Host {host}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"test.domain\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.host}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"192.168.1.1:8080\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_blocks_snippet.caddyfiletest",
    "content": "(snippet) {\n\theader {\n\t\t{blocks.foo}\n\t}\n\theader {\n\t\t{blocks.bar}\n\t}\n}\n\nexample.com {\n\timport snippet {\n\t\tfoo {\n\t\t\tfoo a\n\t\t}\n\t\tbar {\n\t\t\tbar b\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"a\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"b\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_blocks_snippet_nested.caddyfiletest",
    "content": "(snippet) {\n\theader {\n\t\t{blocks.bar}\n\t}\n\timport sub_snippet {\n\t\tbar {\n\t\t\t{blocks.foo}\n\t\t}\n\t}\n}\n(sub_snippet) {\n\theader {\n\t\t{blocks.bar}\n\t}\n}\nexample.com {\n\timport snippet {\n\t\tfoo {\n\t\t\tfoo a\n\t\t}\n\t\tbar {\n\t\t\tbar b\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"b\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"a\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/import_cycle.caddyfiletest",
    "content": "(import1) {\n\timport import2\n}\n\n(import2) {\n\timport import1\n}\n\nimport import1\n\n----------\na cycle of imports exists between Caddyfile:import2 and Caddyfile:import1"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/intercept_response.caddyfiletest",
    "content": "localhost\n\nrespond \"To intercept\"\n\nintercept {\n\t@500 status 500\n\treplace_status @500 400\n\n\t@all status 2xx 3xx 4xx 5xx\n\treplace_status @all {http.error.status_code}\n\n\treplace_status {http.error.status_code}\n\n\t@accel header X-Accel-Redirect *\n\thandle_response @accel {\n\t\trespond \"Header X-Accel-Redirect!\"\n\t}\n\n\t@another {\n\t\theader X-Another *\n\t}\n\thandle_response @another {\n\t\trespond \"Header X-Another!\"\n\t}\n\n\t@401 status 401\n\thandle_response @401 {\n\t\trespond \"Status 401!\"\n\t}\n\n\thandle_response {\n\t\trespond \"Any! This should be last in the JSON!\"\n\t}\n\n\t@403 {\n\t\tstatus 403\n\t}\n\thandle_response @403 {\n\t\trespond \"Status 403!\"\n\t}\n\n\t@multi {\n\t\tstatus 401 403\n\t\tstatus 404\n\t\theader Foo *\n\t\theader Bar *\n\t}\n\thandle_response @multi {\n\t\trespond \"Headers Foo, Bar AND statuses 401, 403 and 404!\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle_response\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t500\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 400\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t2,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t3,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t4,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t5\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": \"{http.error.status_code}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-Accel-Redirect\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Header X-Accel-Redirect!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-Another\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Header X-Another!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t401\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Status 401!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t403\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Status 403!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t401,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t403,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t404\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Headers Foo, Bar AND statuses 401, 403 and 404!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": \"{http.error.status_code}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Any! 
This should be last in the JSON!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"intercept\"\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"To intercept\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/invoke_named_routes.caddyfiletest",
    "content": "&(first) {\n\t@first path /first\n\tvars @first first 1\n\trespond \"first\"\n}\n\n&(second) {\n\trespond \"second\"\n}\n\n:8881 {\n\tinvoke first\n\troute {\n\t\tinvoke second\n\t}\n}\n\n:8882 {\n\thandle {\n\t\tinvoke second\n\t}\n}\n\n:8883 {\n\trespond \"no invoke\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"invoke\",\n\t\t\t\t\t\t\t\t\t\"name\": \"first\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"invoke\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"second\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"named_routes\": {\n\t\t\t\t\t\t\"first\": {\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"first\": 1,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/first\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"first\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"second\": {\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"second\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8882\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"invoke\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"second\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"named_routes\": {\n\t\t\t\t\t\t\"second\": {\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"second\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8883\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"no invoke\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/invoke_undefined_named_route.caddyfiletest",
    "content": "example.com {\n\tinvoke foo\n}\n----------\ncannot invoke named route 'foo', which was not defined"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_add.caddyfiletest",
    "content": ":80 {\n\tlog\n\n\tvars foo foo\n\n\tlog_append const bar\n\tlog_append vars foo\n\tlog_append placeholder {path}\n\n\tlog_append /only-for-this-path secret value\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"foo\": \"foo\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/only-for-this-path\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"log_append\",\n\t\t\t\t\t\t\t\t\t\"key\": \"secret\",\n\t\t\t\t\t\t\t\t\t\"value\": \"value\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"log_append\",\n\t\t\t\t\t\t\t\t\t\"key\": \"const\",\n\t\t\t\t\t\t\t\t\t\"value\": \"bar\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"log_append\",\n\t\t\t\t\t\t\t\t\t\"key\": \"vars\",\n\t\t\t\t\t\t\t\t\t\"value\": \"foo\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"log_append\",\n\t\t\t\t\t\t\t\t\t\"key\": \"placeholder\",\n\t\t\t\t\t\t\t\t\t\"value\": \"{http.request.uri.path}\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_append_encoder.caddyfiletest",
    "content": "{\n\tlog {\n\t\tformat append {\n\t\t\twrap json\n\t\t\tfields {\n\t\t\t\twrap \"foo\"\n\t\t\t}\n\t\t\tenv {env.EXAMPLE}\n\t\t\tint 1\n\t\t\tfloat 1.1\n\t\t\tbool true\n\t\t\tstring \"string\"\n\t\t}\n\t}\n}\n\n:80 {\n\trespond \"Hello, World!\"\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"fields\": {\n\t\t\t\t\t\t\"bool\": true,\n\t\t\t\t\t\t\"env\": \"{env.EXAMPLE}\",\n\t\t\t\t\t\t\"float\": 1.1,\n\t\t\t\t\t\t\"int\": 1,\n\t\t\t\t\t\t\"string\": \"string\",\n\t\t\t\t\t\t\"wrap\": \"foo\"\n\t\t\t\t\t},\n\t\t\t\t\t\"format\": \"append\",\n\t\t\t\t\t\"wrap\": {\n\t\t\t\t\t\t\"format\": \"json\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"Hello, World!\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_except_catchall_blocks.caddyfiletest",
    "content": "http://localhost:2020 {\n\tlog\n\tlog_skip /first-hidden*\n\tlog_skip /second-hidden*\n\trespond 200\n}\n\n:2020 {\n\trespond 418\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":2020\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"log_skip\": true\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/second-hidden*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"log_skip\": true\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/first-hidden*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": 
true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 418\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"localhost\": [\n\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"skip_unmapped_hosts\": true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_filter_no_wrap.caddyfiletest",
    "content": ":80\n\nlog {\n\toutput stdout\n\tformat filter {\n\t\tfields {\n\t\t\trequest>headers>Server delete\n\t\t}\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"output\": \"stdout\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"fields\": {\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eServer\": {\n\t\t\t\t\t\t\t\"filter\": \"delete\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"format\": \"filter\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"log0\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_filter_with_header.txt",
    "content": "localhost {\n\tlog {\n\t\toutput file ./caddy.access.log\n\t}\n\tlog health_check_log {\n\t\toutput file ./caddy.access.health.log\n\t\tno_hostname\n\t}\n\tlog general_log {\n\t\toutput file ./caddy.access.general.log\n\t\tno_hostname\n\t}\n\t@healthCheck `header_regexp('User-Agent', '^some-regexp$') || path('/healthz*')`\n\thandle @healthCheck {\n\t\tlog_name health_check_log general_log\n\t\trespond \"Healthy\"\n\t}\n\n\thandle {\n\t\trespond \"Hello World\"\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.general_log\",\n\t\t\t\t\t\"http.log.access.health_check_log\",\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"general_log\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"./caddy.access.general.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.general_log\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"health_check_log\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"./caddy.access.health.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.health_check_log\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"./caddy.access.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": 
\"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"access_logger_names\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"health_check_log\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"general_log\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Healthy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"expression\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expr\": \"header_regexp('User-Agent', '^some-regexp$') || path('/healthz*')\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"healthCheck\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello World\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"localhost\": [\n\t\t\t\t\t\t\t\t\"log0\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_filters.caddyfiletest",
    "content": ":80\n\nlog {\n\toutput stdout\n\tformat filter {\n\t\twrap console\n\n\t\t# long form, with \"fields\" wrapper\n\t\tfields {\n\t\t\turi query {\n\t\t\t\treplace foo REDACTED\n\t\t\t\tdelete bar\n\t\t\t\thash baz\n\t\t\t}\n\t\t}\n\n\t\t# short form, flatter structure\n\t\trequest>headers>Authorization replace REDACTED\n\t\trequest>headers>Server delete\n\t\trequest>headers>Cookie cookie {\n\t\t\treplace foo REDACTED\n\t\t\tdelete bar\n\t\t\thash baz\n\t\t}\n\t\trequest>remote_ip ip_mask {\n\t\t\tipv4 24\n\t\t\tipv6 32\n\t\t}\n\t\trequest>client_ip ip_mask 16 32\n\t\trequest>headers>Regexp regexp secret REDACTED\n\t\trequest>headers>Hash hash\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"output\": \"stdout\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"fields\": {\n\t\t\t\t\t\t\"request\\u003eclient_ip\": {\n\t\t\t\t\t\t\t\"filter\": \"ip_mask\",\n\t\t\t\t\t\t\t\"ipv4_cidr\": 16,\n\t\t\t\t\t\t\t\"ipv6_cidr\": 32\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eAuthorization\": {\n\t\t\t\t\t\t\t\"filter\": \"replace\",\n\t\t\t\t\t\t\t\"value\": \"REDACTED\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eCookie\": {\n\t\t\t\t\t\t\t\"actions\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"name\": \"foo\",\n\t\t\t\t\t\t\t\t\t\"type\": \"replace\",\n\t\t\t\t\t\t\t\t\t\"value\": \"REDACTED\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"name\": \"bar\",\n\t\t\t\t\t\t\t\t\t\"type\": \"delete\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"name\": \"baz\",\n\t\t\t\t\t\t\t\t\t\"type\": \"hash\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"filter\": \"cookie\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eHash\": {\n\t\t\t\t\t\t\t\"filter\": \"hash\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eRegexp\": 
{\n\t\t\t\t\t\t\t\"filter\": \"regexp\",\n\t\t\t\t\t\t\t\"regexp\": \"secret\",\n\t\t\t\t\t\t\t\"value\": \"REDACTED\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eServer\": {\n\t\t\t\t\t\t\t\"filter\": \"delete\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eremote_ip\": {\n\t\t\t\t\t\t\t\"filter\": \"ip_mask\",\n\t\t\t\t\t\t\t\"ipv4_cidr\": 24,\n\t\t\t\t\t\t\t\"ipv6_cidr\": 32\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"uri\": {\n\t\t\t\t\t\t\t\"actions\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"parameter\": \"foo\",\n\t\t\t\t\t\t\t\t\t\"type\": \"replace\",\n\t\t\t\t\t\t\t\t\t\"value\": \"REDACTED\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"parameter\": \"bar\",\n\t\t\t\t\t\t\t\t\t\"type\": \"delete\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"parameter\": \"baz\",\n\t\t\t\t\t\t\t\t\t\"type\": \"hash\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"filter\": \"query\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"format\": \"filter\",\n\t\t\t\t\t\"wrap\": {\n\t\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"log0\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_multi_logger_name.caddyfiletest",
    "content": "(log-both) {\n\tlog {args[0]}-json {\n\t\thostnames {args[0]}\n\t\toutput file /var/log/{args[0]}.log\n\t\tformat json\n\t}\n\tlog {args[0]}-console {\n\t\thostnames {args[0]}\n\t\toutput file /var/log/{args[0]}.json\n\t\tformat console\n\t}\n}\n\n*.example.com {\n\t# Subdomains log to multiple files at once, with\n\t# different output files and formats.\n\timport log-both foo.example.com\n\timport log-both bar.example.com\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"bar.example.com-console\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/var/log/bar.example.com.json\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.bar.example.com-console\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"bar.example.com-json\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/var/log/bar.example.com.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"json\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.bar.example.com-json\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.bar.example.com-console\",\n\t\t\t\t\t\"http.log.access.bar.example.com-json\",\n\t\t\t\t\t\"http.log.access.foo.example.com-console\",\n\t\t\t\t\t\"http.log.access.foo.example.com-json\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"foo.example.com-console\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/var/log/foo.example.com.json\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.foo.example.com-console\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"foo.example.com-json\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/var/log/foo.example.com.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": 
\"json\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.foo.example.com-json\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"bar.example.com\": [\n\t\t\t\t\t\t\t\t\"bar.example.com-json\",\n\t\t\t\t\t\t\t\t\"bar.example.com-console\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"foo.example.com\": [\n\t\t\t\t\t\t\t\t\"foo.example.com-json\",\n\t\t\t\t\t\t\t\t\"foo.example.com-console\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_multiple_regexp_filters.caddyfiletest",
    "content": ":80\n\nlog {\n\toutput stdout\n\tformat filter {\n\t\twrap console\n\t\t\n\t\t# Multiple regexp filters for the same field - this should work now!\n\t\trequest>headers>Authorization regexp \"Bearer\\s+([A-Za-z0-9_-]+)\" \"Bearer [REDACTED]\"\n\t\trequest>headers>Authorization regexp \"Basic\\s+([A-Za-z0-9+/=]+)\" \"Basic [REDACTED]\"\n\t\trequest>headers>Authorization regexp \"token=([^&\\s]+)\" \"token=[REDACTED]\"\n\t\t\n\t\t# Single regexp filter - this should continue to work as before\n\t\trequest>headers>Cookie regexp \"sessionid=[^;]+\" \"sessionid=[REDACTED]\"\n\t\t\n\t\t# Mixed filters (non-regexp) - these should work normally\n\t\trequest>headers>Server delete\n\t\trequest>remote_ip ip_mask {\n\t\t\tipv4 24\n\t\t\tipv6 32\n\t\t}\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"output\": \"stdout\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"fields\": {\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eAuthorization\": {\n\t\t\t\t\t\t\t\"filter\": \"multi_regexp\",\n\t\t\t\t\t\t\t\"operations\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"regexp\": \"Bearer\\\\s+([A-Za-z0-9_-]+)\",\n\t\t\t\t\t\t\t\t\t\"value\": \"Bearer [REDACTED]\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"regexp\": \"Basic\\\\s+([A-Za-z0-9+/=]+)\",\n\t\t\t\t\t\t\t\t\t\"value\": \"Basic [REDACTED]\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"regexp\": \"token=([^\\u0026\\\\s]+)\",\n\t\t\t\t\t\t\t\t\t\"value\": \"token=[REDACTED]\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eCookie\": {\n\t\t\t\t\t\t\t\"filter\": \"regexp\",\n\t\t\t\t\t\t\t\"regexp\": \"sessionid=[^;]+\",\n\t\t\t\t\t\t\t\"value\": \"sessionid=[REDACTED]\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eheaders\\u003eServer\": {\n\t\t\t\t\t\t\t\"filter\": 
\"delete\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"request\\u003eremote_ip\": {\n\t\t\t\t\t\t\t\"filter\": \"ip_mask\",\n\t\t\t\t\t\t\t\"ipv4_cidr\": 24,\n\t\t\t\t\t\t\t\"ipv6_cidr\": 32\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"format\": \"filter\",\n\t\t\t\t\t\"wrap\": {\n\t\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"log0\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n} "
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_override_hostname.caddyfiletest",
    "content": "*.example.com {\n\tlog {\n\t\thostnames foo.example.com bar.example.com\n\t\toutput file /foo-bar.txt\n\t}\n\tlog {\n\t\thostnames baz.example.com\n\t\toutput file /baz.txt\n\t}\n}\n\nexample.com:8443 {\n\tlog {\n\t\toutput file /port.txt\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\",\n\t\t\t\t\t\"http.log.access.log1\",\n\t\t\t\t\t\"http.log.access.log2\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/foo-bar.txt\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log1\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/baz.txt\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log1\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log2\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"/port.txt\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log2\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"bar.example.com\": [\n\t\t\t\t\t\t\t\t\"log0\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"baz.example.com\": [\n\t\t\t\t\t\t\t\t\"log1\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"foo.example.com\": [\n\t\t\t\t\t\t\t\t\"log0\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"example.com\": [\n\t\t\t\t\t\t\t\t\"log2\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_override_name_multiaccess.caddyfiletest",
    "content": "{\n\tlog access-console {\n\t\tinclude http.log.access.foo\n\t\toutput file access-localhost.log\n\t\tformat console\n\t}\n\n\tlog access-json {\n\t\tinclude http.log.access.foo\n\t\toutput file access-localhost.json\n\t\tformat json\n\t}\n}\n\nhttp://localhost:8881 {\n\tlog foo\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"access-console\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"access-localhost.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.foo\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"access-json\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"access-localhost.json\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"json\"\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.foo\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.foo\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"localhost\": [\n\t\t\t\t\t\t\t\t\"foo\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_override_name_multiaccess_debug.caddyfiletest",
    "content": "{\n\tdebug\n\n\tlog access-console {\n\t\tinclude http.log.access.foo\n\t\toutput file access-localhost.log\n\t\tformat console\n\t}\n\n\tlog access-json {\n\t\tinclude http.log.access.foo\n\t\toutput file access-localhost.json\n\t\tformat json\n\t}\n}\n\nhttp://localhost:8881 {\n\tlog foo\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"access-console\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"access-localhost.log\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"console\"\n\t\t\t\t},\n\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.foo\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"access-json\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"filename\": \"access-localhost.json\",\n\t\t\t\t\t\"output\": \"file\"\n\t\t\t\t},\n\t\t\t\t\"encoder\": {\n\t\t\t\t\t\"format\": \"json\"\n\t\t\t\t},\n\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.foo\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"default\": {\n\t\t\t\t\"level\": \"DEBUG\",\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.foo\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"localhost\": [\n\t\t\t\t\t\t\t\t\"foo\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_roll_days.caddyfiletest",
    "content": ":80\n\nlog one {\n\toutput file /var/log/access.log {\n\t\tmode 0644\n\t\tdir_mode 0755\n\t\troll_size 1gb\n\t\troll_uncompressed\n\t\troll_compression none\n\t\troll_local_time\n\t\troll_keep 5\n\t\troll_keep_for 90d\n\t}\n}\nlog two {\n\toutput file /var/log/access-2.log {\n\t\tmode 0777\n\t\tdir_mode from_file\n\t\troll_size 1gib\n\t\troll_compression zstd\n\t\troll_interval 12h\n\t\troll_at 00:00 06:00 12:00,18:00\n\t\troll_minutes 10 40 45,46\n\t\troll_keep 10\n\t\troll_keep_for 90d\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.one\",\n\t\t\t\t\t\"http.log.access.two\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"one\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"dir_mode\": \"0755\",\n\t\t\t\t\t\"filename\": \"/var/log/access.log\",\n\t\t\t\t\t\"mode\": \"0644\",\n\t\t\t\t\t\"output\": \"file\",\n\t\t\t\t\t\"roll_compression\": \"none\",\n\t\t\t\t\t\"roll_gzip\": false,\n\t\t\t\t\t\"roll_keep\": 5,\n\t\t\t\t\t\"roll_keep_days\": 90,\n\t\t\t\t\t\"roll_local_time\": true,\n\t\t\t\t\t\"roll_size_mb\": 954\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.one\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"two\": {\n\t\t\t\t\"writer\": {\n\t\t\t\t\t\"dir_mode\": \"from_file\",\n\t\t\t\t\t\"filename\": \"/var/log/access-2.log\",\n\t\t\t\t\t\"mode\": \"0777\",\n\t\t\t\t\t\"output\": \"file\",\n\t\t\t\t\t\"roll_at\": [\n\t\t\t\t\t\t\"00:00\",\n\t\t\t\t\t\t\"06:00\",\n\t\t\t\t\t\t\"12:00\",\n\t\t\t\t\t\t\"18:00\"\n\t\t\t\t\t],\n\t\t\t\t\t\"roll_compression\": \"zstd\",\n\t\t\t\t\t\"roll_interval\": 43200000000000,\n\t\t\t\t\t\"roll_keep\": 10,\n\t\t\t\t\t\"roll_keep_days\": 90,\n\t\t\t\t\t\"roll_minutes\": [\n\t\t\t\t\t\t10,\n\t\t\t\t\t\t40,\n\t\t\t\t\t\t45,\n\t\t\t\t\t\t46\n\t\t\t\t\t],\n\t\t\t\t\t\"roll_size_mb\": 1024\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.two\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": 
{\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"two\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_sampling.caddyfiletest",
    "content": ":80 {\n\tlog {\n\t\tsampling {\n\t\t\tinterval 300\n\t\t\tfirst 50\n\t\t\tthereafter 40\n\t\t}\n\t}\n}\n----------\n{\n\t\"logging\": {\n\t\t\"logs\": {\n\t\t\t\"default\": {\n\t\t\t\t\"exclude\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"log0\": {\n\t\t\t\t\"sampling\": {\n\t\t\t\t\t\"interval\": 300,\n\t\t\t\t\t\"first\": 50,\n\t\t\t\t\t\"thereafter\": 40\n\t\t\t\t},\n\t\t\t\t\"include\": [\n\t\t\t\t\t\"http.log.access.log0\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t},\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"default_logger_name\": \"log0\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/log_skip_hosts.caddyfiletest",
    "content": "one.example.com {\n\tlog\n}\n\ntwo.example.com {\n}\n\nthree.example.com {\n}\n\nexample.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"three.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"one.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"two.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"logs\": {\n\t\t\t\t\t\t\"logger_names\": {\n\t\t\t\t\t\t\t\"one.example.com\": [\n\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"skip_hosts\": [\n\t\t\t\t\t\t\t\"example.com\",\n\t\t\t\t\t\t\t\"three.example.com\",\n\t\t\t\t\t\t\t\"two.example.com\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/map_and_vars_with_raw_types.caddyfiletest",
    "content": "example.com\n\nmap {host} {my_placeholder} {magic_number} {\n\t# Should output boolean \"true\" and an integer\n\texample.com true 3\n\n\t# Should output a string and null\n\tfoo.example.com \"string value\"\n\n\t# Should output two strings (quoted int)\n\t(.*)\\.example.com \"${1} subdomain\" \"5\"\n\n\t# Should output null and a string (quoted int)\n\t~.*\\.net$ - `7`\n\n\t# Should output a float and the string \"false\"\n\t~.*\\.xyz$ 123.456 \"false\"\n\n\t# Should output two strings, second being escaped quote\n\tdefault \"unknown domain\" \\\"\"\"\n}\n\nvars foo bar\nvars {\n\tabc true\n\tdef 1\n\tghi 2.3\n\tjkl \"mn op\"\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"defaults\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"unknown domain\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"\\\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"destinations\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{my_placeholder}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{magic_number}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"map\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"mappings\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\": \"example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\ttrue,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t3\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\": \"foo.example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"string value\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnull\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\": \"(.*)\\\\.example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"${1} subdomain\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"5\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input_regexp\": \".*\\\\.net$\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnull,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"7\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input_regexp\": \".*\\\\.xyz$\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t123.456,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"false\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"source\": \"{http.request.host}\"\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"abc\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"def\": 1,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"ghi\": 2.3,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"jkl\": \"mn op\"\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"foo\": \"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": 
true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/matcher_outside_site_block.caddyfiletest",
    "content": "@foo {\n\tpath /foo\n}\n\nhandle {\n\trespond \"should not work\"\n}\n----------\nrequest matchers may not be defined globally, they must be in a site block; found @foo, at Caddyfile:1"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/matcher_syntax.caddyfiletest",
    "content": ":80 {\n\t@matcher {\n\t\tmethod GET\n\t}\n\trespond @matcher \"get\"\n\n\t@matcher2 method POST\n\trespond @matcher2 \"post\"\n\n\t@matcher3 not method PUT\n\trespond @matcher3 \"not put\"\n\n\t@matcher4 vars \"{http.request.uri}\" \"/vars-matcher\"\n\trespond @matcher4 \"from vars matcher\"\n\n\t@matcher5 vars_regexp static \"{http.request.uri}\" `\\.([a-f0-9]{6})\\.(css|js)$`\n\trespond @matcher5 \"from vars_regexp matcher with name\"\n\n\t@matcher6 vars_regexp \"{http.request.uri}\" `\\.([a-f0-9]{6})\\.(css|js)$`\n\trespond @matcher6 \"from vars_regexp matcher without name\"\n\n\t@matcher7 `path('/foo*') && method('GET')`\n\trespond @matcher7 \"inline expression matcher shortcut\"\n\n\t@matcher8 {\n\t\theader Foo bar\n\t\theader Foo foobar\n\t\theader Bar foo\n\t}\n\trespond @matcher8 \"header matcher merging values of the same field\"\n\n\t@matcher9 {\n\t\tquery foo=bar foo=baz bar=foo\n\t\tquery bar=baz\n\t}\n\trespond @matcher9 \"query matcher merging pairs with the same keys\"\n\n\t@matcher10 {\n\t\theader !Foo\n\t\theader Bar foo\n\t}\n\trespond @matcher10 \"header matcher with null field matcher\"\n\n\t@matcher11 remote_ip private_ranges\n\trespond @matcher11 \"remote_ip matcher with private ranges\"\n\n\t@matcher12 client_ip private_ranges\n\trespond @matcher12 \"client_ip matcher with private ranges\"\n\n\t@matcher13 {\n\t\tremote_ip 1.1.1.1\n\t\tremote_ip 2.2.2.2\n\t}\n\trespond @matcher13 \"remote_ip merged\"\n\n\t@matcher14 {\n\t\tclient_ip 1.1.1.1\n\t\tclient_ip 2.2.2.2\n\t}\n\trespond @matcher14 \"client_ip merged\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"method\": [\n\t\t\t\t\t\t\t\t\t\t\"GET\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"get\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"method\": [\n\t\t\t\t\t\t\t\t\t\t\"POST\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"post\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"method\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"PUT\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"not put\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"vars\": {\n\t\t\t\t\t\t\t\t\t\t\"{http.request.uri}\": [\n\t\t\t\t\t\t\t\t\t\t\t\"/vars-matcher\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"from vars matcher\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"vars_regexp\": {\n\t\t\t\t\t\t\t\t\t\t\"{http.request.uri}\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"static\",\n\t\t\t\t\t\t\t\t\t\t\t\"pattern\": \"\\\\.([a-f0-9]{6})\\\\.(css|js)$\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"from vars_regexp matcher with name\",\n\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"vars_regexp\": {\n\t\t\t\t\t\t\t\t\t\t\"{http.request.uri}\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"matcher6\",\n\t\t\t\t\t\t\t\t\t\t\t\"pattern\": \"\\\\.([a-f0-9]{6})\\\\.(css|js)$\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"from vars_regexp matcher without name\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"expression\": {\n\t\t\t\t\t\t\t\t\t\t\"expr\": \"path('/foo*') \\u0026\\u0026 method('GET')\",\n\t\t\t\t\t\t\t\t\t\t\"name\": \"matcher7\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"inline expression matcher shortcut\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"header\": {\n\t\t\t\t\t\t\t\t\t\t\"Bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\"foo\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\"foobar\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"header matcher merging values of the same field\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\"foo\",\n\t\t\t\t\t\t\t\t\t\t\t\"baz\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"foo\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\"baz\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"query matcher merging pairs with the same keys\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"header\": {\n\t\t\t\t\t\t\t\t\t\t\"Bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\"foo\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"Foo\": null\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"header matcher with null field matcher\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"remote_ip\": {\n\t\t\t\t\t\t\t\t\t\t\"ranges\": [\n\t\t\t\t\t\t\t\t\t\t\t\"192.168.0.0/16\",\n\t\t\t\t\t\t\t\t\t\t\t\"172.16.0.0/12\",\n\t\t\t\t\t\t\t\t\t\t\t\"10.0.0.0/8\",\n\t\t\t\t\t\t\t\t\t\t\t\"127.0.0.1/8\",\n\t\t\t\t\t\t\t\t\t\t\t\"fd00::/8\",\n\t\t\t\t\t\t\t\t\t\t\t\"::1\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"remote_ip matcher with private ranges\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"client_ip\": {\n\t\t\t\t\t\t\t\t\t\t\"ranges\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\"192.168.0.0/16\",\n\t\t\t\t\t\t\t\t\t\t\t\"172.16.0.0/12\",\n\t\t\t\t\t\t\t\t\t\t\t\"10.0.0.0/8\",\n\t\t\t\t\t\t\t\t\t\t\t\"127.0.0.1/8\",\n\t\t\t\t\t\t\t\t\t\t\t\"fd00::/8\",\n\t\t\t\t\t\t\t\t\t\t\t\"::1\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"client_ip matcher with private ranges\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"remote_ip\": {\n\t\t\t\t\t\t\t\t\t\t\"ranges\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"2.2.2.2\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"remote_ip merged\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"client_ip\": {\n\t\t\t\t\t\t\t\t\t\t\"ranges\": [\n\t\t\t\t\t\t\t\t\t\t\t\"1.1.1.1\",\n\t\t\t\t\t\t\t\t\t\t\t\"2.2.2.2\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"client_ip merged\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/matchers_in_route.caddyfiletest",
    "content": ":80 {\n\troute {\n\t\t# unused matchers should not panic\n\t\t# see https://github.com/caddyserver/caddy/issues/3745\n\t\t@matcher1 path /path1\n\t\t@matcher2 path /path2\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/method_directive.caddyfiletest",
    "content": ":8080 {\n\tmethod FOO\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"method\": \"FOO\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/metrics_disable_om.caddyfiletest",
    "content": ":80 {\n\tmetrics /metrics {\n\t\tdisable_openmetrics\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/metrics\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"disable_openmetrics\": true,\n\t\t\t\t\t\t\t\t\t\"handler\": \"metrics\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/metrics_merge_options.caddyfiletest",
    "content": "{\n\tmetrics\n\tservers :80 {\n\t\tmetrics {\n\t\t\tper_host\n\t\t}\n\t}\n}\n:80 {\n\trespond \"Hello\"\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"Hello\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"metrics\": {\n\t\t\t\t\"per_host\": true\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/metrics_perhost.caddyfiletest",
    "content": "{\n\tservers :80 {\n\t\tmetrics {\n\t\t\tper_host\n\t\t}\n\t}\n}\n:80 {\n\trespond \"Hello\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"Hello\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"metrics\": {\n\t\t\t\t\"per_host\": true\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/metrics_syntax.caddyfiletest",
    "content": ":80 {\n\tmetrics /metrics\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/metrics\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"metrics\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/not_block_merging.caddyfiletest",
    "content": ":80\n\n@test {\n\tnot {\n\t\theader Abc \"123\"\n\t\theader Bcd \"123\"\n\t}\n}\nrespond @test 403\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"header\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"Abc\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"123\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"Bcd\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"123\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"status_code\": 403\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_expanded_form.caddyfiletest",
    "content": ":8886\n\nroute {\n\t# Add trailing slash for directory requests\n\t@canonicalPath {\n\t\tfile {\n\t\t\ttry_files {path}/index.php\n\t\t}\n\t\tnot path */\n\t}\n\tredir @canonicalPath {orig_path}/{orig_?query} 308\n\n\t# If the requested file does not exist, try index files\n\t@indexFiles {\n\t\tfile {\n\t\t\ttry_files {path} {path}/index.php index.php\n\t\t\tsplit_path .php\n\t\t}\n\t}\n\trewrite @indexFiles {file_match.relative}\n\n\t# Proxy PHP files to the FastCGI responder\n\t@phpFiles {\n\t\tpath *.php\n\t}\n\treverse_proxy @phpFiles 127.0.0.1:9000 {\n\t\ttransport fastcgi {\n\t\t\tsplit .php\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8886\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.orig_uri.path}/{http.request.orig_uri.prefixed_query}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 308\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*/\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group0\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"{http.matchers.file.relative}\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\".php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"index.php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\".php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:9000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*.php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_handle_response.caddyfiletest",
    "content": ":8881 {\n\tphp_fastcgi app:9000 {\n\t\tenv FOO bar\n\n\t\t@error status 4xx\n\t\thandle_response @error {\n\t\t\troot * /errors\n\t\t\trewrite * /{http.reverse_proxy.status_code}.html\n\t\t\tfile_server\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8881\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"*/\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.orig_uri.path}/{http.request.orig_uri.prefixed_query}\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"status_code\": 308\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}\",\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\",\n\t\t\t\t\t\t\t\t\t\t\t\"index.php\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"try_policy\": \"first_exist_fallback\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"uri\": 
\"{http.matchers.file.relative}\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"*.php\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handle_response\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t4\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/errors\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group0\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/{http.reverse_proxy.status_code}.html\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"file_server\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"hide\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"./Caddyfile\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"env\": {\n\t\t\t\t\t\t\t\t\t\t\t\"FOO\": \"bar\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\".php\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"app:9000\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_index_off.caddyfiletest",
    "content": ":8884\n\nphp_fastcgi localhost:9000 {\n\t# some php_fastcgi-specific subdirectives\n\tsplit .php .php5\n\tenv VAR1 value1\n\tenv VAR2 value2\n\troot /var/www\n\tindex off\n\tdial_timeout 3s\n\tread_timeout 10s\n\twrite_timeout 20s\n\n\t# passed through to reverse_proxy (directive order doesn't matter!)\n\tlb_policy random\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"*.php\",\n\t\t\t\t\t\t\t\t\t\t\"*.php5\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"load_balancing\": {\n\t\t\t\t\t\t\t\t\t\t\"selection_policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"policy\": \"random\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"dial_timeout\": 3000000000,\n\t\t\t\t\t\t\t\t\t\t\"env\": {\n\t\t\t\t\t\t\t\t\t\t\t\"VAR1\": \"value1\",\n\t\t\t\t\t\t\t\t\t\t\t\"VAR2\": \"value2\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\"read_timeout\": 10000000000,\n\t\t\t\t\t\t\t\t\t\t\"root\": \"/var/www\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"write_timeout\": 20000000000\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:9000\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_matcher.caddyfiletest",
    "content": ":8884\n\n# the use of a host matcher here should cause this\n# site block to be wrapped in a subroute, even though\n# the site block does not have a hostname; this is\n# to prevent auto-HTTPS from picking up on this host\n# matcher because it is not a key on the site block\n@test host example.com\nphp_fastcgi @test localhost:9000\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.orig_uri.path}/{http.request.orig_uri.prefixed_query}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 308\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*/\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"{http.matchers.file.relative}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\".php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"index.php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"try_policy\": \"first_exist_fallback\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\".php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": 
\"localhost:9000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*.php\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_subdirectives.caddyfiletest",
    "content": ":8884\n\nphp_fastcgi localhost:9000 {\n\t# some php_fastcgi-specific subdirectives\n\tsplit .php .php5\n\tenv VAR1 value1\n\tenv VAR2 value2\n\troot /var/www\n\tindex index.php5\n\n\t# passed through to reverse_proxy (directive order doesn't matter!)\n\tlb_policy random\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php5\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"*/\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.orig_uri.path}/{http.request.orig_uri.prefixed_query}\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"status_code\": 308\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}\",\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php5\",\n\t\t\t\t\t\t\t\t\t\t\t\"index.php5\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"try_policy\": \"first_exist_fallback\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": 
\"rewrite\",\n\t\t\t\t\t\t\t\t\t\"uri\": \"{http.matchers.file.relative}\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"*.php\",\n\t\t\t\t\t\t\t\t\t\t\"*.php5\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"load_balancing\": {\n\t\t\t\t\t\t\t\t\t\t\"selection_policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"policy\": \"random\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"env\": {\n\t\t\t\t\t\t\t\t\t\t\t\"VAR1\": \"value1\",\n\t\t\t\t\t\t\t\t\t\t\t\"VAR2\": \"value2\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\"root\": \"/var/www\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:9000\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_try_files_override.caddyfiletest",
    "content": ":8884\n\nphp_fastcgi localhost:9000 {\n\t# some php_fastcgi-specific subdirectives\n\tsplit .php .php5\n\tenv VAR1 value1\n\tenv VAR2 value2\n\troot /var/www\n\ttry_files {path} {path}/index.php =404\n\tdial_timeout 3s\n\tread_timeout 10s\n\twrite_timeout 20s\n\n\t# passed through to reverse_proxy (directive order doesn't matter!)\n\tlb_policy random\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"*/\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.orig_uri.path}/{http.request.orig_uri.prefixed_query}\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"status_code\": 308\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}\",\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}/index.php\",\n\t\t\t\t\t\t\t\t\t\t\t\"=404\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": 
\"rewrite\",\n\t\t\t\t\t\t\t\t\t\"uri\": \"{http.matchers.file.relative}\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"*.php\",\n\t\t\t\t\t\t\t\t\t\t\"*.php5\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"load_balancing\": {\n\t\t\t\t\t\t\t\t\t\t\"selection_policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"policy\": \"random\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"dial_timeout\": 3000000000,\n\t\t\t\t\t\t\t\t\t\t\"env\": {\n\t\t\t\t\t\t\t\t\t\t\t\"VAR1\": \"value1\",\n\t\t\t\t\t\t\t\t\t\t\t\"VAR2\": \"value2\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\"read_timeout\": 10000000000,\n\t\t\t\t\t\t\t\t\t\t\"root\": \"/var/www\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"write_timeout\": 20000000000\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:9000\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/php_fastcgi_try_files_override_no_dir_index.caddyfiletest",
    "content": ":8884\n\nphp_fastcgi localhost:9000 {\n\t# some php_fastcgi-specific subdirectives\n\tsplit .php .php5\n\tenv VAR1 value1\n\tenv VAR2 value2\n\troot /var/www\n\ttry_files {path} index.php\n\tdial_timeout 3s\n\tread_timeout 10s\n\twrite_timeout 20s\n\n\t# passed through to reverse_proxy (directive order doesn't matter!)\n\tlb_policy random\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"file\": {\n\t\t\t\t\t\t\t\t\t\t\"try_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\"{http.request.uri.path}\",\n\t\t\t\t\t\t\t\t\t\t\t\"index.php\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"try_policy\": \"first_exist_fallback\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"uri\": \"{http.matchers.file.relative}\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"*.php\",\n\t\t\t\t\t\t\t\t\t\t\"*.php5\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"load_balancing\": {\n\t\t\t\t\t\t\t\t\t\t\"selection_policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"policy\": \"random\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"dial_timeout\": 3000000000,\n\t\t\t\t\t\t\t\t\t\t\"env\": {\n\t\t\t\t\t\t\t\t\t\t\t\"VAR1\": \"value1\",\n\t\t\t\t\t\t\t\t\t\t\t\"VAR2\": \"value2\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": 
\"fastcgi\",\n\t\t\t\t\t\t\t\t\t\t\"read_timeout\": 10000000000,\n\t\t\t\t\t\t\t\t\t\t\"root\": \"/var/www\",\n\t\t\t\t\t\t\t\t\t\t\"split_path\": [\n\t\t\t\t\t\t\t\t\t\t\t\".php\",\n\t\t\t\t\t\t\t\t\t\t\t\".php5\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"write_timeout\": 20000000000\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:9000\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/portless_upstream.caddyfiletest",
    "content": "whoami.example.com {\n\treverse_proxy whoami\n}\n\napp.example.com {\n\treverse_proxy app:80\n}\nunix.example.com {\n\treverse_proxy unix//path/to/socket\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"whoami.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"whoami:80\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"unix.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": 
\"unix//path/to/socket\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"app.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"app:80\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/push.caddyfiletest",
    "content": ":80\n\npush * /foo.txt\n\npush {\n\tGET /foo.txt\n}\n\npush {\n\tGET /foo.txt\n\tHEAD /foo.txt\n}\n\npush {\n\theaders {\n\t\tFoo bar\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"push\",\n\t\t\t\t\t\t\t\t\t\"resources\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"target\": \"/foo.txt\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"push\",\n\t\t\t\t\t\t\t\t\t\"resources\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"method\": \"GET\",\n\t\t\t\t\t\t\t\t\t\t\t\"target\": \"/foo.txt\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"push\",\n\t\t\t\t\t\t\t\t\t\"resources\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"method\": \"GET\",\n\t\t\t\t\t\t\t\t\t\t\t\"target\": \"/foo.txt\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"method\": \"HEAD\",\n\t\t\t\t\t\t\t\t\t\t\t\"target\": \"/foo.txt\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"push\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"bar\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/renewal_window_ratio_global.caddyfiletest",
    "content": "{\n\trenewal_window_ratio 0.1666\n}\n\nexample.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"renewal_window_ratio\": 0.1666\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/renewal_window_ratio_tls_directive.caddyfiletest",
    "content": "{\n\trenewal_window_ratio 0.1666\n}\n\na.example.com {\n\ttls {\n\t\trenewal_window_ratio 0.25\n\t}\n}\n\nb.example.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"renewal_window_ratio\": 0.25\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"renewal_window_ratio\": 0.1666\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/replaceable_upstream.caddyfiletest",
    "content": "*.sandbox.localhost {\n\t@sandboxPort {\n\t\theader_regexp first_label Host ^([0-9]{3})\\.sandbox\\.\n\t}\n\thandle @sandboxPort {\n\t\treverse_proxy {re.first_label.1}\n\t}\n\thandle {\n\t\tredir {scheme}://application.localhost\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.sandbox.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"{http.regexp.first_label.1}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"header_regexp\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Host\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"first_label\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"pattern\": 
\"^([0-9]{3})\\\\.sandbox\\\\.\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.scheme}://application.localhost\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/replaceable_upstream_partial_port.caddyfiletest",
    "content": "*.sandbox.localhost {\n\t@sandboxPort {\n\t\theader_regexp port Host ^([0-9]{3})\\.sandbox\\.\n\t}\n\thandle @sandboxPort {\n\t\treverse_proxy app:6{re.port.1}\n\t}\n\thandle {\n\t\tredir {scheme}://application.localhost\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.sandbox.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"app:6{http.regexp.port.1}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"header_regexp\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Host\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"port\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"pattern\": 
\"^([0-9]{3})\\\\.sandbox\\\\.\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.scheme}://application.localhost\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/replaceable_upstream_port.caddyfiletest",
    "content": "*.sandbox.localhost {\n\t@sandboxPort {\n\t\theader_regexp port Host ^([0-9]{3})\\.sandbox\\.\n\t}\n\thandle @sandboxPort {\n\t\treverse_proxy app:{re.port.1}\n\t}\n\thandle {\n\t\tredir {scheme}://application.localhost\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.sandbox.localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"app:{http.regexp.port.1}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"header_regexp\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Host\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"port\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"pattern\": 
\"^([0-9]{3})\\\\.sandbox\\\\.\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.request.scheme}://application.localhost\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/request_body.caddyfiletest",
    "content": "localhost\n\nrequest_body {\n\tmax_size 1MB\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"request_body\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"max_size\": 1000000\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/request_header.caddyfiletest",
    "content": ":80\n\n@matcher path /something*\nrequest_header @matcher Denis \"Ritchie\"\n\nrequest_header +Edsger \"Dijkstra\"\nrequest_header -Wolfram\n\n@images path /images/*\nrequest_header @images Cache-Control \"public, max-age=3600, stale-while-revalidate=86400\"\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/something*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Denis\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Ritchie\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/images/*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Cache-Control\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"public, max-age=3600, stale-while-revalidate=86400\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\"add\": {\n\t\t\t\t\t\t\t\t\t\t\t\"Edsger\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"Dijkstra\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": 
\"headers\",\n\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\"Wolfram\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_buffers.caddyfiletest",
    "content": "https://example.com {\n\treverse_proxy https://localhost:54321 {\n\t\trequest_buffers unlimited\n\t\tresponse_buffers unlimited\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"request_buffers\": -1,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"response_buffers\": -1,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"tls\": {}\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:54321\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_dynamic_upstreams.caddyfiletest",
    "content": ":8884 {\n\treverse_proxy {\n\t\tdynamic a foo 9000\n\t}\n\n\treverse_proxy {\n\t\tdynamic a {\n\t\t\tname foo\n\t\t\tport 9000\n\t\t\trefresh 5m\n\t\t\tresolvers 8.8.8.8 8.8.4.4\n\t\t\tdial_timeout 2s\n\t\t\tdial_fallback_delay 300ms\n\t\t\tversions ipv6\n\t\t}\n\t}\n}\n\n:8885 {\n\treverse_proxy {\n\t\tdynamic srv _api._tcp.example.com\n\t}\n\n\treverse_proxy {\n\t\tdynamic srv {\n\t\t\tservice api\n\t\t\tproto tcp\n\t\t\tname example.com\n\t\t\trefresh 5m\n\t\t\tresolvers 8.8.8.8 8.8.4.4\n\t\t\tdial_timeout 1s\n\t\t\tdial_fallback_delay -1s\n\t\t}\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"dynamic_upstreams\": {\n\t\t\t\t\t\t\t\t\t\t\"name\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\"port\": \"9000\",\n\t\t\t\t\t\t\t\t\t\t\"source\": \"a\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"dynamic_upstreams\": {\n\t\t\t\t\t\t\t\t\t\t\"dial_fallback_delay\": 300000000,\n\t\t\t\t\t\t\t\t\t\t\"dial_timeout\": 2000000000,\n\t\t\t\t\t\t\t\t\t\t\"name\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\"port\": \"9000\",\n\t\t\t\t\t\t\t\t\t\t\"refresh\": 300000000000,\n\t\t\t\t\t\t\t\t\t\t\"resolver\": {\n\t\t\t\t\t\t\t\t\t\t\t\"addresses\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"source\": \"a\",\n\t\t\t\t\t\t\t\t\t\t\"versions\": {\n\t\t\t\t\t\t\t\t\t\t\t\"ipv6\": true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8885\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"dynamic_upstreams\": {\n\t\t\t\t\t\t\t\t\t\t\"name\": \"_api._tcp.example.com\",\n\t\t\t\t\t\t\t\t\t\t\"source\": \"srv\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"dynamic_upstreams\": {\n\t\t\t\t\t\t\t\t\t\t\"dial_fallback_delay\": -1000000000,\n\t\t\t\t\t\t\t\t\t\t\"dial_timeout\": 1000000000,\n\t\t\t\t\t\t\t\t\t\t\"name\": \"example.com\",\n\t\t\t\t\t\t\t\t\t\t\"proto\": \"tcp\",\n\t\t\t\t\t\t\t\t\t\t\"refresh\": 300000000000,\n\t\t\t\t\t\t\t\t\t\t\"resolver\": {\n\t\t\t\t\t\t\t\t\t\t\t\"addresses\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"service\": \"api\",\n\t\t\t\t\t\t\t\t\t\t\"source\": \"srv\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_dynamic_upstreams_grace_period.caddyfiletest",
    "content": ":8884 {\n\treverse_proxy {\n\t\tdynamic srv {\n\t\t\tname foo\n\t\t\trefresh 5m\n\t\t\tgrace_period 5s\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"dynamic_upstreams\": {\n\t\t\t\t\t\t\t\t\t\t\"grace_period\": 5000000000,\n\t\t\t\t\t\t\t\t\t\t\"name\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\"refresh\": 300000000000,\n\t\t\t\t\t\t\t\t\t\t\"source\": \"srv\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_empty_non_http_transport.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\ttransport fastcgi\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"fastcgi\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_h2c_shorthand.caddyfiletest",
    "content": ":8884\n\nreverse_proxy h2c://localhost:8080\n\nreverse_proxy unix+h2c//run/app.sock\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\"versions\": [\n\t\t\t\t\t\t\t\t\t\t\t\"h2c\",\n\t\t\t\t\t\t\t\t\t\t\t\"2\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:8080\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\"versions\": [\n\t\t\t\t\t\t\t\t\t\t\t\"h2c\",\n\t\t\t\t\t\t\t\t\t\t\t\"2\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"unix//run/app.sock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_handle_response.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\t@500 status 500\n\treplace_status @500 400\n\n\t@all status 2xx 3xx 4xx 5xx\n\treplace_status @all {http.error.status_code}\n\n\treplace_status {http.error.status_code}\n\n\t@accel header X-Accel-Redirect *\n\thandle_response @accel {\n\t\trespond \"Header X-Accel-Redirect!\"\n\t}\n\n\t@another {\n\t\theader X-Another *\n\t}\n\thandle_response @another {\n\t\trespond \"Header X-Another!\"\n\t}\n\n\t@401 status 401\n\thandle_response @401 {\n\t\trespond \"Status 401!\"\n\t}\n\n\thandle_response {\n\t\trespond \"Any! This should be last in the JSON!\"\n\t}\n\n\t@403 {\n\t\tstatus 403\n\t}\n\thandle_response @403 {\n\t\trespond \"Status 403!\"\n\t}\n\n\t@multi {\n\t\tstatus 401 403\n\t\tstatus 404\n\t\theader Foo *\n\t\theader Bar *\n\t}\n\thandle_response @multi {\n\t\trespond \"Headers Foo, Bar AND statuses 401, 403 and 404!\"\n\t}\n\n\t@200 status 200\n\thandle_response @200 {\n\t\tcopy_response_headers {\n\t\t\tinclude Foo Bar\n\t\t}\n\t\trespond \"Copied headers from the response\"\n\t}\n\n\t@201 status 201\n\thandle_response @201 {\n\t\theader Foo \"Copying the response\"\n\t\tcopy_response 404\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handle_response\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t500\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 400\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t2,\n\t\t\t\t\t\t\t\t\t\t\t\t\t3,\n\t\t\t\t\t\t\t\t\t\t\t\t\t4,\n\t\t\t\t\t\t\t\t\t\t\t\t\t5\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"status_code\": \"{http.error.status_code}\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-Accel-Redirect\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Header X-Accel-Redirect!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"X-Another\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Header X-Another!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t401\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Status 
401!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t403\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Status 403!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"Bar\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t401,\n\t\t\t\t\t\t\t\t\t\t\t\t\t403,\n\t\t\t\t\t\t\t\t\t\t\t\t\t404\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Headers Foo, Bar AND statuses 401, 403 and 404!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t200\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"copy_response_headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"include\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Bar\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Copied headers from the response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t201\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"headers\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"response\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Copying the response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"copy_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"status_code\": \"{http.error.status_code}\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Any! 
This should be last in the JSON!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_health_headers.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\thealth_headers {\n\t\tHost example.com\n\t\tX-Header-Key 95ca39e3cbe7\n\t\tX-Header-Keys VbG4NZwWnipo 335Q9/MhqcNU3s2TO\n\t\tX-Empty-Value\n\t\tSame-Key 1\n\t\tSame-Key 2\n\t\tX-System-Hostname {system.hostname}\n\t}\n\thealth_uri /health\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"health_checks\": {\n\t\t\t\t\t\t\t\t\t\t\"active\": {\n\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"Host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"Same-Key\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"1\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"2\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Empty-Value\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Header-Key\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"95ca39e3cbe7\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-Header-Keys\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"VbG4NZwWnipo\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"335Q9/MhqcNU3s2TO\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"X-System-Hostname\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"{system.hostname}\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/health\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_health_method.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\thealth_uri /health\n\thealth_method HEAD\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"health_checks\": {\n\t\t\t\t\t\t\t\t\t\t\"active\": {\n\t\t\t\t\t\t\t\t\t\t\t\"method\": \"HEAD\",\n\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/health\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_health_path_query.caddyfiletest",
    "content": "# Health with query in the uri\n:8443 {\n\treverse_proxy localhost:54321 {\n\t\thealth_uri /health?ready=1\n\t\thealth_status 2xx\n\t}\n}\n\n# Health without query in the uri\n:8444 {\n\treverse_proxy localhost:54321 {\n\t\thealth_uri /health\n\t\thealth_status 200\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"health_checks\": {\n\t\t\t\t\t\t\t\t\t\t\"active\": {\n\t\t\t\t\t\t\t\t\t\t\t\"expect_status\": 2,\n\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/health?ready=1\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:54321\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8444\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"health_checks\": {\n\t\t\t\t\t\t\t\t\t\t\"active\": {\n\t\t\t\t\t\t\t\t\t\t\t\"expect_status\": 200,\n\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/health\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:54321\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_health_reqbody.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\thealth_uri /health\n\thealth_request_body \"test body\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"health_checks\": {\n\t\t\t\t\t\t\t\t\t\t\"active\": {\n\t\t\t\t\t\t\t\t\t\t\t\"body\": \"test body\",\n\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/health\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_http_transport_forward_proxy_url.txt",
    "content": ":8884\nreverse_proxy 127.0.0.1:65535 {\n\ttransport http {\n\t\tforward_proxy_url http://localhost:8080\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"network_proxy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"from\": \"url\",\n\t\t\t\t\t\t\t\t\t\t\t\"url\": \"http://localhost:8080\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_http_transport_none_proxy.txt",
    "content": ":8884\nreverse_proxy 127.0.0.1:65535 {\n\ttransport http {\n\t\tnetwork_proxy none\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"network_proxy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"from\": \"none\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_http_transport_tls_file_cert.txt",
    "content": ":8884\nreverse_proxy 127.0.0.1:65535 {\n\ttransport http {\n\t\ttls_trust_pool file {\n\t\t\tpem_file ../caddy.ca.cer\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\"tls\": {\n\t\t\t\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"pem_files\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../caddy.ca.cer\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"provider\": \"file\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_http_transport_tls_inline_cert.txt",
    "content": ":8884\nreverse_proxy 127.0.0.1:65535 {\n\ttransport http {\n\t\ttls_trust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\"tls\": {\n\t\t\t\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_http_transport_url_proxy.txt",
    "content": ":8884\nreverse_proxy 127.0.0.1:65535 {\n\ttransport http {\n\t\tnetwork_proxy url http://localhost:8080\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"network_proxy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"from\": \"url\",\n\t\t\t\t\t\t\t\t\t\t\t\"url\": \"http://localhost:8080\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\"\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_load_balance.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\tlb_policy first\n\tlb_retries 5\n\tlb_try_duration 10s\n\tlb_try_interval 500ms\n\tlb_retry_match {\n\t\tpath /foo*\n\t\tmethod POST\n\t}\n\tlb_retry_match path /bar*\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"load_balancing\": {\n\t\t\t\t\t\t\t\t\t\t\"retries\": 5,\n\t\t\t\t\t\t\t\t\t\t\"retry_match\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"method\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"POST\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"/foo*\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"/bar*\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"selection_policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"policy\": \"first\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"try_duration\": 10000000000,\n\t\t\t\t\t\t\t\t\t\t\"try_interval\": 500000000\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_load_balance_wrr.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 127.0.0.1:35535 {\n\tlb_policy weighted_round_robin 10 1\n\tlb_retries 5\n\tlb_try_duration 10s\n\tlb_try_interval 500ms\n\tlb_retry_match {\n\t\tpath /foo*\n\t\tmethod POST\n\t}\n\tlb_retry_match path /bar*\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"load_balancing\": {\n\t\t\t\t\t\t\t\t\t\t\"retries\": 5,\n\t\t\t\t\t\t\t\t\t\t\"retry_match\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"method\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"POST\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"/foo*\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"/bar*\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"selection_policy\": {\n\t\t\t\t\t\t\t\t\t\t\t\"policy\": \"weighted_round_robin\",\n\t\t\t\t\t\t\t\t\t\t\t\"weights\": [\n\t\t\t\t\t\t\t\t\t\t\t\t10,\n\t\t\t\t\t\t\t\t\t\t\t\t1\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"try_duration\": 10000000000,\n\t\t\t\t\t\t\t\t\t\t\"try_interval\": 500000000\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:35535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_localaddr.caddyfiletest",
    "content": "https://example.com {\n\treverse_proxy http://localhost:54321 {\n\t\ttransport http {\n\t\t\tlocal_address 192.168.0.1\n\t\t}\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"local_address\": \"192.168.0.1\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:54321\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_options.caddyfiletest",
    "content": "https://example.com {\n\treverse_proxy /path https://localhost:54321 {\n\t\theader_up Host {upstream_hostport}\n\t\theader_up Foo bar\n\n\t\tmethod GET\n\t\trewrite /rewritten?uri={uri}\n\n\t\trequest_buffers 4KB\n\n\t\ttransport http {\n\t\t\tread_buffer 10MB\n\t\t\twrite_buffer 20MB\n\t\t\tmax_response_header 30MB\n\t\t\tdial_timeout 3s\n\t\t\tdial_fallback_delay 5s\n\t\t\tresponse_header_timeout 8s\n\t\t\texpect_continue_timeout 9s\n\t\t\tresolvers 8.8.8.8 8.8.4.4\n\n\t\t\tversions h2c 2\n\t\t\tcompression off\n\t\t\tmax_conns_per_host 5\n\t\t\tkeepalive_idle_conns_per_host 2\n\t\t\tkeepalive_interval 30s\n\n\t\t\ttls_renegotiation freely\n\t\t\ttls_except_ports 8181 8182\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"request\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"set\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Foo\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"bar\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"{http.reverse_proxy.upstream.hostport}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"request_buffers\": 4000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"rewrite\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"method\": 
\"GET\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/rewritten?uri={http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"compression\": false,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial_fallback_delay\": 5000000000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial_timeout\": 3000000000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"expect_continue_timeout\": 9000000000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"keep_alive\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"max_idle_conns_per_host\": 2,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"probe_interval\": 30000000000\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"max_conns_per_host\": 5,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"max_response_header_size\": 30000000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"read_buffer_size\": 10000000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"resolver\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"addresses\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"response_header_timeout\": 8000000000,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"tls\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"except_ports\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"8181\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"8182\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"renegotiation\": \"freely\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"versions\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"h2c\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"2\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"write_buffer_size\": 20000000\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": 
\"localhost:54321\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/path\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_port_range.caddyfiletest",
    "content": ":8884 {\n\t# Port range\n\treverse_proxy localhost:8001-8002\n\n\t# Port range with placeholder\n\treverse_proxy {host}:8001-8002\n\n\t# Port range with scheme\n\treverse_proxy https://localhost:8001-8002\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:8001\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:8002\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"{http.request.host}:8001\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"{http.request.host}:8002\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\"tls\": {}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:8001\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:8002\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_trusted_proxies.caddyfiletest",
    "content": ":8884\n\nreverse_proxy 127.0.0.1:65535 {\n\ttrusted_proxies 127.0.0.1\n}\n\nreverse_proxy 127.0.0.1:65535 {\n\ttrusted_proxies private_ranges\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"trusted_proxies\": [\n\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"trusted_proxies\": [\n\t\t\t\t\t\t\t\t\t\t\"192.168.0.0/16\",\n\t\t\t\t\t\t\t\t\t\t\"172.16.0.0/12\",\n\t\t\t\t\t\t\t\t\t\t\"10.0.0.0/8\",\n\t\t\t\t\t\t\t\t\t\t\"127.0.0.1/8\",\n\t\t\t\t\t\t\t\t\t\t\"fd00::/8\",\n\t\t\t\t\t\t\t\t\t\t\"::1\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:65535\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_trusted_proxies_unix.caddyfiletest",
    "content": "{\n\tservers {\n\t\ttrusted_proxies_unix\n\t}\n}\n\nexample.com {\n\treverse_proxy https://local:8080\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"tls\": {}\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"local:8080\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"trusted_proxies_unix\": true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/reverse_proxy_upstream_placeholder.caddyfiletest",
    "content": ":8884 {\n\tmap {host} {upstream} {\n\t\tfoo.example.com 1.2.3.4\n\t\tdefault 2.3.4.5\n\t}\n\n\t# Upstream placeholder with a port should retain the port\n\treverse_proxy {upstream}:80\n}\n\n:8885 {\n\tmap {host} {upstream} {\n\t\tfoo.example.com 1.2.3.4:8080\n\t\tdefault 2.3.4.5:8080\n\t}\n\n\t# Upstream placeholder with no port should not have a port joined\n\treverse_proxy {upstream}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8884\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"defaults\": [\n\t\t\t\t\t\t\t\t\t\t\"2.3.4.5\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"destinations\": [\n\t\t\t\t\t\t\t\t\t\t\"{upstream}\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"map\",\n\t\t\t\t\t\t\t\t\t\"mappings\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"input\": \"foo.example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"1.2.3.4\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"source\": \"{http.request.host}\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"{upstream}:80\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8885\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"defaults\": [\n\t\t\t\t\t\t\t\t\t\t\"2.3.4.5:8080\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"destinations\": [\n\t\t\t\t\t\t\t\t\t\t\"{upstream}\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"handler\": \"map\",\n\t\t\t\t\t\t\t\t\t\"mappings\": 
[\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"input\": \"foo.example.com\",\n\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\"1.2.3.4:8080\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"source\": \"{http.request.host}\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"{upstream}\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/rewrite_directive_permutations.caddyfiletest",
    "content": ":8080\n\n# With explicit wildcard matcher\nroute {\n\trewrite * /a\n}\n\n# With path matcher\nroute {\n\trewrite /path /b\n}\n\n# With named matcher\nroute {\n\t@named method GET\n\trewrite @named /c\n}\n\n# With no matcher, assumed to be wildcard\nroute {\n\trewrite /d\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group0\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/a\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group1\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/b\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/path\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group2\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": 
\"/c\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"method\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"GET\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"uri\": \"/d\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/root_directive_permutations.caddyfiletest",
    "content": ":8080\n\n# With explicit wildcard matcher\nroute {\n\troot * /a\n}\n\n# With path matcher\nroute {\n\troot /path /b\n}\n\n# With named matcher\nroute {\n\t@named method GET\n\troot @named /c\n}\n\n# With no matcher, assumed to be wildcard\nroute {\n\troot /d\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/a\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/b\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/path\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/c\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"method\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"GET\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"vars\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"root\": \"/d\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/server_names.caddyfiletest",
    "content": "{\n\tservers :443 {\n\t\tname https\n\t}\n\n\tservers :8000 {\n\t\tname app1\n\t}\n\n\tservers :8001 {\n\t\tname app2\n\t}\n\n\tservers 123.123.123.123:8002 {\n\t\tname bind-server\n\t}\n}\n\nexample.com {\n}\n\n:8000 {\n}\n\n:8001, :8002 {\n}\n\n:8002 {\n\tbind 123.123.123.123 222.222.222.222\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"app1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8000\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"app2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8001\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"bind-server\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\"123.123.123.123:8002\",\n\t\t\t\t\t\t\"222.222.222.222:8002\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"https\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv4\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8002\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/shorthand_parameterized_placeholders.caddyfiletest",
    "content": "localhost:80\n\nrespond * \"{header.content-type} {labels.0} {query.p} {path.0} {re.name.0}\"\n\n@match path_regexp ^/foo(.*)$\nrespond @match \"{re.1}\"\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"{http.regexp.1}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"path_regexp\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"name\": \"match\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"pattern\": \"^/foo(.*)$\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"{http.request.header.content-type} {http.request.host.labels.0} {http.request.uri.query.p} {http.request.uri.path.0} {http.regexp.name.0}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/site_address_invalid_port.caddyfiletest",
    "content": ":70000\n\nhandle {\n\trespond \"should not work\"\n}\n----------\nport 70000 is out of range"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/site_address_negative_port.caddyfiletest",
    "content": ":-1\n\nhandle {\n\trespond \"should not work\"\n}\n----------\nport -1 is out of range"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/site_address_unsupported_scheme.caddyfiletest",
    "content": "foo://example.com\n\nhandle {\n\trespond \"hello\"\n}\n----------\nunsupported URL scheme foo://"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/site_address_wss_invalid_port.caddyfiletest",
    "content": "wss://example.com:70000\n\nhandle {\n\trespond \"should not work\"\n}\n----------\nport 70000 is out of range"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/site_address_wss_scheme.caddyfiletest",
    "content": "wss://example.com\n\nhandle {\n\trespond \"hello\"\n}\n----------\nthe scheme wss:// is only supported in browsers; use https:// instead"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/site_block_sorting.caddyfiletest",
    "content": "# https://caddy.community/t/caddy-suddenly-directs-my-site-to-the-wrong-directive/11597/2\nabcdef {\n\trespond \"abcdef\"\n}\n\nabcdefg {\n\trespond \"abcdefg\"\n}\n\nabc {\n\trespond \"abc\"\n}\n\nabcde, http://abcde {\n\trespond \"abcde\"\n}\n\n:443, ab {\n\trespond \"443 or ab\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"abcdefg\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"abcdefg\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"abcdef\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"abcdef\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": 
[\n\t\t\t\t\t\t\t\t\t\t\"abcde\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"abcde\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"abc\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"abc\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"443 or ab\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": 
[\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"abcde\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"abcde\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"certificates\": {\n\t\t\t\t\"automate\": [\n\t\t\t\t\t\"ab\"\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/sort_directives_with_any_matcher_first.caddyfiletest",
    "content": ":80\n\nrespond 200\n\n@untrusted not remote_ip 10.1.1.0/24\nrespond @untrusted 401\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"not\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"remote_ip\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"ranges\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"10.1.1.0/24\"\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"status_code\": 401\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/sort_directives_within_handle.caddyfiletest",
    "content": "*.example.com {\n\t@foo host foo.example.com\n\thandle @foo {\n\t\thandle_path /strip {\n\t\t\trespond \"this should be first\"\n\t\t}\n\t\thandle_path /strip* {\n\t\t\trespond \"this should be second\"\n\t\t}\n\t\thandle {\n\t\t\trespond \"this should be last\"\n\t\t}\n\t}\n\thandle {\n\t\trespond \"this should be last\"\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group6\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"strip_path_prefix\": \"/strip\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"this should be 
first\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/strip\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"strip_path_prefix\": \"/strip\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"this should be second\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/strip*\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"this should be last\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"foo.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group6\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"this should be last\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/sort_vars_in_reverse.caddyfiletest",
    "content": ":80\n\nvars /foobar foo last\nvars /foo foo middle-last\nvars /foo* foo middle-first\nvars * foo first\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"foo\": \"first\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/foo*\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"foo\": \"middle-first\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/foo\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"foo\": \"middle-last\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/foobar\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"foo\": \"last\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"vars\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_acme_dns_override_global_dns.caddyfiletest",
    "content": "{\n\tdns mock foo\n\tacme_dns mock bar\n}\n\nlocalhost {\n\ttls {\n\t\tresolvers 8.8.8.8 8.8.4.4\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"argument\": \"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"argument\": \"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"argument\": \"foo\",\n\t\t\t\t\"name\": \"mock\"\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_acme_preferred_chains.caddyfiletest",
    "content": "localhost\n\ntls {\n\tissuer acme {\n\t\tpreferred_chains {\n\t\t\tany_common_name \"Generic CA 1\" \"Generic CA 2\"\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"acme\",\n\t\t\t\t\t\t\t\t\"preferred_chains\": {\n\t\t\t\t\t\t\t\t\t\"any_common_name\": [\n\t\t\t\t\t\t\t\t\t\t\"Generic CA 1\",\n\t\t\t\t\t\t\t\t\t\t\"Generic CA 2\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_1.caddyfiletest",
    "content": "{\n\tlocal_certs\n}\n\n*.tld, *.*.tld {\n\ttls {\n\t\ton_demand\n\t}\n}\n\nfoo.tld, www.foo.tld {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"foo.tld\",\n\t\t\t\t\t\t\t\t\t\t\"www.foo.tld\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.tld\",\n\t\t\t\t\t\t\t\t\t\t\"*.*.tld\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"foo.tld\",\n\t\t\t\t\t\t\t\"www.foo.tld\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"*.*.tld\",\n\t\t\t\t\t\t\t\"*.tld\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"on_demand\": true\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_10.caddyfiletest",
    "content": "# example from issue #4667\n{\n\tauto_https off\n}\n\nhttps://, example.com {\n\ttls test.crt test.key\n\trespond \"Hello World\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"Hello World\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"certificate_selection\": {\n\t\t\t\t\t\t\t\t\"any_tag\": [\n\t\t\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"disable\": true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"certificates\": {\n\t\t\t\t\"load_files\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"certificate\": \"test.crt\",\n\t\t\t\t\t\t\"key\": \"test.key\",\n\t\t\t\t\t\t\"tags\": [\n\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_11.caddyfiletest",
    "content": "# example from https://caddy.community/t/21415\na.com {\n\ttls {\n\t\tget_certificate http http://foo.com/get\n\t}\n}\n\nb.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"a.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"get_certificate\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"url\": \"http://foo.com/get\",\n\t\t\t\t\t\t\t\t\"via\": \"http\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"b.com\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_2.caddyfiletest",
    "content": "# issue #3953\n{\n\tcert_issuer zerossl api_key\n}\n\nexample.com {\n\ttls {\n\t\ton_demand\n\t\tkey_type rsa2048\n\t}\n}\n\nhttp://example.net {\n}\n\n:1234 {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":1234\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.net\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"api_key\": \"api_key\",\n\t\t\t\t\t\t\t\t\"module\": \"zerossl\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"key_type\": \"rsa2048\",\n\t\t\t\t\t\t\"on_demand\": true\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"api_key\": \"api_key\",\n\t\t\t\t\t\t\t\t\"module\": \"zerossl\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_3.caddyfiletest",
    "content": "# https://caddy.community/t/caddyfile-having-individual-sites-differ-from-global-options/11297\n{\n\tlocal_certs\n}\n\na.example.com {\n\ttls internal\n}\n\nb.example.com {\n\ttls abc@example.com\n}\n\nc.example.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"c.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"b.example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"email\": \"abc@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"email\": \"abc@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_4.caddyfiletest",
    "content": "{\n\temail my.email@example.com\n}\n\n:82 {\n\tredir https://example.com{uri}\n}\n\n:83 {\n\tredir https://example.com{uri}\n}\n\n:84 {\n\tredir https://example.com{uri}\n}\n\nabc.de {\n\tredir https://example.com{uri}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"abc.de\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"https://example.com{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":82\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\"https://example.com{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":83\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\"https://example.com{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv3\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":84\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"headers\": {\n\t\t\t\t\t\t\t\t\t\t\"Location\": [\n\t\t\t\t\t\t\t\t\t\t\t\"https://example.com{http.request.uri}\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"status_code\": 302\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"email\": \"my.email@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"email\": \"my.email@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_5.caddyfiletest",
    "content": "a.example.com {\n}\n\nb.example.com {\n}\n\n:443 {\n\ttls {\n\t\ton_demand\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"a.example.com\",\n\t\t\t\t\t\t\t\"b.example.com\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"on_demand\": true\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_6.caddyfiletest",
    "content": "# (this Caddyfile is contrived, but based on issue #4161)\n\nexample.com {\n\ttls {\n\t\tca https://foobar\n\t}\n}\n\nexample.com:8443 {\n\ttls {\n\t\tca https://foobar\n\t}\n}\n\nexample.com:8444 {\n\ttls {\n\t\tca https://foobar\n\t}\n}\n\nexample.com:8445 {\n\ttls {\n\t\tca https://foobar\n\t}\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8444\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv3\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8445\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": 
[\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://foobar\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_7.caddyfiletest",
    "content": "# (this Caddyfile is contrived, but based on issues #4176 and #4198)\n\nhttp://example.com {\n}\n\nhttps://example.com {\n\ttls internal\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_8.caddyfiletest",
    "content": "# (this Caddyfile is contrived, but based on issues #4176 and #4198)\n\nhttp://example.com {\n}\n\nhttps://example.com {\n\ttls abc@example.com\n}\n\nhttp://localhost:8081 {\n}\n\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8081\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"email\": \"abc@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"email\": \"abc@example.com\",\n\t\t\t\t\t\t\t\t\"module\": 
\"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_9.caddyfiletest",
    "content": "# example from issue #4640\nhttp://foo:8447, http://127.0.0.1:8447 {\n\treverse_proxy 127.0.0.1:8080\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8447\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"foo\",\n\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"127.0.0.1:8080\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\"foo\",\n\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_policies_global_email_localhost.caddyfiletest",
    "content": "{\n\temail foo@bar\n}\n\nlocalhost {\n}\n\nexample.com {\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"email\": \"foo@bar\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"email\": \"foo@bar\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_wildcard_force_automate.caddyfiletest",
    "content": "automated1.example.com {\n\ttls force_automate\n\trespond \"Automated!\"\n}\n\nautomated2.example.com {\n\ttls force_automate\n\trespond \"Automated!\"\n}\n\nshadowed.example.com {\n\trespond \"Shadowed!\"\n}\n\n*.example.com {\n\ttls cert.pem key.pem\n\trespond \"Wildcard!\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"automated1.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Automated!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"automated2.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Automated!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": 
[\n\t\t\t\t\t\t\t\t\t\t\"shadowed.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Shadowed!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Wildcard!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"automated1.example.com\",\n\t\t\t\t\t\t\t\t\t\"automated2.example.com\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"certificate_selection\": {\n\t\t\t\t\t\t\t\t\"any_tag\": [\n\t\t\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"certificates\": 
{\n\t\t\t\t\"automate\": [\n\t\t\t\t\t\"automated1.example.com\",\n\t\t\t\t\t\"automated2.example.com\"\n\t\t\t\t],\n\t\t\t\t\"load_files\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"certificate\": \"cert.pem\",\n\t\t\t\t\t\t\"key\": \"key.pem\",\n\t\t\t\t\t\t\"tags\": [\n\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_automation_wildcard_shadowing.caddyfiletest",
    "content": "subdomain.example.com {\n\trespond \"Subdomain!\"\n}\n\n*.example.com {\n\ttls cert.pem key.pem\n\trespond \"Wildcard!\"\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"subdomain.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Subdomain!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Wildcard!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"certificate_selection\": {\n\t\t\t\t\t\t\t\t\"any_tag\": 
[\n\t\t\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"certificates\": {\n\t\t\t\t\"load_files\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"certificate\": \"cert.pem\",\n\t\t\t\t\t\t\"key\": \"key.pem\",\n\t\t\t\t\t\t\"tags\": [\n\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_cert_file-legacy-with-verifier.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrusted_ca_cert_file ../caddy.ca.cer\n\t\tverifier dummy\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": 
[\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"dummy\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_cert_file-legacy.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrusted_ca_cert_file ../caddy.ca.cer\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": 
[\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_cert_file.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool file {\n\t\t\tpem_file ../caddy.ca.cer\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"pem_files\": [\n\t\t\t\t\t\t\t\t\t\t\"../caddy.ca.cer\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"provider\": \"file\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_inline_cert-legacy.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrusted_ca_cert MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_inline_cert.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": 
\"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_inline_cert_with_leaf_trust.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\ttrusted_leaf_cert_file ../caddy.ca.cer\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from 
localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"trusted_leaf_certs\": 
[\n\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_leaf_verifier_file_loader_block.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\tverifier leaf {\n\t\t\tfile ../caddy.ca.cer\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from 
localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"files\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../caddy.ca.cer\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"file\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_leaf_verifier_file_loader_inline.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\tverifier leaf file ../caddy.ca.cer\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from 
localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"files\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../caddy.ca.cer\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"file\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_leaf_verifier_file_loader_multi-in-block.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\tverifier leaf {\n\t\t\tfile ../caddy.ca.cer\n\t\t\tfile ../caddy.ca.cer\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello 
from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"files\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../caddy.ca.cer\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"file\"\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"files\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../caddy.ca.cer\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"file\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_leaf_verifier_folder_loader_block.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\tverifier leaf {\n\t\t\tfolder ../\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from 
localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"folders\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"folder\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_leaf_verifier_folder_loader_inline.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\tverifier leaf folder ../\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from 
localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"folders\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"folder\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_client_auth_leaf_verifier_folder_loader_multi-in-block.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tclient_auth {\n\t\tmode request\n\t\ttrust_pool inline {\n\t\t\ttrust_der MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t}\n\t\tverifier leaf {\n\t\t\tfolder ../\n\t\t\tfolder ../\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from 
localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t],\n\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\"ca\": {\n\t\t\t\t\t\t\t\t\t\"provider\": \"inline\",\n\t\t\t\t\t\t\t\t\t\"trusted_ca_certs\": [\n\t\t\t\t\t\t\t\t\t\t\"MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"folders\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"folder\"\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"folders\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"../\"\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"folder\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"mode\": \"request\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_conn_policy_consolidate.caddyfiletest",
    "content": "# https://github.com/caddyserver/caddy/issues/3906\na.a {\n\ttls internal\n\trespond 403\n}\n\nhttp://b.b https://b.b:8443 {\n\ttls internal\n\trespond 404\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.a\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 403\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv1\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.b\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"srv2\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":8443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"b.b\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 404\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"a.a\",\n\t\t\t\t\t\t\t\"b.b\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"module\": \"internal\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_multiple_options_without_provider.caddyfiletest",
    "content": "localhost\n\ntls {\n\tpropagation_delay 10s\n\tdns_ttl 5m\n}\n\n----------\nparsing caddyfile tokens for 'tls': setting DNS challenge options [propagation_delay, dns_ttl] requires a DNS provider (set with the 'dns' subdirective or 'acme_dns' global option), at Caddyfile:6"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_override_acme_dns.caddyfiletest",
    "content": "{\n\tacme_dns mock foo\n}\n\nlocalhost {\n\ttls {\n\t\tdns mock bar\n\t\tresolvers 8.8.8.8 8.8.4.4\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"argument\": \"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"argument\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_override_global_dns.caddyfiletest",
    "content": "{\n\tdns mock foo\n}\n\nlocalhost {\n\ttls {\n\t\tdns mock bar\n\t\tresolvers 8.8.8.8 8.8.4.4\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"argument\": \"bar\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"dns\": {\n\t\t\t\t\"argument\": \"foo\",\n\t\t\t\t\"name\": \"mock\"\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_propagation_timeout_without_provider.caddyfiletest",
    "content": ":443 {\n\ttls {\n\t\tpropagation_timeout 30s\n\t}\n}\n----------\nparsing caddyfile tokens for 'tls': setting DNS challenge options [propagation_timeout] requires a DNS provider (set with the 'dns' subdirective or 'acme_dns' global option), at Caddyfile:4"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_propagation_without_provider.caddyfiletest",
    "content": ":443 {\n\ttls {\n\t\tpropagation_delay 30s\n\t}\n}\n----------\nparsing caddyfile tokens for 'tls': setting DNS challenge options [propagation_delay] requires a DNS provider (set with the 'dns' subdirective or 'acme_dns' global option), at Caddyfile:4"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_resolvers_with_global_provider.caddyfiletest",
    "content": "{\n\tacme_dns mock\n}\n\nlocalhost {\n\ttls {\n\t\tresolvers 8.8.8.8 8.8.4.4\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"resolvers\": [\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.8.8\",\n\t\t\t\t\t\t\t\t\t\t\t\"8.8.4.4\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_dns_ttl.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tdns mock\n\tdns_ttl 5m10s\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"ttl\": 310000000000\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_explicit_issuer_dns_ttl.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tissuer acme {\n\t\tdns_ttl 5m10s\n\t}\n\tissuer zerossl api_key {\n\t\tdns_ttl 10m20s\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"ttl\": 310000000000\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"api_key\": \"api_key\",\n\t\t\t\t\t\t\t\t\"cname_validation\": {\n\t\t\t\t\t\t\t\t\t\"ttl\": 620000000000\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"zerossl\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_explicit_issuer_propagation_options.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tissuer acme {\n\t\tpropagation_delay 5m10s\n\t\tpropagation_timeout 10m20s\n\t}\n\tissuer zerossl api_key {\n\t\tpropagation_delay 5m30s\n\t\tpropagation_timeout -1\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"propagation_delay\": 310000000000,\n\t\t\t\t\t\t\t\t\t\t\"propagation_timeout\": 620000000000\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"api_key\": \"api_key\",\n\t\t\t\t\t\t\t\t\"cname_validation\": {\n\t\t\t\t\t\t\t\t\t\"propagation_delay\": 330000000000,\n\t\t\t\t\t\t\t\t\t\"propagation_timeout\": -1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"zerossl\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_internal_options.caddyfiletest",
    "content": "a.example.com {\n\ttls {\n\t\tissuer internal {\n\t\t\tca foo\n\t\t\tlifetime 24h\n\t\t\tsign_with_root\n\t\t}\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"a.example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": \"foo\",\n\t\t\t\t\t\t\t\t\"lifetime\": 86400000000000,\n\t\t\t\t\t\t\t\t\"module\": \"internal\",\n\t\t\t\t\t\t\t\t\"sign_with_root\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tls_propagation_options.caddyfiletest",
    "content": "localhost\n\nrespond \"hello from localhost\"\ntls {\n\tdns mock\n\tpropagation_delay 5m10s\n\tpropagation_timeout 10m20s\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"propagation_delay\": 310000000000,\n\t\t\t\t\t\t\t\t\t\t\"propagation_timeout\": 620000000000,\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/tracing.caddyfiletest",
    "content": ":80 {\n\ttracing /myhandler {\n\t\tspan my-span\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/myhandler\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"tracing\",\n\t\t\t\t\t\t\t\t\t\"span\": \"my-span\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/uri_query_operations.caddyfiletest",
    "content": ":9080\nuri query +foo bar\nuri query -baz\nuri query taz test\nuri query key=value example\nuri query changethis>changed\nuri query {\n\tfindme value replacement\n\t+foo1 baz\n}\n\nrespond \"{query}\"\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"add\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"key\": \"foo\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"val\": \"bar\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"delete\": [\n\t\t\t\t\t\t\t\t\t\t\t\"baz\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"set\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"key\": \"taz\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"val\": \"test\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"set\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"key\": \"key=value\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"val\": \"example\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"rename\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"key\": \"changethis\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"val\": 
\"changed\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"query\": {\n\t\t\t\t\t\t\t\t\t\t\"add\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"key\": \"foo1\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"val\": \"baz\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"replace\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"key\": \"findme\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"replacement\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"search_regexp\": \"value\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"{http.request.uri.query}\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/uri_replace_brace_escape.caddyfiletest",
    "content": ":9080\nuri replace \"\\}\" %7D\nuri replace \"\\{\" %7B\n\nrespond \"{query}\"\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"uri_substring\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"find\": \"\\\\}\",\n\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"%7D\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"rewrite\",\n\t\t\t\t\t\t\t\t\t\"uri_substring\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"find\": \"\\\\{\",\n\t\t\t\t\t\t\t\t\t\t\t\"replace\": \"%7B\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"{http.request.uri.query}\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt/wildcard_pattern.caddyfiletest",
    "content": "*.example.com {\n\ttls foo@example.com {\n\t\tdns mock\n\t}\n\n\t@foo host foo.example.com\n\thandle @foo {\n\t\trespond \"Foo!\"\n\t}\n\n\t@bar host bar.example.com\n\thandle @bar {\n\t\trespond \"Bar!\"\n\t}\n\n\t# Fallback for otherwise unhandled domains\n\thandle {\n\t\tabort\n\t}\n}\n----------\n{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Foo!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"foo.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"Bar!\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"bar.example.com\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"group\": \"group3\",\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"abort\": true,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"tls\": {\n\t\t\t\"automation\": {\n\t\t\t\t\"policies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"subjects\": [\n\t\t\t\t\t\t\t\"*.example.com\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"issuers\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"foo@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"ca\": 
\"https://acme.zerossl.com/v2/DV90\",\n\t\t\t\t\t\t\t\t\"challenges\": {\n\t\t\t\t\t\t\t\t\t\"dns\": {\n\t\t\t\t\t\t\t\t\t\t\"provider\": {\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"mock\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"email\": \"foo@example.com\",\n\t\t\t\t\t\t\t\t\"module\": \"acme\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}"
  },
  {
    "path": "caddytest/integration/caddyfile_adapt_test.go",
    "content": "package integration\n\nimport (\n\tjsonMod \"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n\t_ \"github.com/caddyserver/caddy/v2/internal/testmocks\"\n)\n\nfunc TestCaddyfileAdaptToJSON(t *testing.T) {\n\t// load the list of test files from the dir\n\tfiles, err := os.ReadDir(\"./caddyfile_adapt\")\n\tif err != nil {\n\t\tt.Errorf(\"failed to read caddyfile_adapt dir: %s\", err)\n\t}\n\n\t// prep a regexp to fix strings on windows\n\twinNewlines := regexp.MustCompile(`\\r?\\n`)\n\n\tfor _, f := range files {\n\t\tif f.IsDir() {\n\t\t\tcontinue\n\t\t}\n\t\tfilename := f.Name()\n\n\t\t// run each file as a subtest, so that we can see which one fails more easily\n\t\tt.Run(filename, func(t *testing.T) {\n\t\t\t// read the test file\n\t\t\tdata, err := os.ReadFile(\"./caddyfile_adapt/\" + filename)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"failed to read %s dir: %s\", filename, err)\n\t\t\t}\n\n\t\t\t// split the Caddyfile (first) and JSON (second) parts\n\t\t\t// (append newline to Caddyfile to match formatter expectations)\n\t\t\tparts := strings.Split(string(data), \"----------\")\n\t\t\tcaddyfile, expected := strings.TrimSpace(parts[0])+\"\\n\", strings.TrimSpace(parts[1])\n\n\t\t\t// replace windows newlines in the json with unix newlines\n\t\t\texpected = winNewlines.ReplaceAllString(expected, \"\\n\")\n\n\t\t\t// replace os-specific default path for file_server's hide field\n\t\t\treplacePath, _ := jsonMod.Marshal(fmt.Sprint(\".\", string(filepath.Separator), \"Caddyfile\"))\n\t\t\texpected = strings.ReplaceAll(expected, `\"./Caddyfile\"`, string(replacePath))\n\n\t\t\t// if the expected output is JSON, compare it\n\t\t\tif len(expected) > 0 && expected[0] == '{' {\n\t\t\t\tok := caddytest.CompareAdapt(t, filename, caddyfile, \"caddyfile\", expected)\n\t\t\t\tif !ok 
{\n\t\t\t\t\tt.Errorf(\"failed to adapt %s\", filename)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// otherwise, adapt the Caddyfile and check for errors\n\t\t\tcfgAdapter := caddyconfig.GetAdapter(\"caddyfile\")\n\t\t\t_, _, err = cfgAdapter.Adapt([]byte(caddyfile), nil)\n\t\t\tif err == nil {\n\t\t\t\tt.Errorf(\"expected error for %s but got none\", filename)\n\t\t\t} else {\n\t\t\t\tnormalizedErr := winNewlines.ReplaceAllString(err.Error(), \"\\n\")\n\t\t\t\tif !strings.Contains(normalizedErr, expected) {\n\t\t\t\t\tt.Errorf(\"expected error for %s to contain:\\n%s\\nbut got:\\n%s\", filename, expected, normalizedErr)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/caddyfile_test.go",
    "content": "package integration\n\nimport (\n\t\"net/http\"\n\t\"net/url\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestRespond(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(` \n  {\n    admin localhost:2999\n    http_port     9080\n    https_port    9443\n    grace_period  1ns\n  }\n  \n  localhost:9080 {\n    respond /version 200 {\n      body \"hello from localhost\"\n    }\t\n    }\n  `, \"caddyfile\")\n\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/version\", 200, \"hello from localhost\")\n}\n\nfunc TestRedirect(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n  {\n    admin localhost:2999\n    http_port     9080\n    https_port    9443\n    grace_period  1ns\n  }\n  \n  localhost:9080 {\n    \n    redir / http://localhost:9080/hello 301\n    \n    respond /hello 200 {\n      body \"hello from localhost\"\n    }\t\n    }\n  `, \"caddyfile\")\n\n\t// act and assert\n\ttester.AssertRedirect(\"http://localhost:9080/\", \"http://localhost:9080/hello\", 301)\n\n\t// follow redirect\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"hello from localhost\")\n}\n\nfunc TestDuplicateHosts(t *testing.T) {\n\t// act and assert\n\tcaddytest.AssertLoadError(t,\n\t\t`\n    localhost:9080 {\n    }\n  \n    localhost:9080 { \n    }\n    `,\n\t\t\"caddyfile\",\n\t\t\"ambiguous site definition\")\n}\n\nfunc TestReadCookie(t *testing.T) {\n\tlocalhost, _ := url.Parse(\"http://localhost\")\n\tcookie := http.Cookie{\n\t\tName:  \"clientname\",\n\t\tValue: \"caddytest\",\n\t}\n\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.Client.Jar.SetCookies(localhost, []*http.Cookie{&cookie})\n\ttester.InitServer(` \n  {\n    skip_install_trust\n    admin localhost:2999\n    http_port     9080\n    https_port    9443\n    grace_period  1ns\n  }\n  \n  localhost:9080 {\n    templates {\n      root testdata\n    }\n    
file_server {\n      root testdata\n    }\n  }\n  `, \"caddyfile\")\n\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/cookie.html\", 200, \"<h2>Cookie.ClientName caddytest</h2>\")\n}\n\nfunc TestReplIndex(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n  {\n    skip_install_trust\n    admin localhost:2999\n    http_port     9080\n    https_port    9443\n    grace_period  1ns\n  }\n\n  localhost:9080 {\n    templates {\n      root testdata\n    }\n    file_server {\n      root testdata\n      index \"index.{host}.html\"\n    }\n  }\n  `, \"caddyfile\")\n\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"\")\n}\n\nfunc TestInvalidPrefix(t *testing.T) {\n\ttype testCase struct {\n\t\tconfig, expectedError string\n\t}\n\n\tfailureCases := []testCase{\n\t\t{\n\t\t\tconfig:        `wss://localhost`,\n\t\t\texpectedError: `the scheme wss:// is only supported in browsers; use https:// instead`,\n\t\t},\n\t\t{\n\t\t\tconfig:        `ws://localhost`,\n\t\t\texpectedError: `the scheme ws:// is only supported in browsers; use http:// instead`,\n\t\t},\n\t\t{\n\t\t\tconfig:        `someInvalidPrefix://localhost`,\n\t\t\texpectedError: \"unsupported URL scheme someinvalidprefix://\",\n\t\t},\n\t\t{\n\t\t\tconfig:        `h2c://localhost`,\n\t\t\texpectedError: `unsupported URL scheme h2c://`,\n\t\t},\n\t\t{\n\t\t\tconfig:        `localhost, wss://localhost`,\n\t\t\texpectedError: `the scheme wss:// is only supported in browsers; use https:// instead`,\n\t\t},\n\t\t{\n\t\t\tconfig: `localhost {\n  \t\t\t\treverse_proxy ws://localhost\"\n            }`,\n\t\t\texpectedError: `the scheme ws:// is only supported in browsers; use http:// instead`,\n\t\t},\n\t\t{\n\t\t\tconfig: `localhost {\n  \t\t\t\treverse_proxy someInvalidPrefix://localhost\"\n\t\t\t}`,\n\t\t\texpectedError: `unsupported URL scheme someinvalidprefix://`,\n\t\t},\n\t}\n\n\tfor _, failureCase := range failureCases 
{\n\t\tcaddytest.AssertLoadError(t, failureCase.config, \"caddyfile\", failureCase.expectedError)\n\t}\n}\n\nfunc TestValidPrefix(t *testing.T) {\n\ttype testCase struct {\n\t\trawConfig, expectedResponse string\n\t}\n\n\tsuccessCases := []testCase{\n\t\t{\n\t\t\t\"localhost\",\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\t\"https://localhost\",\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\t\"http://localhost\",\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\t`localhost {\n\t\t\treverse_proxy http://localhost:3000\n\t\t }`,\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\t`localhost {\n\t\t\treverse_proxy https://localhost:3000\n\t\t }`,\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"tls\": {}\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": 
\"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\t`localhost {\n\t\t\treverse_proxy h2c://localhost:3000\n\t\t }`,\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"versions\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"h2c\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"2\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t\t{\n\t\t\t`localhost {\n\t\t\treverse_proxy localhost:3000\n\t\t }`,\n\t\t\t`{\n\t\"apps\": {\n\t\t\"http\": {\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":443\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"localhost:3000\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`,\n\t\t},\n\t}\n\n\tfor _, successCase := range successCases {\n\t\tcaddytest.AssertAdapt(t, successCase.rawConfig, \"caddyfile\", successCase.expectedResponse)\n\t}\n}\n\nfunc TestUriReplace(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi replace \"\\}\" %7D\n\turi replace \"\\{\" %7B\n\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?test={%20content%20}\", 200, \"test=%7B%20content%20%7D\")\n}\n\nfunc TestUriOps(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query +foo bar\n\turi query -baz\n\turi query taz test\n\turi query key=value example\n\turi query changethis>changed\n\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar0&baz=buz&taz=nottest&changethis=val\", 200, \"changed=val&foo=bar0&foo=bar&key%3Dvalue=example&taz=test\")\n}\n\n// Tests the 
`http.request.local.port` placeholder.\n// We don't test the very similar `http.request.local.host` placeholder,\n// because depending on the host the test is running on, localhost might\n// refer to 127.0.0.1 or ::1.\n// TODO: Test each http version separately (especially http/3)\nfunc TestHttpRequestLocalPortPlaceholder(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\trespond \"{http.request.local.port}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"9080\")\n}\n\nfunc TestSetThenAddQueryParams(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query foo bar\n\turi query +foo baz\n\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint\", 200, \"foo=bar&foo=baz\")\n}\n\nfunc TestSetThenDeleteParams(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query bar foo{query.foo}\n\turi query -foo\n\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar\", 200, \"bar=foobar\")\n}\n\nfunc TestRenameAndOtherOps(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query foo>bar\n\turi query bar taz\n\turi query +bar baz\n\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar\", 200, \"bar=taz&bar=baz\")\n}\n\nfunc TestReplaceOps(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query foo bar baz\t\n\trespond \"{query}\"`, 
\"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar\", 200, \"foo=baz\")\n}\n\nfunc TestReplaceWithReplacementPlaceholder(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query foo bar {query.placeholder}\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?placeholder=baz&foo=bar\", 200, \"foo=baz&placeholder=baz\")\n}\n\nfunc TestReplaceWithKeyPlaceholder(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query {query.placeholder} bar baz\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?placeholder=foo&foo=bar\", 200, \"foo=baz&placeholder=foo\")\n}\n\nfunc TestPartialReplacement(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query foo ar az\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar\", 200, \"foo=baz\")\n}\n\nfunc TestNonExistingSearch(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query foo var baz\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar\", 200, \"foo=bar\")\n}\n\nfunc TestReplaceAllOps(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query * bar baz\t\n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar&baz=bar\", 200, \"baz=baz&foo=baz\")\n}\n\nfunc TestUriOpsBlock(t *testing.T) {\n\ttester := 
caddytest.NewTester(t)\n\n\ttester.InitServer(`\n\t{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\t:9080\n\turi query {\n\t\t+foo bar\n\t\t-baz\n\t\ttaz test\n\t} \n\trespond \"{query}\"`, \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/endpoint?foo=bar0&baz=buz&taz=nottest\", 200, \"foo=bar0&foo=bar&taz=test\")\n}\n\nfunc TestHandleErrorSimpleCodes(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\tlocalhost:9080 {\n\t\troot * /srv\n\t\terror /private* \"Unauthorized\" 410\n\t\terror /hidden* \"Not found\" 404\n\t\n\t\thandle_errors 404 410 {\n\t\t\trespond \"404 or 410 error\"\n\t\t}\n\t}`, \"caddyfile\")\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/private\", 410, \"404 or 410 error\")\n\ttester.AssertGetResponse(\"http://localhost:9080/hidden\", 404, \"404 or 410 error\")\n}\n\nfunc TestHandleErrorRange(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\tlocalhost:9080 {\n\t\troot * /srv\n\t\terror /private* \"Unauthorized\" 410\n\t\terror /hidden* \"Not found\" 404\n\n\t\thandle_errors 4xx {\n\t\t\trespond \"Error in the [400 .. 499] range\"\n\t\t}\n\t}`, \"caddyfile\")\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/private\", 410, \"Error in the [400 .. 499] range\")\n\ttester.AssertGetResponse(\"http://localhost:9080/hidden\", 404, \"Error in the [400 .. 
499] range\")\n}\n\nfunc TestHandleErrorSort(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\tlocalhost:9080 {\n\t\troot * /srv\n\t\terror /private* \"Unauthorized\" 410\n\t\terror /hidden* \"Not found\" 404\n\t\terror /internalerr* \"Internal Server Error\" 500\n\n\t\thandle_errors {\n\t\t\trespond \"Fallback route: code outside the [400..499] range\"\n\t\t}\n\t\thandle_errors 4xx {\n\t\t\trespond \"Error in the [400 .. 499] range\"\n\t\t}\n\t}`, \"caddyfile\")\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/internalerr\", 500, \"Fallback route: code outside the [400..499] range\")\n\ttester.AssertGetResponse(\"http://localhost:9080/hidden\", 404, \"Error in the [400 .. 499] range\")\n}\n\nfunc TestHandleErrorRangeAndCodes(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\tlocalhost:9080 {\n\t\troot * /srv\n\t\terror /private* \"Unauthorized\" 410\n\t\terror /threehundred* \"Moved Permanently\" 301\n\t\terror /internalerr* \"Internal Server Error\" 500\n\n\t\thandle_errors 500 3xx {\n\t\t\trespond \"Error code is equal to 500 or in the [300..399] range\"\n\t\t}\n\t\thandle_errors 4xx {\n\t\t\trespond \"Error in the [400 .. 499] range\"\n\t\t}\n\t}`, \"caddyfile\")\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/internalerr\", 500, \"Error code is equal to 500 or in the [300..399] range\")\n\ttester.AssertGetResponse(\"http://localhost:9080/threehundred\", 301, \"Error code is equal to 500 or in the [300..399] range\")\n\ttester.AssertGetResponse(\"http://localhost:9080/private\", 410, \"Error in the [400 .. 
499] range\")\n}\n\nfunc TestHandleErrorSubHandlers(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t}\n\tlocalhost:9080 {\n\t\troot * /srv\n\t\tfile_server\n\t\terror /*/internalerr* \"Internal Server Error\" 500\n\n\t\thandle_errors 404 {\n\t\t\thandle /en/* {\n\t\t\t\trespond \"not found\" 404\n\t\t\t}\n\t\t\thandle /es/* {\n\t\t\t\trespond \"no encontrado\" 404\n\t\t\t}\n\t\t\thandle {\n\t\t\t\trespond \"default not found\"\n\t\t\t}\n\t\t}\n\t\thandle_errors {\n\t\t\thandle {\n\t\t\t\trespond \"Default error\"\n\t\t\t}\n\t\t\thandle /en/* {\n\t\t\t\trespond \"English error\"\n\t\t\t}\n\t\t}\n\t}\n\t`, \"caddyfile\")\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/en/notfound\", 404, \"not found\")\n\ttester.AssertGetResponse(\"http://localhost:9080/es/notfound\", 404, \"no encontrado\")\n\ttester.AssertGetResponse(\"http://localhost:9080/notfound\", 404, \"default not found\")\n\ttester.AssertGetResponse(\"http://localhost:9080/es/internalerr\", 500, \"Default error\")\n\ttester.AssertGetResponse(\"http://localhost:9080/en/internalerr\", 500, \"English error\")\n}\n\nfunc TestInvalidSiteAddressesAsDirectives(t *testing.T) {\n\ttype testCase struct {\n\t\tconfig, expectedError string\n\t}\n\n\tfailureCases := []testCase{\n\t\t{\n\t\t\tconfig: `\n\t\t\thandle {\n\t\t\t\tfile_server\n\t\t\t}`,\n\t\t\texpectedError: `Caddyfile:2: parsed 'handle' as a site address, but it is a known directive; directives must appear in a site block`,\n\t\t},\n\t\t{\n\t\t\tconfig: `\n\t\t\treverse_proxy localhost:9000 localhost:9001 {\n\t\t\t\tfile_server\n\t\t\t}`,\n\t\t\texpectedError: `Caddyfile:2: parsed 'reverse_proxy' as a site address, but it is a known directive; directives must appear in a site block`,\n\t\t},\n\t}\n\n\tfor _, failureCase := range failureCases {\n\t\tcaddytest.AssertLoadError(t, failureCase.config, \"caddyfile\", failureCase.expectedError)\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/forwardauth_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage integration\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\n// TestForwardAuthCopyHeadersStripsClientHeaders is a regression test for the\n// header injection vulnerability in forward_auth copy_headers.\n//\n// When the auth service returns 200 OK without one of the copy_headers headers,\n// the MatchNot guard skips the Set operation. Before this fix, the original\n// client-supplied header survived unchanged into the backend request, allowing\n// privilege escalation with only a valid (non-privileged) bearer token. After\n// the fix, an unconditional delete route runs first, so the backend always\n// sees an absent header rather than the attacker-supplied value.\nfunc TestForwardAuthCopyHeadersStripsClientHeaders(t *testing.T) {\n\t// Mock auth service: accepts any Bearer token, returns 200 OK with NO\n\t// identity headers. 
This is the stateless JWT validator pattern that\n\t// triggers the vulnerability.\n\tauthSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tif strings.HasPrefix(r.Header.Get(\"Authorization\"), \"Bearer \") {\n\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\treturn\n\t\t}\n\t\tw.WriteHeader(http.StatusUnauthorized)\n\t}))\n\tdefer authSrv.Close()\n\n\t// Mock backend: records the identity headers it receives. A real application\n\t// would use X-User-Id / X-User-Role to make authorization decisions.\n\ttype received struct{ userID, userRole string }\n\tvar (\n\t\tmu   sync.Mutex\n\t\tlast received\n\t)\n\tbackendSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tmu.Lock()\n\t\tlast = received{\n\t\t\tuserID:   r.Header.Get(\"X-User-Id\"),\n\t\t\tuserRole: r.Header.Get(\"X-User-Role\"),\n\t\t}\n\t\tmu.Unlock()\n\t\tw.WriteHeader(http.StatusOK)\n\t\tfmt.Fprint(w, \"ok\")\n\t}))\n\tdefer backendSrv.Close()\n\n\tauthAddr := strings.TrimPrefix(authSrv.URL, \"http://\")\n\tbackendAddr := strings.TrimPrefix(backendSrv.URL, \"http://\")\n\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(fmt.Sprintf(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port 9080\n\t\thttps_port 9443\n\t\tgrace_period 1ns\n\t}\n\thttp://localhost:9080 {\n\t\tforward_auth %s {\n\t\t\turi /\n\t\t\tcopy_headers X-User-Id X-User-Role\n\t\t}\n\t\treverse_proxy %s\n\t}\n\t`, authAddr, backendAddr), \"caddyfile\")\n\n\t// Case 1: no token. Auth must still reject the request even when the client\n\t// includes identity headers. This confirms the auth check is not bypassed.\n\treq, _ := http.NewRequest(http.MethodGet, \"http://localhost:9080/\", nil)\n\treq.Header.Set(\"X-User-Id\", \"injected\")\n\treq.Header.Set(\"X-User-Role\", \"injected\")\n\tresp := tester.AssertResponseCode(req, http.StatusUnauthorized)\n\tresp.Body.Close()\n\n\t// Case 2: valid token, no injected headers. 
The backend should see absent\n\t// identity headers (the auth service never returns them).\n\treq, _ = http.NewRequest(http.MethodGet, \"http://localhost:9080/\", nil)\n\treq.Header.Set(\"Authorization\", \"Bearer token123\")\n\ttester.AssertResponse(req, http.StatusOK, \"ok\")\n\tmu.Lock()\n\tgotID, gotRole := last.userID, last.userRole\n\tmu.Unlock()\n\tif gotID != \"\" {\n\t\tt.Errorf(\"baseline: X-User-Id should be absent, got %q\", gotID)\n\t}\n\tif gotRole != \"\" {\n\t\tt.Errorf(\"baseline: X-User-Role should be absent, got %q\", gotRole)\n\t}\n\n\t// Case 3 (the security regression): valid token plus forged identity headers.\n\t// The fix must strip those values so the backend never sees them.\n\treq, _ = http.NewRequest(http.MethodGet, \"http://localhost:9080/\", nil)\n\treq.Header.Set(\"Authorization\", \"Bearer token123\")\n\treq.Header.Set(\"X-User-Id\", \"admin\")        // forged\n\treq.Header.Set(\"X-User-Role\", \"superadmin\") // forged\n\ttester.AssertResponse(req, http.StatusOK, \"ok\")\n\tmu.Lock()\n\tgotID, gotRole = last.userID, last.userRole\n\tmu.Unlock()\n\tif gotID != \"\" {\n\t\tt.Errorf(\"injection: X-User-Id must be stripped, got %q\", gotID)\n\t}\n\tif gotRole != \"\" {\n\t\tt.Errorf(\"injection: X-User-Role must be stripped, got %q\", gotRole)\n\t}\n}\n\n// TestForwardAuthCopyHeadersAuthResponseWins verifies that when the auth\n// service does include a copy_headers header in its response, that value\n// is forwarded to the backend and takes precedence over any client-supplied\n// value for the same header.\nfunc TestForwardAuthCopyHeadersAuthResponseWins(t *testing.T) {\n\tconst wantUserID = \"service-user-42\"\n\tconst wantUserRole = \"editor\"\n\n\t// Auth service: accepts bearer token and sets identity headers.\n\tauthSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tif strings.HasPrefix(r.Header.Get(\"Authorization\"), \"Bearer \") {\n\t\t\tw.Header().Set(\"X-User-Id\", 
wantUserID)\n\t\t\tw.Header().Set(\"X-User-Role\", wantUserRole)\n\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\treturn\n\t\t}\n\t\tw.WriteHeader(http.StatusUnauthorized)\n\t}))\n\tdefer authSrv.Close()\n\n\ttype received struct{ userID, userRole string }\n\tvar (\n\t\tmu   sync.Mutex\n\t\tlast received\n\t)\n\tbackendSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tmu.Lock()\n\t\tlast = received{\n\t\t\tuserID:   r.Header.Get(\"X-User-Id\"),\n\t\t\tuserRole: r.Header.Get(\"X-User-Role\"),\n\t\t}\n\t\tmu.Unlock()\n\t\tw.WriteHeader(http.StatusOK)\n\t\tfmt.Fprint(w, \"ok\")\n\t}))\n\tdefer backendSrv.Close()\n\n\tauthAddr := strings.TrimPrefix(authSrv.URL, \"http://\")\n\tbackendAddr := strings.TrimPrefix(backendSrv.URL, \"http://\")\n\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(fmt.Sprintf(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port 9080\n\t\thttps_port 9443\n\t\tgrace_period 1ns\n\t}\n\thttp://localhost:9080 {\n\t\tforward_auth %s {\n\t\t\turi /\n\t\t\tcopy_headers X-User-Id X-User-Role\n\t\t}\n\t\treverse_proxy %s\n\t}\n\t`, authAddr, backendAddr), \"caddyfile\")\n\n\t// The client sends forged headers; the auth service overrides them with\n\t// its own values. The backend must receive the auth service values.\n\treq, _ := http.NewRequest(http.MethodGet, \"http://localhost:9080/\", nil)\n\treq.Header.Set(\"Authorization\", \"Bearer token123\")\n\treq.Header.Set(\"X-User-Id\", \"forged-id\")   // must be overwritten\n\treq.Header.Set(\"X-User-Role\", \"forged-role\") // must be overwritten\n\ttester.AssertResponse(req, http.StatusOK, \"ok\")\n\n\tmu.Lock()\n\tgotID, gotRole := last.userID, last.userRole\n\tmu.Unlock()\n\tif gotID != wantUserID {\n\t\tt.Errorf(\"X-User-Id: want %q, got %q\", wantUserID, gotID)\n\t}\n\tif gotRole != wantUserRole {\n\t\tt.Errorf(\"X-User-Role: want %q, got %q\", wantUserRole, gotRole)\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/h2listener_test.go",
    "content": "package integration\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"slices\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc newH2ListenerWithVersionsWithTLSTester(t *testing.T, serverVersions []string, clientVersions []string) *caddytest.Tester {\n\tconst baseConfig = `\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tservers :9443 {\n            protocols %s\n        }\n\t}\n\tlocalhost {\n\t\trespond \"{http.request.tls.proto} {http.request.proto}\"\n\t}\n\t`\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(fmt.Sprintf(baseConfig, strings.Join(serverVersions, \" \")), \"caddyfile\")\n\n\ttr := tester.Client.Transport.(*http.Transport)\n\ttr.TLSClientConfig.NextProtos = clientVersions\n\ttr.Protocols = new(http.Protocols)\n\tif slices.Contains(clientVersions, \"h2\") {\n\t\ttr.ForceAttemptHTTP2 = true\n\t\ttr.Protocols.SetHTTP2(true)\n\t}\n\tif !slices.Contains(clientVersions, \"http/1.1\") {\n\t\ttr.Protocols.SetHTTP1(false)\n\t}\n\n\treturn tester\n}\n\nfunc TestH2ListenerWithTLS(t *testing.T) {\n\ttests := []struct {\n\t\tserverVersions []string\n\t\tclientVersions []string\n\t\texpectedBody   string\n\t\tfailed         bool\n\t}{\n\t\t{[]string{\"h2\"}, []string{\"h2\"}, \"h2 HTTP/2.0\", false},\n\t\t{[]string{\"h2\"}, []string{\"http/1.1\"}, \"\", true},\n\t\t{[]string{\"h1\"}, []string{\"http/1.1\"}, \"http/1.1 HTTP/1.1\", false},\n\t\t{[]string{\"h1\"}, []string{\"h2\"}, \"\", true},\n\t\t{[]string{\"h2\", \"h1\"}, []string{\"h2\"}, \"h2 HTTP/2.0\", false},\n\t\t{[]string{\"h2\", \"h1\"}, []string{\"http/1.1\"}, \"http/1.1 HTTP/1.1\", false},\n\t}\n\tfor _, tc := range tests {\n\t\ttester := newH2ListenerWithVersionsWithTLSTester(t, tc.serverVersions, tc.clientVersions)\n\t\tt.Logf(\"running with server versions %v and client versions %v:\", tc.serverVersions, tc.clientVersions)\n\t\tif tc.failed {\n\t\t\tresp, err := 
tester.Client.Get(\"https://localhost:9443\")\n\t\t\tif err == nil {\n\t\t\t\tt.Errorf(\"unexpected response: %d\", resp.StatusCode)\n\t\t\t}\n\t\t} else {\n\t\t\ttester.AssertGetResponse(\"https://localhost:9443\", 200, tc.expectedBody)\n\t\t}\n\t}\n}\n\nfunc newH2ListenerWithVersionsWithoutTLSTester(t *testing.T, serverVersions []string, clientVersions []string) *caddytest.Tester {\n\tconst baseConfig = `\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\tservers :9080 {\n            protocols %s\n        }\n\t}\n\thttp://localhost {\n\t\trespond \"{http.request.proto}\"\n\t}\n\t`\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(fmt.Sprintf(baseConfig, strings.Join(serverVersions, \" \")), \"caddyfile\")\n\n\ttr := tester.Client.Transport.(*http.Transport)\n\ttr.Protocols = new(http.Protocols)\n\tif slices.Contains(clientVersions, \"h2c\") {\n\t\ttr.Protocols.SetHTTP1(false)\n\t\ttr.Protocols.SetUnencryptedHTTP2(true)\n\t} else if slices.Contains(clientVersions, \"http/1.1\") {\n\t\ttr.Protocols.SetHTTP1(true)\n\t\ttr.Protocols.SetUnencryptedHTTP2(false)\n\t}\n\n\treturn tester\n}\n\nfunc TestH2ListenerWithoutTLS(t *testing.T) {\n\ttests := []struct {\n\t\tserverVersions []string\n\t\tclientVersions []string\n\t\texpectedBody   string\n\t\tfailed         bool\n\t}{\n\t\t{[]string{\"h2c\"}, []string{\"h2c\"}, \"HTTP/2.0\", false},\n\t\t{[]string{\"h2c\"}, []string{\"http/1.1\"}, \"\", true},\n\t\t{[]string{\"h1\"}, []string{\"http/1.1\"}, \"HTTP/1.1\", false},\n\t\t{[]string{\"h1\"}, []string{\"h2c\"}, \"\", true},\n\t\t{[]string{\"h2c\", \"h1\"}, []string{\"h2c\"}, \"HTTP/2.0\", false},\n\t\t{[]string{\"h2c\", \"h1\"}, []string{\"http/1.1\"}, \"HTTP/1.1\", false},\n\t}\n\tfor _, tc := range tests {\n\t\ttester := newH2ListenerWithVersionsWithoutTLSTester(t, tc.serverVersions, tc.clientVersions)\n\t\tt.Logf(\"running with server versions %v and client versions %v:\", tc.serverVersions, tc.clientVersions)\n\t\tif tc.failed 
{\n\t\t\tresp, err := tester.Client.Get(\"http://localhost:9080\")\n\t\t\tif err == nil {\n\t\t\t\tt.Errorf(\"unexpected response: %d\", resp.StatusCode)\n\t\t\t}\n\t\t} else {\n\t\t\ttester.AssertGetResponse(\"http://localhost:9080\", 200, tc.expectedBody)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/handler_test.go",
    "content": "package integration\n\nimport (\n\t\"bytes\"\n\t\"net/http\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestBrowse(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period  1ns\n\t}\n\thttp://localhost:9080 {\n\t\tfile_server browse\n\t}\n  `, \"caddyfile\")\n\n\treq, err := http.NewRequest(http.MethodGet, \"http://localhost:9080/\", nil)\n\tif err != nil {\n\t\tt.Fail()\n\t\treturn\n\t}\n\ttester.AssertResponseCode(req, 200)\n}\n\nfunc TestRespondWithJSON(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period  1ns\n\t}\n\tlocalhost {\n\t\trespond {http.request.body}\n\t}\n  `, \"caddyfile\")\n\n\tres, _ := tester.AssertPostResponseBody(\"https://localhost:9443/\",\n\t\tnil,\n\t\tbytes.NewBufferString(`{\n\t\t\"greeting\": \"Hello, world!\"\n\t}`), 200, `{\n\t\t\"greeting\": \"Hello, world!\"\n\t}`)\n\tif res.Header.Get(\"Content-Type\") != \"application/json\" {\n\t\tt.Errorf(\"expected Content-Type to be application/json, but was %s\", res.Header.Get(\"Content-Type\"))\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/intercept_test.go",
    "content": "package integration\n\nimport (\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestIntercept(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\t\tskip_install_trust\n\t\t\tadmin localhost:2999\n\t\t\thttp_port     9080\n\t\t\thttps_port    9443\n\t\t\tgrace_period  1ns\n\t\t}\n\t\n\t\tlocalhost:9080 {\n\t\t\trespond /intercept \"I'm a teapot\" 408\n\t\t\theader /intercept To-Intercept ok\n\t\t\trespond /no-intercept \"I'm not a teapot\"\n\n\t\t\tintercept {\n\t\t\t\t@teapot status 408\n\t\t\t\thandle_response @teapot {\n\t\t\t\t\theader /intercept intercepted {resp.header.To-Intercept}\n\t\t\t\t\trespond /intercept \"I'm a combined coffee/tea pot that is temporarily out of coffee\" 503\n\t\t\t\t}\n\t\t\t}\t\n\t\t}\n\t\t`, \"caddyfile\")\n\n\tr, _ := tester.AssertGetResponse(\"http://localhost:9080/intercept\", 503, \"I'm a combined coffee/tea pot that is temporarily out of coffee\")\n\tif r.Header.Get(\"intercepted\") != \"ok\" {\n\t\tt.Fatalf(`header \"intercepted\" value is not \"ok\": %s`, r.Header.Get(\"intercepted\"))\n\t}\n\n\ttester.AssertGetResponse(\"http://localhost:9080/no-intercept\", 200, \"I'm not a teapot\")\n}\n"
  },
  {
    "path": "caddytest/integration/leafcertloaders_test.go",
    "content": "package integration\n\nimport (\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestLeafCertLoaders(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 9080,\n       \t\t\t\"https_port\": 9443,\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9443\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"client_authentication\": {\n\t\t\t\t\t\t\t\t\t\"verifiers\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"verifier\": \"leaf\",\n\t\t\t\t\t\t\t\t\t\t\t\"leaf_certs_loaders\": [\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"file\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"files\": [\"../leafcert.pem\"]\n\t\t\t\t\t\t\t\t\t\t\t\t}, \n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"folder\", \n\t\t\t\t\t\t\t\t\t\t\t\t\t\"folders\": [\"../\"]\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"storage\"\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"loader\": \"pem\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}`, \"json\")\n}\n"
  },
  {
    "path": "caddytest/integration/listener_test.go",
    "content": "package integration\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"math/rand/v2\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc setupListenerWrapperTest(t *testing.T, handlerFunc http.HandlerFunc) *caddytest.Tester {\n\tl, err := net.Listen(\"tcp\", \"127.0.0.1:0\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to listen: %s\", err)\n\t}\n\n\tmux := http.NewServeMux()\n\tmux.Handle(\"/\", handlerFunc)\n\tsrv := &http.Server{\n\t\tHandler: mux,\n\t}\n\tgo srv.Serve(l)\n\tt.Cleanup(func() {\n\t\t_ = srv.Close()\n\t\t_ = l.Close()\n\t})\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(fmt.Sprintf(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tlocal_certs\n\t\tservers :9443 {\n\t\t\tlistener_wrappers {\n\t\t\t\thttp_redirect\n\t\t\t\ttls\n\t\t\t}\n\t\t}\n\t}\n\tlocalhost {\n\t\treverse_proxy %s\n\t}\n  `, l.Addr().String()), \"caddyfile\")\n\treturn tester\n}\n\nfunc TestHTTPRedirectWrapperWithLargeUpload(t *testing.T) {\n\tconst uploadSize = (1024 * 1024) + 1 // 1 MB + 1 byte\n\t// 1 more than an MB\n\tbody := make([]byte, uploadSize)\n\trand.NewChaCha8([32]byte{}).Read(body)\n\n\ttester := setupListenerWrapperTest(t, func(writer http.ResponseWriter, request *http.Request) {\n\t\tbuf := new(bytes.Buffer)\n\t\t_, err := buf.ReadFrom(request.Body)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"failed to read body: %s\", err)\n\t\t}\n\n\t\tif !bytes.Equal(buf.Bytes(), body) {\n\t\t\tt.Fatalf(\"body not the same\")\n\t\t}\n\n\t\twriter.WriteHeader(http.StatusNoContent)\n\t})\n\tresp, err := tester.Client.Post(\"https://localhost:9443\", \"application/octet-stream\", bytes.NewReader(body))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to post: %s\", err)\n\t}\n\n\tif resp.StatusCode != http.StatusNoContent {\n\t\tt.Fatalf(\"unexpected status: %d != %d\", resp.StatusCode, http.StatusNoContent)\n\t}\n}\n\nfunc TestLargeHttpRequest(t 
*testing.T) {\n\ttester := setupListenerWrapperTest(t, func(writer http.ResponseWriter, request *http.Request) {\n\t\tt.Fatal(\"not supposed to handle a request\")\n\t})\n\n\t// We never read the body in any way; instead, we set an extra-long header.\n\treq, _ := http.NewRequest(\"POST\", \"http://localhost:9443\", nil)\n\treq.Header.Set(\"Long-Header\", strings.Repeat(\"X\", 1024*1024))\n\t_, err := tester.Client.Do(req)\n\tif err == nil {\n\t\tt.Fatal(\"not supposed to succeed\")\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/map_test.go",
    "content": "package integration\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestMap(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period  1ns\n\t}\n\n\tlocalhost:9080 {\n\n\t\tmap {http.request.method} {dest-1} {dest-2} {\n\t\t\tdefault unknown1    unknown2\n\t\t\t~G(.)(.)    G${1}${2}-called\n\t\t\tPOST    post-called foobar\n\t\t}\n\n\t\trespond /version 200 {\n\t\t\tbody \"hello from localhost {dest-1} {dest-2}\"\n\t\t}\t\n\t}\n\t`, \"caddyfile\")\n\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/version\", 200, \"hello from localhost GET-called unknown2\")\n\ttester.AssertPostResponseBody(\"http://localhost:9080/version\", []string{}, bytes.NewBuffer([]byte{}), 200, \"hello from localhost post-called foobar\")\n}\n\nfunc TestMapRespondWithDefault(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\t}\n\t\t\n\t\tlocalhost:9080 {\n\t\n\t\t\tmap {http.request.method} {dest-name} {\n\t\t\t\tdefault unknown\n\t\t\t\tGET     get-called\n\t\t\t}\n\t\t\n\t\t\trespond /version 200 {\n\t\t\t\tbody \"hello from localhost {dest-name}\"\n\t\t\t}\t\n\t\t}\n\t`, \"caddyfile\")\n\n\t// act and assert\n\ttester.AssertGetResponse(\"http://localhost:9080/version\", 200, \"hello from localhost get-called\")\n\ttester.AssertPostResponseBody(\"http://localhost:9080/version\", []string{}, bytes.NewBuffer([]byte{}), 200, \"hello from localhost unknown\")\n}\n\nfunc TestMapAsJSON(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\" : 
{\n\t\t\t\t  \"local\" : {\n\t\t\t\t\t\"install_trust\": false\n\t\t\t\t  }\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 9080,\n\t\t\t\t\"https_port\": 9443,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"map\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"source\": \"{http.request.method}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"destinations\": [\"{dest-name}\"],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"defaults\": [\"unknown\"],\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"mappings\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\": \"GET\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\"get-called\"]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\": \"POST\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"outputs\": [\"post-called\"]\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost {dest-name}\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\"/version\"]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"match\": 
[\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\"localhost\"]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}`, \"json\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/version\", 200, \"hello from localhost get-called\")\n\ttester.AssertPostResponseBody(\"http://localhost:9080/version\", []string{}, bytes.NewBuffer([]byte{}), 200, \"hello from localhost post-called\")\n}\n"
  },
  {
    "path": "caddytest/integration/mockdns_test.go",
    "content": "package integration\n\nimport (\n\t\"context\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/libdns/libdns\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(MockDNSProvider{})\n}\n\n// MockDNSProvider is a mock DNS provider, for testing config with DNS modules.\ntype MockDNSProvider struct {\n\tArgument string `json:\"argument,omitempty\"` // optional argument useful for testing\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MockDNSProvider) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"dns.providers.mock\",\n\t\tNew: func() caddy.Module { return new(MockDNSProvider) },\n\t}\n}\n\n// Provision sets up the module.\nfunc (MockDNSProvider) Provision(ctx caddy.Context) error {\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (p *MockDNSProvider) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\n\tif d.NextArg() {\n\t\tp.Argument = d.Val()\n\t}\n\tif d.NextArg() {\n\t\treturn d.Errf(\"unexpected argument '%s'\", d.Val())\n\t}\n\treturn nil\n}\n\n// AppendRecords appends DNS records to the zone.\nfunc (MockDNSProvider) AppendRecords(ctx context.Context, zone string, recs []libdns.Record) ([]libdns.Record, error) {\n\treturn nil, nil\n}\n\n// DeleteRecords deletes DNS records from the zone.\nfunc (MockDNSProvider) DeleteRecords(ctx context.Context, zone string, recs []libdns.Record) ([]libdns.Record, error) {\n\treturn nil, nil\n}\n\n// GetRecords gets DNS records from the zone.\nfunc (MockDNSProvider) GetRecords(ctx context.Context, zone string) ([]libdns.Record, error) {\n\treturn nil, nil\n}\n\n// SetRecords sets DNS records in the zone.\nfunc (MockDNSProvider) SetRecords(ctx context.Context, zone string, recs []libdns.Record) ([]libdns.Record, error) {\n\treturn nil, nil\n}\n\n// Interface guard\nvar (\n\t_ 
caddyfile.Unmarshaler = (*MockDNSProvider)(nil)\n\t_ certmagic.DNSProvider = (*MockDNSProvider)(nil)\n\t_ caddy.Provisioner     = (*MockDNSProvider)(nil)\n\t_ caddy.Module          = (*MockDNSProvider)(nil)\n)\n"
  },
  {
    "path": "caddytest/integration/pki_test.go",
    "content": "package integration\n\nimport (\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestLeafCertLifetimeLessThanIntermediate(t *testing.T) {\n\tcaddytest.AssertLoadError(t, `\n    {\n      \"admin\": {\n        \"disabled\": true\n      },\n      \"apps\": {\n        \"http\": {\n          \"servers\": {\n            \"srv0\": {\n              \"listen\": [\n                \":443\"\n              ],\n              \"routes\": [\n                {\n                  \"handle\": [\n                    {\n                      \"handler\": \"subroute\",\n                      \"routes\": [\n                        {\n                          \"handle\": [\n                            {\n                              \"ca\": \"internal\",\n                              \"handler\": \"acme_server\",\n                              \"lifetime\": 604800000000000\n                            }\n                          ]\n                        }\n                      ]\n                    }\n                  ]\n                }\n              ]\n            }\n          }\n        },\n        \"pki\": {\n          \"certificate_authorities\": {\n            \"internal\": {\n              \"install_trust\": false,\n              \"intermediate_lifetime\": 604800000000000,\n              \"name\": \"Internal CA\"\n            }\n          }\n        }\n      }\n    }\n  `, \"json\", \"should be less than intermediate certificate lifetime\")\n}\n\nfunc TestIntermediateLifetimeLessThanRoot(t *testing.T) {\n\tcaddytest.AssertLoadError(t, `\n    {\n      \"admin\": {\n        \"disabled\": true\n      },\n      \"apps\": {\n        \"http\": {\n          \"servers\": {\n            \"srv0\": {\n              \"listen\": [\n                \":443\"\n              ],\n              \"routes\": [\n                {\n                  \"handle\": [\n                    {\n                      \"handler\": \"subroute\",\n              
        \"routes\": [\n                        {\n                          \"handle\": [\n                            {\n                              \"ca\": \"internal\",\n                              \"handler\": \"acme_server\",\n                              \"lifetime\": 2592000000000000\n                            }\n                          ]\n                        }\n                      ]\n                    }\n                  ]\n                }\n              ]\n            }\n          }\n        },\n        \"pki\": {\n          \"certificate_authorities\": {\n            \"internal\": {\n              \"install_trust\": false,\n              \"intermediate_lifetime\": 311040000000000000,\n              \"name\": \"Internal CA\"\n            }\n          }\n        }\n      }\n    }\n  `, \"json\", \"intermediate certificate lifetime must be less than actual root certificate lifetime\")\n}\n"
  },
  {
    "path": "caddytest/integration/proxyprotocol_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Integration tests for Caddy's PROXY protocol support, covering two distinct\n// roles that Caddy can play:\n//\n//  1. As a PROXY protocol *sender* (reverse proxy outbound transport):\n//     Caddy receives an inbound request from a test client and the\n//     reverse_proxy handler forwards it to an upstream with a PROXY protocol\n//     header (v1 or v2) prepended to the connection.  A lightweight backend\n//     built with go-proxyproto validates that the header was received and\n//     carries the correct client address.\n//\n//     Transport versions tested:\n//   - \"1.1\"  -> plain HTTP/1.1 to the upstream\n//   - \"h2c\"  -> HTTP/2 cleartext (h2c) to the upstream (regression for #7529)\n//   - \"2\"    -> HTTP/2 over TLS (h2) to the upstream\n//\n//     For each transport version both PROXY protocol v1 and v2 are exercised.\n//\n//     HTTP/3 (h3) is not included because it uses QUIC/UDP and therefore\n//     bypasses the TCP-level dialContext that injects PROXY protocol headers;\n//     there is no meaningful h3 + proxy protocol sender combination to test.\n//\n//  2. As a PROXY protocol *receiver* (server-side listener wrapper):\n//     A raw TCP client dials Caddy directly, injects a PROXY v2 header\n//     spoofing a source address, and sends a normal HTTP/1.1 request.  
The\n//     Caddy server is configured with the proxy_protocol listener wrapper and\n//     is expected to surface the spoofed address via the\n//     {http.request.remote.host} placeholder.\n\npackage integration\n\nimport (\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\tgoproxy \"github.com/pires/go-proxyproto\"\n\t\"golang.org/x/net/http2\"\n\t\"golang.org/x/net/http2/h2c\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\n// proxyProtoBackend is a minimal HTTP server that sits behind a\n// go-proxyproto listener and records the source address that was\n// delivered in the PROXY header for each request.\ntype proxyProtoBackend struct {\n\tmu          sync.Mutex\n\theaderAddrs []string // host:port strings extracted from each PROXY header\n\n\tln  net.Listener\n\tsrv *http.Server\n}\n\n// newProxyProtoBackend starts a TCP listener wrapped with go-proxyproto on a\n// random local port and serves requests with a simple \"OK\" body.  The PROXY\n// header source addresses are accumulated in headerAddrs so tests can\n// inspect them.\nfunc newProxyProtoBackend(t *testing.T) *proxyProtoBackend {\n\tt.Helper()\n\n\tb := &proxyProtoBackend{}\n\n\trawLn, err := net.Listen(\"tcp\", \"127.0.0.1:0\")\n\tif err != nil {\n\t\tt.Fatalf(\"backend: listen: %v\", err)\n\t}\n\n\t// Wrap with go-proxyproto so the PROXY header is stripped and parsed\n\t// before the HTTP server sees the connection.  We use REQUIRE so that a\n\t// missing header returns an error instead of silently passing through.\n\tpLn := &goproxy.Listener{\n\t\tListener: rawLn,\n\t\tPolicy: func(_ net.Addr) (goproxy.Policy, error) {\n\t\t\treturn goproxy.REQUIRE, nil\n\t\t},\n\t}\n\tb.ln = pLn\n\n\t// Wrap the handler with h2c support so the backend can speak HTTP/2\n\t// cleartext (h2c) as well as plain HTTP/1.1.  
Without this, Caddy's\n\t// reverse proxy would receive a 'frame too large' error when the\n\t// upstream transport is configured to use h2c.\n\th2Server := &http2.Server{}\n\thandlerFn := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t// go-proxyproto has already updated the net.Conn's remote\n\t\t// address to the value from the PROXY header; the HTTP server\n\t\t// surfaces it in r.RemoteAddr.\n\t\tb.mu.Lock()\n\t\tb.headerAddrs = append(b.headerAddrs, r.RemoteAddr)\n\t\tb.mu.Unlock()\n\t\tw.WriteHeader(http.StatusOK)\n\t\t_, _ = fmt.Fprint(w, \"OK\")\n\t})\n\n\tb.srv = &http.Server{\n\t\tHandler: h2c.NewHandler(handlerFn, h2Server),\n\t}\n\n\tgo b.srv.Serve(pLn) //nolint:errcheck\n\tt.Cleanup(func() {\n\t\t_ = b.srv.Close()\n\t\t_ = rawLn.Close()\n\t})\n\n\treturn b\n}\n\n// addr returns the listening address (host:port) of the backend.\nfunc (b *proxyProtoBackend) addr() string {\n\treturn b.ln.Addr().String()\n}\n\n// recordedAddrs returns a snapshot of all PROXY-header source addresses seen\n// so far.\nfunc (b *proxyProtoBackend) recordedAddrs() []string {\n\tb.mu.Lock()\n\tdefer b.mu.Unlock()\n\tcp := make([]string, len(b.headerAddrs))\n\tcopy(cp, b.headerAddrs)\n\treturn cp\n}\n\n// tlsProxyProtoBackend is a TLS-enabled backend that sits behind a\n// go-proxyproto listener.  The PROXY header is stripped before the TLS\n// handshake so the layer order on a connection is:\n//\n//\traw TCP → go-proxyproto (strips PROXY header) → TLS handshake → HTTP/2\ntype tlsProxyProtoBackend struct {\n\tmu          sync.Mutex\n\theaderAddrs []string\n\n\tsrv *httptest.Server\n}\n\n// newTLSProxyProtoBackend starts a TLS listener that first reads and strips\n// PROXY protocol headers (go-proxyproto, REQUIRE policy) and then performs a\n// TLS handshake.  
The backend speaks HTTP/2 over TLS (h2).\n//\n// The certificate is the standard self-signed certificate generated by\n// httptest.Server; the Caddy transport must be configured with\n// insecure_skip_verify: true to trust it.\nfunc newTLSProxyProtoBackend(t *testing.T) *tlsProxyProtoBackend {\n\tt.Helper()\n\n\tb := &tlsProxyProtoBackend{}\n\n\thandlerFn := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tb.mu.Lock()\n\t\tb.headerAddrs = append(b.headerAddrs, r.RemoteAddr)\n\t\tb.mu.Unlock()\n\t\tw.WriteHeader(http.StatusOK)\n\t\t_, _ = fmt.Fprint(w, \"OK\")\n\t})\n\n\trawLn, err := net.Listen(\"tcp\", \"127.0.0.1:0\")\n\tif err != nil {\n\t\tt.Fatalf(\"tlsBackend: listen: %v\", err)\n\t}\n\n\t// Wrap with go-proxyproto so the PROXY header is consumed before TLS.\n\tpLn := &goproxy.Listener{\n\t\tListener: rawLn,\n\t\tPolicy: func(_ net.Addr) (goproxy.Policy, error) {\n\t\t\treturn goproxy.REQUIRE, nil\n\t\t},\n\t}\n\n\t// httptest.NewUnstartedServer lets us replace the listener before\n\t// calling StartTLS(), which wraps our proxyproto listener with\n\t// tls.NewListener.  
This gives us the right layer order.\n\tb.srv = httptest.NewUnstartedServer(handlerFn)\n\tb.srv.Listener = pLn\n\n\t// StartTLS enables HTTP/2 on the server automatically.\n\tb.srv.StartTLS()\n\n\tt.Cleanup(func() {\n\t\tb.srv.Close()\n\t})\n\n\treturn b\n}\n\n// addr returns the listening address (host:port) of the TLS backend.\nfunc (b *tlsProxyProtoBackend) addr() string {\n\treturn b.srv.Listener.Addr().String()\n}\n\n// tlsConfig returns the *tls.Config used by the backend server.\n// Tests can use it to verify cert details if needed.\nfunc (b *tlsProxyProtoBackend) tlsConfig() *tls.Config {\n\treturn b.srv.TLS\n}\n\n// recordedAddrs returns a snapshot of all PROXY-header source addresses.\nfunc (b *tlsProxyProtoBackend) recordedAddrs() []string {\n\tb.mu.Lock()\n\tdefer b.mu.Unlock()\n\tcp := make([]string, len(b.headerAddrs))\n\tcopy(cp, b.headerAddrs)\n\treturn cp\n}\n\n// proxyProtoTLSConfig builds a Caddy JSON configuration that proxies to a TLS\n// upstream with PROXY protocol.  
The transport uses insecure_skip_verify so\n// the self-signed certificate generated by httptest.Server is accepted.\nfunc proxyProtoTLSConfig(listenPort int, backendAddr, ppVersion string, transportVersions []string) string {\n\tversionsJSON, _ := json.Marshal(transportVersions)\n\treturn fmt.Sprintf(`{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\": {\n\t\t\t\t\t\"local\": {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"proxy\": {\n\t\t\t\t\t\t\"listen\": [\":%d\"],\n\t\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\t\"disable\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\"upstreams\": [{\"dial\": \"%s\"}],\n\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\"proxy_protocol\": \"%s\",\n\t\t\t\t\t\t\t\t\t\t\t\"versions\": %s,\n\t\t\t\t\t\t\t\t\t\t\t\"tls\": {\n\t\t\t\t\t\t\t\t\t\t\t\t\"insecure_skip_verify\": true\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}`, listenPort, backendAddr, ppVersion, string(versionsJSON))\n}\n\n// testTLSProxyProtocolMatrix is the shared implementation for TLS-based proxy\n// protocol tests.  
It mirrors testProxyProtocolMatrix but uses a TLS backend.\nfunc testTLSProxyProtocolMatrix(t *testing.T, ppVersion string, transportVersions []string, numRequests int) {\n\tt.Helper()\n\n\tbackend := newTLSProxyProtoBackend(t)\n\tlistenPort := freePort(t)\n\n\ttester := caddytest.NewTester(t)\n\ttester.WithDefaultOverrides(caddytest.Config{\n\t\tAdminPort: 2999,\n\t})\n\tcfg := proxyProtoTLSConfig(listenPort, backend.addr(), ppVersion, transportVersions)\n\ttester.InitServer(cfg, \"json\")\n\n\tproxyURL := fmt.Sprintf(\"http://127.0.0.1:%d/\", listenPort)\n\n\tfor i := 0; i < numRequests; i++ {\n\t\tresp, err := tester.Client.Get(proxyURL)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"request %d/%d: GET %s: %v\", i+1, numRequests, proxyURL, err)\n\t\t}\n\t\tresp.Body.Close()\n\t\tif resp.StatusCode != http.StatusOK {\n\t\t\tt.Errorf(\"request %d/%d: expected status 200, got %d\", i+1, numRequests, resp.StatusCode)\n\t\t}\n\t}\n\n\taddrs := backend.recordedAddrs()\n\tif len(addrs) == 0 {\n\t\tt.Fatalf(\"backend recorded no PROXY protocol addresses (expected at least 1)\")\n\t}\n\n\tfor i, addr := range addrs {\n\t\thost, _, err := net.SplitHostPort(addr)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"addr[%d] %q: SplitHostPort: %v\", i, addr, err)\n\t\t\tcontinue\n\t\t}\n\t\tif host != \"127.0.0.1\" {\n\t\t\tt.Errorf(\"addr[%d]: expected source 127.0.0.1, got %q\", i, host)\n\t\t}\n\t}\n}\n\n// proxyProtoConfig builds a Caddy JSON configuration that:\n//   - listens on listenPort for inbound HTTP requests\n//   - proxies them to backendAddr with PROXY protocol ppVersion (\"v1\"/\"v2\")\n//   - uses the given transport versions (e.g. 
[\"1.1\"] or [\"h2c\"])\nfunc proxyProtoConfig(listenPort int, backendAddr, ppVersion string, transportVersions []string) string {\n\tversionsJSON, _ := json.Marshal(transportVersions)\n\treturn fmt.Sprintf(`{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\": {\n\t\t\t\t\t\"local\": {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"proxy\": {\n\t\t\t\t\t\t\"listen\": [\":%d\"],\n\t\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\t\"disable\": true\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\"upstreams\": [{\"dial\": \"%s\"}],\n\t\t\t\t\t\t\t\t\t\t\"transport\": {\n\t\t\t\t\t\t\t\t\t\t\t\"protocol\": \"http\",\n\t\t\t\t\t\t\t\t\t\t\t\"proxy_protocol\": \"%s\",\n\t\t\t\t\t\t\t\t\t\t\t\"versions\": %s\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}`, listenPort, backendAddr, ppVersion, string(versionsJSON))\n}\n\n// freePort returns a free local TCP port by binding briefly and releasing it.\nfunc freePort(t *testing.T) int {\n\tt.Helper()\n\tln, err := net.Listen(\"tcp\", \"127.0.0.1:0\")\n\tif err != nil {\n\t\tt.Fatalf(\"freePort: %v\", err)\n\t}\n\tport := ln.Addr().(*net.TCPAddr).Port\n\t_ = ln.Close()\n\treturn port\n}\n\n// TestProxyProtocolV1WithH1 verifies that PROXY protocol v1 headers are sent\n// correctly when the transport uses HTTP/1.1 to the upstream.\nfunc TestProxyProtocolV1WithH1(t *testing.T) {\n\ttestProxyProtocolMatrix(t, \"v1\", []string{\"1.1\"}, 1)\n}\n\n// TestProxyProtocolV2WithH1 verifies that PROXY protocol v2 headers are sent\n// correctly when the transport uses HTTP/1.1 to the upstream.\nfunc 
TestProxyProtocolV2WithH1(t *testing.T) {\n\ttestProxyProtocolMatrix(t, \"v2\", []string{\"1.1\"}, 1)\n}\n\n// TestProxyProtocolV1WithH2C verifies that PROXY protocol v1 headers are sent\n// correctly when the transport uses h2c (HTTP/2 cleartext) to the upstream.\nfunc TestProxyProtocolV1WithH2C(t *testing.T) {\n\ttestProxyProtocolMatrix(t, \"v1\", []string{\"h2c\"}, 1)\n}\n\n// TestProxyProtocolV2WithH2C verifies that PROXY protocol v2 headers are sent\n// correctly when the transport uses h2c (HTTP/2 cleartext) to the upstream.\n// This is the primary regression test for github.com/caddyserver/caddy/issues/7529:\n// before the fix, the h2 transport opened a new TCP connection per request\n// (because req.URL.Host was mangled differently for each request due to the\n// varying client port), which caused file-descriptor exhaustion under load.\nfunc TestProxyProtocolV2WithH2C(t *testing.T) {\n\ttestProxyProtocolMatrix(t, \"v2\", []string{\"h2c\"}, 1)\n}\n\n// TestProxyProtocolV2WithH2CMultipleRequests sends several sequential requests\n// through the h2c + PROXY-protocol path and confirms that:\n//  1. Every request receives a 200 response (no connection exhaustion).\n//  2. 
The backend received at least one PROXY header (connection was reused).\n//\n// This is the core regression guard for issue #7529: without the fix, a new\n// TCP connection was opened per request, quickly exhausting file descriptors.\nfunc TestProxyProtocolV2WithH2CMultipleRequests(t *testing.T) {\n\ttestProxyProtocolMatrix(t, \"v2\", []string{\"h2c\"}, 5)\n}\n\n// TestProxyProtocolV1WithH2 verifies that PROXY protocol v1 headers are sent\n// correctly when the transport uses HTTP/2 over TLS (h2) to the upstream.\nfunc TestProxyProtocolV1WithH2(t *testing.T) {\n\ttestTLSProxyProtocolMatrix(t, \"v1\", []string{\"2\"}, 1)\n}\n\n// TestProxyProtocolV2WithH2 verifies that PROXY protocol v2 headers are sent\n// correctly when the transport uses HTTP/2 over TLS (h2) to the upstream.\nfunc TestProxyProtocolV2WithH2(t *testing.T) {\n\ttestTLSProxyProtocolMatrix(t, \"v2\", []string{\"2\"}, 1)\n}\n\n// TestProxyProtocolServerAndProxy is an end-to-end matrix test that exercises\n// all combinations of PROXY protocol version x transport version.\nfunc TestProxyProtocolServerAndProxy(t *testing.T) {\n\tplainTests := []struct {\n\t\tname              string\n\t\tppVersion         string\n\t\ttransportVersions []string\n\t\tnumRequests       int\n\t}{\n\t\t{\"h1-v1\", \"v1\", []string{\"1.1\"}, 3},\n\t\t{\"h1-v2\", \"v2\", []string{\"1.1\"}, 3},\n\t\t{\"h2c-v1\", \"v1\", []string{\"h2c\"}, 3},\n\t\t{\"h2c-v2\", \"v2\", []string{\"h2c\"}, 3},\n\t}\n\tfor _, tc := range plainTests {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\ttestProxyProtocolMatrix(t, tc.ppVersion, tc.transportVersions, tc.numRequests)\n\t\t})\n\t}\n\n\ttlsTests := []struct {\n\t\tname              string\n\t\tppVersion         string\n\t\ttransportVersions []string\n\t\tnumRequests       int\n\t}{\n\t\t{\"h2-v1\", \"v1\", []string{\"2\"}, 3},\n\t\t{\"h2-v2\", \"v2\", []string{\"2\"}, 3},\n\t}\n\tfor _, tc := range tlsTests {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\ttestTLSProxyProtocolMatrix(t, 
tc.ppVersion, tc.transportVersions, tc.numRequests)\n\t\t})\n\t}\n}\n\n// testProxyProtocolMatrix is the shared implementation for the proxy protocol\n// tests.  It:\n//  1. Starts a go-proxyproto-wrapped backend.\n//  2. Configures Caddy as a reverse proxy with the given PROXY protocol\n//     version and transport versions.\n//  3. Sends numRequests GET requests through Caddy and asserts 200 OK each time.\n//  4. Asserts the backend recorded at least one PROXY header whose source host\n//     is 127.0.0.1 (the loopback address used by the test client).\nfunc testProxyProtocolMatrix(t *testing.T, ppVersion string, transportVersions []string, numRequests int) {\n\tt.Helper()\n\n\tbackend := newProxyProtoBackend(t)\n\tlistenPort := freePort(t)\n\n\ttester := caddytest.NewTester(t)\n\ttester.WithDefaultOverrides(caddytest.Config{\n\t\tAdminPort: 2999,\n\t})\n\tcfg := proxyProtoConfig(listenPort, backend.addr(), ppVersion, transportVersions)\n\ttester.InitServer(cfg, \"json\")\n\n\t// If the test is h2c-only (no \"1.1\" in versions), reconfigure the test\n\t// client transport to use unencrypted HTTP/2 so we actually exercise the\n\t// h2c code path through Caddy.\n\tif slices.Contains(transportVersions, \"h2c\") && !slices.Contains(transportVersions, \"1.1\") {\n\t\ttr, ok := tester.Client.Transport.(*http.Transport)\n\t\tif ok {\n\t\t\ttr.Protocols = new(http.Protocols)\n\t\t\ttr.Protocols.SetHTTP1(false)\n\t\t\ttr.Protocols.SetUnencryptedHTTP2(true)\n\t\t}\n\t}\n\n\tproxyURL := fmt.Sprintf(\"http://127.0.0.1:%d/\", listenPort)\n\n\tfor i := 0; i < numRequests; i++ {\n\t\tresp, err := tester.Client.Get(proxyURL)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"request %d/%d: GET %s: %v\", i+1, numRequests, proxyURL, err)\n\t\t}\n\t\tresp.Body.Close()\n\t\tif resp.StatusCode != http.StatusOK {\n\t\t\tt.Errorf(\"request %d/%d: expected status 200, got %d\", i+1, numRequests, resp.StatusCode)\n\t\t}\n\t}\n\n\t// The backend must have seen at least one PROXY header.  
For h1, there is\n\t// one per request; for h2c, requests share the same connection so only one\n\t// header is written at connection establishment.\n\taddrs := backend.recordedAddrs()\n\tif len(addrs) == 0 {\n\t\tt.Fatalf(\"backend recorded no PROXY protocol addresses (expected at least 1)\")\n\t}\n\n\t// Every PROXY-decoded source address must be the loopback address since\n\t// the test client always connects from 127.0.0.1.\n\tfor i, addr := range addrs {\n\t\thost, _, err := net.SplitHostPort(addr)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"addr[%d] %q: SplitHostPort: %v\", i, addr, err)\n\t\t\tcontinue\n\t\t}\n\t\tif host != \"127.0.0.1\" {\n\t\t\tt.Errorf(\"addr[%d]: expected source 127.0.0.1, got %q\", i, host)\n\t\t}\n\t}\n}\n\n// TestProxyProtocolListenerWrapper verifies that Caddy's\n// caddy.listeners.proxy_protocol listener wrapper can successfully parse\n// incoming PROXY protocol headers.\n//\n// The test dials Caddy's listening port directly, injects a raw PROXY v2\n// header spoofing source address 10.0.0.1:1234, then sends a normal\n// HTTP/1.1 GET request.  The Caddy server is configured to echo back the\n// remote address ({http.request.remote.host}).  
The test asserts that the\n// echoed address is the spoofed 10.0.0.1.\nfunc TestProxyProtocolListenerWrapper(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port 9080\n\t\thttps_port 9443\n\t\tgrace_period 1ns\n\t\tservers :9080 {\n\t\t\tlistener_wrappers {\n\t\t\t\tproxy_protocol {\n\t\t\t\t\ttimeout 5s\n\t\t\t\t\tallow 127.0.0.0/8\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\thttp://localhost:9080 {\n\t\trespond \"{http.request.remote.host}\"\n\t}`, \"caddyfile\")\n\n\t// Dial the Caddy listener directly and inject a PROXY v2 header that\n\t// claims the connection originates from 10.0.0.1:1234.\n\tconn, err := net.Dial(\"tcp\", \"127.0.0.1:9080\")\n\tif err != nil {\n\t\tt.Fatalf(\"dial: %v\", err)\n\t}\n\tdefer conn.Close()\n\n\tspoofedSrc := &net.TCPAddr{IP: net.ParseIP(\"10.0.0.1\"), Port: 1234}\n\tspoofedDst := &net.TCPAddr{IP: net.ParseIP(\"127.0.0.1\"), Port: 9080}\n\thdr := goproxy.HeaderProxyFromAddrs(2, spoofedSrc, spoofedDst)\n\tif _, err := hdr.WriteTo(conn); err != nil {\n\t\tt.Fatalf(\"write proxy header: %v\", err)\n\t}\n\n\t// Write a minimal HTTP/1.1 GET request.\n\t_, err = fmt.Fprintf(conn,\n\t\t\"GET / HTTP/1.1\\r\\nHost: localhost\\r\\nConnection: close\\r\\n\\r\\n\")\n\tif err != nil {\n\t\tt.Fatalf(\"write HTTP request: %v\", err)\n\t}\n\n\t// Read the raw response and look for the spoofed address in the body.\n\tbuf := make([]byte, 4096)\n\tn, _ := conn.Read(buf)\n\traw := string(buf[:n])\n\n\tif !strings.Contains(raw, \"10.0.0.1\") {\n\t\tt.Errorf(\"expected spoofed address 10.0.0.1 in response body; full response:\\n%s\", raw)\n\t}\n}\n"
  },
  {
    "path": "caddytest/integration/reverseproxy_test.go",
    "content": "package integration\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestSRVReverseProxy(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\": {\n\t\t\t\t\t\"local\": {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":18080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\"dynamic_upstreams\": {\n\t\t\t\t\t\t\t\t\t\t\t\"source\": \"srv\",\n\t\t\t\t\t\t\t\t\t\t\t\"name\": \"srv.host.service.consul\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n}\n\nfunc TestDialWithPlaceholderUnix(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.SkipNow()\n\t}\n\n\tf, err := os.CreateTemp(\"\", \"*.sock\")\n\tif err != nil {\n\t\tt.Errorf(\"failed to create TempFile: %s\", err)\n\t\treturn\n\t}\n\t// a hack to get a file name within a valid path to use as socket\n\tsocketName := f.Name()\n\tos.Remove(f.Name())\n\n\tserver := http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\tw.Write([]byte(\"Hello, World!\"))\n\t\t}),\n\t}\n\n\tunixListener, err := net.Listen(\"unix\", socketName)\n\tif err != nil {\n\t\tt.Errorf(\"failed to listen on the socket: %s\", err)\n\t\treturn\n\t}\n\tgo server.Serve(unixListener)\n\tt.Cleanup(func() {\n\t\tserver.Close()\n\t})\n\truntime.Gosched() // Allow other goroutines to run\n\n\ttester := 
caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\": {\n\t\t\t\t\t\"local\": {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":18080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"unix/{http.request.header.X-Caddy-Upstream-Dial}\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\n\treq, err := http.NewRequest(http.MethodGet, \"http://localhost:18080\", nil)\n\tif err != nil {\n\t\tt.Fail()\n\t\treturn\n\t}\n\treq.Header.Set(\"X-Caddy-Upstream-Dial\", socketName)\n\ttester.AssertResponse(req, 200, \"Hello, World!\")\n}\n\nfunc TestReverseProxyWithPlaceholderDialAddress(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\": {\n\t\t\t\t\t\"local\": {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":18080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": 
[\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello, World!\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"srv1\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"{http.request.header.X-Caddy-Upstream-Dial}\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\n\treq, err := http.NewRequest(http.MethodGet, \"http://localhost:9080\", nil)\n\tif err != nil {\n\t\tt.Fail()\n\t\treturn\n\t}\n\treq.Header.Set(\"X-Caddy-Upstream-Dial\", \"localhost:18080\")\n\ttester.AssertResponse(req, 200, \"Hello, World!\")\n}\n\nfunc TestReverseProxyWithPlaceholderTCPDialAddress(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\": {\n\t\t\t\t\t\"local\": {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"http\": {\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": 
{\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":18080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\"body\": \"Hello, World!\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"srv1\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9080\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"reverse_proxy\",\n\t\t\t\t\t\t\t\t\t\t\"upstreams\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"dial\": \"tcp/{http.request.header.X-Caddy-Upstream-Dial}:18080\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"automatic_https\": {\n\t\t\t\t\t\t\t\"skip\": [\n\t\t\t\t\t\t\t\t\"localhost\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\n\treq, err := http.NewRequest(http.MethodGet, \"http://localhost:9080\", nil)\n\tif err != nil {\n\t\tt.Fail()\n\t\treturn\n\t}\n\treq.Header.Set(\"X-Caddy-Upstream-Dial\", \"localhost\")\n\ttester.AssertResponse(req, 200, \"Hello, World!\")\n}\n\nfunc TestReverseProxyHealthCheck(t *testing.T) {\n\t// Start lightweight backend servers 
so they're ready before Caddy's\n\t// active health checker runs; this avoids a startup race where the\n\t// health checker probes backends that haven't yet begun accepting\n\t// connections and marks them unhealthy.\n\t//\n\t// This mirrors how health checks are typically used in practice (to a separate\n\t// backend service) and avoids probing the same Caddy instance while it's still\n\t// provisioning and not ready to accept connections.\n\n\t// backend server that responds to proxied requests\n\thelloSrv := &http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\t_, _ = w.Write([]byte(\"Hello, World!\"))\n\t\t}),\n\t}\n\tln0, err := net.Listen(\"tcp\", \"127.0.0.1:2020\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to listen on 127.0.0.1:2020: %v\", err)\n\t}\n\tgo helloSrv.Serve(ln0)\n\tt.Cleanup(func() { helloSrv.Close(); ln0.Close() })\n\n\t// backend server that serves health checks\n\thealthSrv := &http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\t_, _ = w.Write([]byte(\"ok\"))\n\t\t}),\n\t}\n\tln1, err := net.Listen(\"tcp\", \"127.0.0.1:2021\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to listen on 127.0.0.1:2021: %v\", err)\n\t}\n\tgo healthSrv.Serve(ln1)\n\tt.Cleanup(func() { healthSrv.Close(); ln1.Close() })\n\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period 1ns\n\t}\n\thttp://localhost:9080 {\n\t\treverse_proxy {\n\t\t\tto localhost:2020\n\n\t\t\thealth_uri /health\n\t\t\thealth_port 2021\n\t\t\thealth_interval 10ms\n\t\t\thealth_timeout 100ms\n\t\t\thealth_passes 1\n\t\t\thealth_fails 1\n\t\t}\n\t}\n\t`, \"caddyfile\")\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"Hello, World!\")\n}\n\n// TestReverseProxyHealthCheckPortUsed verifies that health_port is actually\n// used for active health checks and not the 
upstream's main port. This is a\n// regression test for https://github.com/caddyserver/caddy/issues/7524.\nfunc TestReverseProxyHealthCheckPortUsed(t *testing.T) {\n\t// upstream server: serves proxied requests normally, but returns 503 for\n\t// /health so that if health checks mistakenly hit this port the upstream\n\t// gets marked unhealthy and the proxy returns 503.\n\tupstreamSrv := &http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\tif req.URL.Path == \"/health\" {\n\t\t\t\tw.WriteHeader(http.StatusServiceUnavailable)\n\t\t\t\treturn\n\t\t\t}\n\t\t\t_, _ = w.Write([]byte(\"Hello, World!\"))\n\t\t}),\n\t}\n\tln0, err := net.Listen(\"tcp\", \"127.0.0.1:2022\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to listen on 127.0.0.1:2022: %v\", err)\n\t}\n\tgo upstreamSrv.Serve(ln0)\n\tt.Cleanup(func() { upstreamSrv.Close(); ln0.Close() })\n\n\t// separate health check server on the configured health_port: returns 200\n\t// so the upstream is marked healthy only if health checks go to this port.\n\thealthSrv := &http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\t_, _ = w.Write([]byte(\"ok\"))\n\t\t}),\n\t}\n\tln1, err := net.Listen(\"tcp\", \"127.0.0.1:2023\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to listen on 127.0.0.1:2023: %v\", err)\n\t}\n\tgo healthSrv.Serve(ln1)\n\tt.Cleanup(func() { healthSrv.Close(); ln1.Close() })\n\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period 1ns\n\t}\n\thttp://localhost:9080 {\n\t\treverse_proxy {\n\t\t\tto localhost:2022\n\n\t\t\thealth_uri /health\n\t\t\thealth_port 2023\n\t\t\thealth_interval 10ms\n\t\t\thealth_timeout 100ms\n\t\t\thealth_passes 1\n\t\t\thealth_fails 1\n\t\t}\n\t}\n\t`, \"caddyfile\")\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"Hello, World!\")\n}\n\nfunc 
TestReverseProxyHealthCheckUnixSocket(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.SkipNow()\n\t}\n\ttester := caddytest.NewTester(t)\n\tf, err := os.CreateTemp(\"\", \"*.sock\")\n\tif err != nil {\n\t\tt.Errorf(\"failed to create TempFile: %s\", err)\n\t\treturn\n\t}\n\t// a hack to get a file name within a valid path to use as a socket\n\tsocketName := f.Name()\n\tos.Remove(f.Name())\n\n\tserver := http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\tif strings.HasPrefix(req.URL.Path, \"/health\") {\n\t\t\t\tw.Write([]byte(\"ok\"))\n\t\t\t\treturn\n\t\t\t}\n\t\t\tw.Write([]byte(\"Hello, World!\"))\n\t\t}),\n\t}\n\n\tunixListener, err := net.Listen(\"unix\", socketName)\n\tif err != nil {\n\t\tt.Errorf(\"failed to listen on the socket: %s\", err)\n\t\treturn\n\t}\n\tgo server.Serve(unixListener)\n\tt.Cleanup(func() {\n\t\tserver.Close()\n\t})\n\truntime.Gosched() // Allow other goroutines to run\n\n\ttester.InitServer(fmt.Sprintf(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period 1ns\n\t}\n\thttp://localhost:9080 {\n\t\treverse_proxy {\n\t\t\tto unix/%s\n\n\t\t\thealth_uri /health\n\t\t\thealth_port 2021\n\t\t\thealth_interval 2s\n\t\t\thealth_timeout 5s\n\t\t}\n\t}\n\t`, socketName), \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"Hello, World!\")\n}\n\nfunc TestReverseProxyHealthCheckUnixSocketWithoutPort(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.SkipNow()\n\t}\n\ttester := caddytest.NewTester(t)\n\tf, err := os.CreateTemp(\"\", \"*.sock\")\n\tif err != nil {\n\t\tt.Errorf(\"failed to create TempFile: %s\", err)\n\t\treturn\n\t}\n\t// a hack to get a file name within a valid path to use as a socket\n\tsocketName := f.Name()\n\tos.Remove(f.Name())\n\n\tserver := http.Server{\n\t\tHandler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {\n\t\t\tif 
strings.HasPrefix(req.URL.Path, \"/health\") {\n\t\t\t\tw.Write([]byte(\"ok\"))\n\t\t\t\treturn\n\t\t\t}\n\t\t\tw.Write([]byte(\"Hello, World!\"))\n\t\t}),\n\t}\n\n\tunixListener, err := net.Listen(\"unix\", socketName)\n\tif err != nil {\n\t\tt.Errorf(\"failed to listen on the socket: %s\", err)\n\t\treturn\n\t}\n\tgo server.Serve(unixListener)\n\tt.Cleanup(func() {\n\t\tserver.Close()\n\t})\n\truntime.Gosched() // Allow other goroutines to run\n\n\ttester.InitServer(fmt.Sprintf(`\n\t{\n\t\tskip_install_trust\n\t\tadmin localhost:2999\n\t\thttp_port     9080\n\t\thttps_port    9443\n\t\tgrace_period 1ns\n\t}\n\thttp://localhost:9080 {\n\t\treverse_proxy {\n\t\t\tto unix/%s\n\n\t\t\thealth_uri /health\n\t\t\thealth_interval 2s\n\t\t\thealth_timeout 5s\n\t\t}\n\t}\n\t`, socketName), \"caddyfile\")\n\n\ttester.AssertGetResponse(\"http://localhost:9080/\", 200, \"Hello, World!\")\n}\n"
  },
  {
    "path": "caddytest/integration/sni_test.go",
    "content": "package integration\n\nimport (\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\nfunc TestDefaultSNI(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(`{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 9080,\n\t\t\t\t\"https_port\": 9443,\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9443\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from a.caddy.localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"certificate_selection\": {\n\t\t\t\t\t\t\t\t\t\"any_tag\": [\"cert0\"]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\"sni\": 
[\n\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"default_sni\": \"*.caddy.localhost\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"tls\": {\n\t\t\t\t\"certificates\": {\n\t\t\t\t\t\"load_files\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"certificate\": \"/caddy.localhost.crt\",\n\t\t\t\t\t\t\t\"key\": \"/caddy.localhost.key\",\n\t\t\t\t\t\t\t\"tags\": [\n\t\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\" : {\n\t\t\t\t\t\"local\" : {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\n\t// act and assert\n\t// makes a request with no sni\n\ttester.AssertGetResponse(\"https://127.0.0.1:9443/version\", 200, \"hello from a.caddy.localhost\")\n}\n\nfunc TestDefaultSNIWithNamedHostAndExplicitIP(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(` \n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": {\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 9080,\n\t\t\t\t\"https_port\": 9443,\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9443\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"subroute\",\n\t\t\t\t\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from a\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"path\": 
[\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"host\": [\n\t\t\t\t\t\t\t\t\t\t\t\"a.caddy.localhost\",\n\t\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"terminal\": true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"certificate_selection\": {\n\t\t\t\t\t\t\t\t\t\"any_tag\": [\"cert0\"]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"default_sni\": \"a.caddy.localhost\",\n\t\t\t\t\t\t\t\t\"match\": {\n\t\t\t\t\t\t\t\t\t\"sni\": [\n\t\t\t\t\t\t\t\t\t\t\"a.caddy.localhost\",\n\t\t\t\t\t\t\t\t\t\t\"127.0.0.1\",\n\t\t\t\t\t\t\t\t\t\t\"\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"default_sni\": \"a.caddy.localhost\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"tls\": {\n\t\t\t\t\"certificates\": {\n\t\t\t\t\t\"load_files\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"certificate\": \"/a.caddy.localhost.crt\",\n\t\t\t\t\t\t\t\"key\": \"/a.caddy.localhost.key\",\n\t\t\t\t\t\t\t\"tags\": [\n\t\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\" : {\n\t\t\t\t\t\"local\" : {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\n\t// act and assert\n\t// makes a request with no sni\n\ttester.AssertGetResponse(\"https://127.0.0.1:9443/version\", 200, \"hello from a\")\n}\n\nfunc TestDefaultSNIWithPortMappingOnly(t *testing.T) {\n\t// arrange\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(` \n\t{\n\t\t\"admin\": {\n\t\t\t\"listen\": \"localhost:2999\"\n\t\t},\n\t\t\"apps\": 
{\n\t\t\t\"http\": {\n\t\t\t\t\"http_port\": 9080,\n\t\t\t\t\"https_port\": 9443,\n\t\t\t\t\"grace_period\": 1,\n\t\t\t\t\"servers\": {\n\t\t\t\t\t\"srv0\": {\n\t\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\t\":9443\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"body\": \"hello from a.caddy.localhost\",\n\t\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"tls_connection_policies\": [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"certificate_selection\": {\n\t\t\t\t\t\t\t\t\t\"any_tag\": [\"cert0\"]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"default_sni\": \"a.caddy.localhost\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"tls\": {\n\t\t\t\t\"certificates\": {\n\t\t\t\t\t\"load_files\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"certificate\": \"/a.caddy.localhost.crt\",\n\t\t\t\t\t\t\t\"key\": \"/a.caddy.localhost.key\",\n\t\t\t\t\t\t\t\"tags\": [\n\t\t\t\t\t\t\t\t\"cert0\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"pki\": {\n\t\t\t\t\"certificate_authorities\" : {\n\t\t\t\t\t\"local\" : {\n\t\t\t\t\t\t\"install_trust\": false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t`, \"json\")\n\n\t// act and assert\n\t// makes a request with no sni\n\ttester.AssertGetResponse(\"https://127.0.0.1:9443/version\", 200, \"hello from a.caddy.localhost\")\n}\n\nfunc TestHttpOnlyOnDomainWithSNI(t *testing.T) {\n\tcaddytest.AssertAdapt(t, `\n\t{\n\t\tskip_install_trust\n\t\tdefault_sni a.caddy.localhost\n\t}\n\t:80 {\n\t\trespond /version 200 {\n\t\t\tbody \"hello from localhost\"\n\t\t}\n\t}\n\t`, \"caddyfile\", `{\n\t\"apps\": {\n\t\t\"http\": 
{\n\t\t\t\"servers\": {\n\t\t\t\t\"srv0\": {\n\t\t\t\t\t\"listen\": [\n\t\t\t\t\t\t\":80\"\n\t\t\t\t\t],\n\t\t\t\t\t\"routes\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"path\": [\n\t\t\t\t\t\t\t\t\t\t\"/version\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"handle\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"body\": \"hello from localhost\",\n\t\t\t\t\t\t\t\t\t\"handler\": \"static_response\",\n\t\t\t\t\t\t\t\t\t\"status_code\": 200\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"pki\": {\n\t\t\t\"certificate_authorities\": {\n\t\t\t\t\"local\": {\n\t\t\t\t\t\"install_trust\": false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}`)\n}\n"
  },
  {
    "path": "caddytest/integration/stream_test.go",
    "content": "package integration\n\nimport (\n\t\"compress/gzip\"\n\t\"context\"\n\t\"crypto/rand\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/http/httputil\"\n\t\"net/url\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"golang.org/x/net/http2\"\n\t\"golang.org/x/net/http2/h2c\"\n\n\t\"github.com/caddyserver/caddy/v2/caddytest\"\n)\n\n// (see https://github.com/caddyserver/caddy/issues/3556 for use case)\nfunc TestH2ToH2CStream(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(` \n  {\n\t\"admin\": {\n\t\t\"listen\": \"localhost:2999\"\n\t},\n    \"apps\": {\n      \"http\": {\n        \"http_port\": 9080,\n        \"https_port\": 9443,\n\t\t\"grace_period\": 1,\n        \"servers\": {\n          \"srv0\": {\n            \"listen\": [\n              \":9443\"\n            ],\n            \"routes\": [\n              {\n                \"handle\": [\n                  {\n                    \"handler\": \"reverse_proxy\",\n                    \"transport\": {\n                      \"protocol\": \"http\",\n                      \"compression\": false,\n                      \"versions\": [\n                        \"h2c\",\n                        \"2\"\n                      ]\n                    },\n                    \"upstreams\": [\n                      {\n                        \"dial\": \"localhost:54321\"\n                      }\n                    ]\n                  }\n                ],\n                \"match\": [\n                  {\n                    \"path\": [\n                      \"/tov2ray\"\n                    ]\n                  }\n                ]\n              }\n            ],\n            \"tls_connection_policies\": [\n              {\n                \"certificate_selection\": {\n                  \"any_tag\": [\"cert0\"]\n                },\n                \"default_sni\": \"a.caddy.localhost\"\n              }\n            ]\n          }\n        }\n      },\n      \"tls\": {\n        
\"certificates\": {\n          \"load_files\": [\n            {\n              \"certificate\": \"/a.caddy.localhost.crt\",\n              \"key\": \"/a.caddy.localhost.key\",\n              \"tags\": [\n                \"cert0\"\n              ]\n            }\n          ]\n        }\n      },\n      \"pki\": {\n        \"certificate_authorities\" : {\n          \"local\" : {\n            \"install_trust\": false\n          }\n        }\n      }\n    }\n  }\n  `, \"json\")\n\n\texpectedBody := \"some data to be echoed\"\n\t// start the server\n\tserver := testH2ToH2CStreamServeH2C(t)\n\tgo server.ListenAndServe()\n\tdefer func() {\n\t\tctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)\n\t\tdefer cancel()\n\t\tserver.Shutdown(ctx)\n\t}()\n\n\tr, w := io.Pipe()\n\treq := &http.Request{\n\t\tMethod: \"PUT\",\n\t\tBody:   io.NopCloser(r),\n\t\tURL: &url.URL{\n\t\t\tScheme: \"https\",\n\t\t\tHost:   \"127.0.0.1:9443\",\n\t\t\tPath:   \"/tov2ray\",\n\t\t},\n\t\tProto:      \"HTTP/2\",\n\t\tProtoMajor: 2,\n\t\tProtoMinor: 0,\n\t\tHeader:     make(http.Header),\n\t}\n\t// Disable any compression method from server.\n\treq.Header.Set(\"Accept-Encoding\", \"identity\")\n\n\tresp := tester.AssertResponseCode(req, http.StatusOK)\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn\n\t}\n\tgo func() {\n\t\tfmt.Fprint(w, expectedBody)\n\t\tw.Close()\n\t}()\n\n\tdefer resp.Body.Close()\n\tbytes, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tt.Fatalf(\"unable to read the response body %s\", err)\n\t}\n\n\tbody := string(bytes)\n\n\tif !strings.Contains(body, expectedBody) {\n\t\tt.Errorf(\"requesting \\\"%s\\\" expected response body \\\"%s\\\" but got \\\"%s\\\"\", req.RequestURI, expectedBody, body)\n\t}\n}\n\nfunc testH2ToH2CStreamServeH2C(t *testing.T) *http.Server {\n\th2s := &http2.Server{}\n\thandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\trstring, err := httputil.DumpRequest(r, false)\n\t\tif err == nil 
{\n\t\t\tt.Logf(\"h2c server received req: %s\", rstring)\n\t\t}\n\t\t// We only accept HTTP/2!\n\t\tif r.ProtoMajor != 2 {\n\t\t\tt.Error(\"Not an HTTP/2 request, rejected!\")\n\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\treturn\n\t\t}\n\n\t\tif r.Host != \"127.0.0.1:9443\" {\n\t\t\tt.Errorf(\"r.Host doesn't match, %v!\", r.Host)\n\t\t\tw.WriteHeader(http.StatusNotFound)\n\t\t\treturn\n\t\t}\n\n\t\tif !strings.HasPrefix(r.URL.Path, \"/tov2ray\") {\n\t\t\tw.WriteHeader(http.StatusNotFound)\n\t\t\treturn\n\t\t}\n\n\t\tw.Header().Set(\"Cache-Control\", \"no-store\")\n\t\tw.WriteHeader(200)\n\t\thttp.NewResponseController(w).Flush()\n\n\t\tbuf := make([]byte, 4*1024)\n\n\t\tfor {\n\t\t\tn, err := r.Body.Read(buf)\n\t\t\tif n > 0 {\n\t\t\t\tw.Write(buf[:n])\n\t\t\t}\n\n\t\t\tif err != nil {\n\t\t\t\tif err == io.EOF {\n\t\t\t\t\tr.Body.Close()\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t})\n\n\tserver := &http.Server{\n\t\tAddr:    \"127.0.0.1:54321\",\n\t\tHandler: h2c.NewHandler(handler, h2s),\n\t}\n\treturn server\n}\n\n// (see https://github.com/caddyserver/caddy/issues/3606 for use case)\nfunc TestH2ToH1ChunkedResponse(t *testing.T) {\n\ttester := caddytest.NewTester(t)\n\ttester.InitServer(` \n{\n\t\"admin\": {\n\t\t\"listen\": \"localhost:2999\"\n\t},\n  \"logging\": {\n    \"logs\": {\n      \"default\": {\n        \"level\": \"DEBUG\"\n      }\n    }\n  },\n  \"apps\": {\n    \"http\": {\n      \"http_port\": 9080,\n      \"https_port\": 9443,\n\t  \"grace_period\": 1,\n      \"servers\": {\n        \"srv0\": {\n          \"listen\": [\n            \":9443\"\n          ],\n          \"routes\": [\n            {\n              \"handle\": [\n                {\n                  \"handler\": \"subroute\",\n                  \"routes\": [\n                    {\n                      \"handle\": [\n                        {\n                          \"encodings\": {\n                            \"gzip\": {}\n                          },\n           
               \"handler\": \"encode\"\n                        }\n                      ]\n                    },\n                    {\n                      \"handle\": [\n                        {\n                          \"handler\": \"reverse_proxy\",\n                          \"upstreams\": [\n                            {\n                              \"dial\": \"localhost:54321\"\n                            }\n                          ]\n                        }\n                      ],\n                      \"match\": [\n                        {\n                          \"path\": [\n                            \"/tov2ray\"\n                          ]\n                        }\n                      ]\n                    }\n                  ]\n                }\n              ],\n              \"terminal\": true\n            }\n          ],\n          \"tls_connection_policies\": [\n            {\n              \"certificate_selection\": {\n                \"any_tag\": [\n                  \"cert0\"\n                ]\n              },\n              \"default_sni\": \"a.caddy.localhost\"\n            }\n          ]\n        }\n      }\n    },\n    \"tls\": {\n      \"certificates\": {\n        \"load_files\": [\n          {\n            \"certificate\": \"/a.caddy.localhost.crt\",\n            \"key\": \"/a.caddy.localhost.key\",\n            \"tags\": [\n              \"cert0\"\n            ]\n          }\n        ]\n      }\n    },\n    \"pki\": {\n      \"certificate_authorities\": {\n        \"local\": {\n          \"install_trust\": false\n        }\n      }\n    }\n  }\n}\n  `, \"json\")\n\n\t// need a large body here to trigger caddy's compression, larger than gzip.miniLength\n\texpectedBody, err := GenerateRandomString(1024)\n\tif err != nil {\n\t\tt.Fatalf(\"generate expected body failed, err: %s\", err)\n\t}\n\n\t// start the server\n\tserver := testH2ToH1ChunkedResponseServeH1(t)\n\tgo server.ListenAndServe()\n\tdefer func() 
{\n\t\tctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)\n\t\tdefer cancel()\n\t\tserver.Shutdown(ctx)\n\t}()\n\n\tr, w := io.Pipe()\n\treq := &http.Request{\n\t\tMethod: \"PUT\",\n\t\tBody:   io.NopCloser(r),\n\t\tURL: &url.URL{\n\t\t\tScheme: \"https\",\n\t\t\tHost:   \"127.0.0.1:9443\",\n\t\t\tPath:   \"/tov2ray\",\n\t\t},\n\t\tProto:      \"HTTP/2\",\n\t\tProtoMajor: 2,\n\t\tProtoMinor: 0,\n\t\tHeader:     make(http.Header),\n\t}\n\t// underlying transport will automatically add gzip\n\t// req.Header.Set(\"Accept-Encoding\", \"gzip\")\n\tgo func() {\n\t\tfmt.Fprint(w, expectedBody)\n\t\tw.Close()\n\t}()\n\tresp := tester.AssertResponseCode(req, http.StatusOK)\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn\n\t}\n\n\tdefer resp.Body.Close()\n\tbytes, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\tt.Fatalf(\"unable to read the response body %s\", err)\n\t}\n\n\tbody := string(bytes)\n\n\tif body != expectedBody {\n\t\tt.Errorf(\"requesting \\\"%s\\\" expected response body \\\"%s\\\" but got \\\"%s\\\"\", req.RequestURI, expectedBody, body)\n\t}\n}\n\nfunc testH2ToH1ChunkedResponseServeH1(t *testing.T) *http.Server {\n\thandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tif r.Host != \"127.0.0.1:9443\" {\n\t\t\tt.Errorf(\"r.Host doesn't match, %v!\", r.Host)\n\t\t\tw.WriteHeader(http.StatusNotFound)\n\t\t\treturn\n\t\t}\n\n\t\tif !strings.HasPrefix(r.URL.Path, \"/tov2ray\") {\n\t\t\tw.WriteHeader(http.StatusNotFound)\n\t\t\treturn\n\t\t}\n\n\t\tdefer r.Body.Close()\n\t\tbytes, err := io.ReadAll(r.Body)\n\t\tif err != nil {\n\t\t\t// t.Fatalf must only be called from the test goroutine; report the\n\t\t\t// failure and abort the request instead.\n\t\t\tt.Errorf(\"unable to read the request body: %s\", err)\n\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\treturn\n\t\t}\n\n\t\tn := len(bytes)\n\n\t\tvar writer io.Writer\n\t\tif strings.Contains(r.Header.Get(\"Accept-Encoding\"), \"gzip\") {\n\t\t\tgw, err := gzip.NewWriterLevel(w, 5)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"can't return gzip 
data\")\n\t\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdefer gw.Close()\n\t\t\twriter = gw\n\t\t\tw.Header().Set(\"Content-Encoding\", \"gzip\")\n\t\t\tw.Header().Del(\"Content-Length\")\n\t\t\tw.WriteHeader(200)\n\t\t} else {\n\t\t\twriter = w\n\t\t}\n\t\tif n > 0 {\n\t\t\twriter.Write(bytes[:])\n\t\t}\n\t})\n\n\tserver := &http.Server{\n\t\tAddr:    \"127.0.0.1:54321\",\n\t\tHandler: handler,\n\t}\n\treturn server\n}\n\n// GenerateRandomBytes returns securely generated random bytes.\n// It will return an error if the system's secure random\n// number generator fails to function correctly, in which\n// case the caller should not continue.\nfunc GenerateRandomBytes(n int) ([]byte, error) {\n\tb := make([]byte, n)\n\t_, err := rand.Read(b)\n\t// Note that err == nil only if we read len(b) bytes.\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn b, nil\n}\n\n// GenerateRandomString returns a securely generated random string.\n// It will return an error if the system's secure random\n// number generator fails to function correctly, in which\n// case the caller should not continue.\nfunc GenerateRandomString(n int) (string, error) {\n\tconst letters = \"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-\"\n\tbytes, err := GenerateRandomBytes(n)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tfor i, b := range bytes {\n\t\tbytes[i] = letters[b%byte(len(letters))]\n\t}\n\treturn string(bytes), nil\n}\n"
  },
  {
    "path": "caddytest/integration/testdata/cookie.html",
    "content": "<h2>Cookie.ClientName {{.Cookie \"clientname\"}}</h2>"
  },
  {
    "path": "caddytest/integration/testdata/foo.txt",
    "content": "foo"
  },
  {
    "path": "caddytest/integration/testdata/foo_with_multiple_trailing_newlines.txt",
    "content": "foo\n\n"
  },
  {
    "path": "caddytest/integration/testdata/foo_with_trailing_newline.txt",
    "content": "foo\n"
  },
  {
    "path": "caddytest/integration/testdata/import_respond.txt",
    "content": "respond \"'I am {args[0]}', hears {args[1]}\""
  },
  {
    "path": "caddytest/integration/testdata/index.localhost.html",
    "content": ""
  },
  {
    "path": "caddytest/integration/testdata/issue_7518_unused_block_panic_snippets.conf",
    "content": "# Used by import_block_snippet_non_replaced_block_from_separate_file.caddyfiletest\n\n(snippet) {\n\theader {\n\t\treverse_proxy localhost:3000\n\t\t{block}\n\t}\n}\n\n# This snippet being unused by the test Caddyfile is intentional.\n# This is to test that a panic runtime error triggered by an out-of-range slice index access\n# will not happen again; see issue #7518 and pull request #7543 for more information.\n(unused_snippet) {\n\theader SomeHeader SomeValue\n}"
  },
  {
    "path": "caddytest/leafcert.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIICUTCCAfugAwIBAgIBADANBgkqhkiG9w0BAQQFADBXMQswCQYDVQQGEwJDTjEL\nMAkGA1UECBMCUE4xCzAJBgNVBAcTAkNOMQswCQYDVQQKEwJPTjELMAkGA1UECxMC\nVU4xFDASBgNVBAMTC0hlcm9uZyBZYW5nMB4XDTA1MDcxNTIxMTk0N1oXDTA1MDgx\nNDIxMTk0N1owVzELMAkGA1UEBhMCQ04xCzAJBgNVBAgTAlBOMQswCQYDVQQHEwJD\nTjELMAkGA1UEChMCT04xCzAJBgNVBAsTAlVOMRQwEgYDVQQDEwtIZXJvbmcgWWFu\nZzBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQCp5hnG7ogBhtlynpOS21cBewKE/B7j\nV14qeyslnr26xZUsSVko36ZnhiaO/zbMOoRcKK9vEcgMtcLFuQTWDl3RAgMBAAGj\ngbEwga4wHQYDVR0OBBYEFFXI70krXeQDxZgbaCQoR4jUDncEMH8GA1UdIwR4MHaA\nFFXI70krXeQDxZgbaCQoR4jUDncEoVukWTBXMQswCQYDVQQGEwJDTjELMAkGA1UE\nCBMCUE4xCzAJBgNVBAcTAkNOMQswCQYDVQQKEwJPTjELMAkGA1UECxMCVU4xFDAS\nBgNVBAMTC0hlcm9uZyBZYW5nggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEE\nBQADQQA/ugzBrjjK9jcWnDVfGHlk3icNRq0oV7Ri32z/+HQX67aRfgZu7KWdI+Ju\nWm7DCfrPNGVwFWUQOmsPue9rZBgO\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "cmd/caddy/main.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package main is the entry point of the Caddy application.\n// Most of Caddy's functionality is provided through modules,\n// which can be plugged in by adding their import below.\n//\n// There is no need to modify the Caddy source code to customize your\n// builds. You can easily build a custom Caddy with these simple steps:\n//\n//  1. Copy this file (main.go) into a new folder\n//  2. Edit the imports below to include the modules you want plugged in\n//  3. Run `go mod init caddy`\n//  4. Run `go install` or `go build` - you now have a custom binary!\n//\n// Or you can use xcaddy which does it all for you as a command:\n// https://github.com/caddyserver/xcaddy\npackage main\n\nimport (\n\t_ \"time/tzdata\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t// plug in Caddy modules here\n\t_ \"github.com/caddyserver/caddy/v2/modules/standard\"\n)\n\nfunc main() {\n\tcaddycmd.Main()\n}\n"
  },
  {
    "path": "cmd/caddy/setcap.sh",
    "content": "#!/bin/sh\n\n# USAGE:\n# \tgo run -exec ./setcap.sh main.go <args...>\n#\n# (Example: `go run -exec ./setcap.sh main.go run --config caddy.json`)\n#\n# For some reason this does not work on my Arch system, so if you find that's\n# the case, you can instead do:\n#\n# \tgo build && ./setcap.sh ./caddy <args...>\n#\n# but this will leave the ./caddy binary lying around.\n#\n\nsudo setcap cap_net_bind_service=+ep \"$1\"\n\"$@\"\n"
  },
  {
    "path": "cmd/cobra.go",
    "content": "package caddycmd\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nvar defaultFactory = newRootCommandFactory(func() *cobra.Command {\n\tbin := caddy.CustomBinaryName\n\tif bin == \"\" {\n\t\tbin = \"caddy\"\n\t}\n\n\tlong := caddy.CustomLongDescription\n\tif long == \"\" {\n\t\tlong = `Caddy is an extensible server platform written in Go.\n\nAt its core, Caddy merely manages configuration. Modules are plugged\nin statically at compile-time to provide useful functionality. Caddy's\nstandard distribution includes common modules to serve HTTP, TLS,\nand PKI applications, including the automation of certificates.\n\nTo run Caddy, use:\n\n\t- 'caddy run' to run Caddy in the foreground (recommended).\n\t- 'caddy start' to start Caddy in the background; only do this\n\t  if you will be keeping the terminal window open until you run\n\t  'caddy stop' to close the server.\n\nWhen Caddy is started, it opens a locally-bound administrative socket\nto which configuration can be POSTed via a restful HTTP API (see\nhttps://caddyserver.com/docs/api).\n\nCaddy's native configuration format is JSON. However, config adapters\ncan be used to convert other config formats to JSON when Caddy receives\nits configuration. The Caddyfile is a built-in config adapter that is\npopular for hand-written configurations due to its straightforward\nsyntax (see https://caddyserver.com/docs/caddyfile). Many third-party\nadapters are available (see https://caddyserver.com/docs/config-adapters).\nUse 'caddy adapt' to see how a config translates to JSON.\n\nFor convenience, the CLI can act as an HTTP client to give Caddy its\ninitial configuration for you. If a file named Caddyfile is in the\ncurrent working directory, it will do this automatically. 
Otherwise,\nyou can use the --config flag to specify the path to a config file.\n\nSome special-purpose subcommands build and load a configuration file\nfor you directly from command line input; for example:\n\n\t- caddy file-server\n\t- caddy reverse-proxy\n\t- caddy respond\n\nThese commands disable the administration endpoint because their\nconfiguration is specified solely on the command line.\n\nIn general, the most common way to run Caddy is simply:\n\n\t$ caddy run\n\nOr, with a configuration file:\n\n\t$ caddy run --config caddy.json\n\nIf running interactively in a terminal, running Caddy in the\nbackground may be more convenient:\n\n\t$ caddy start\n\t...\n\t$ caddy stop\n\nThis allows you to run other commands while Caddy stays running.\nBe sure to stop Caddy before you close the terminal!\n\nDepending on the system, Caddy may need permission to bind to low\nports. One way to do this on Linux is to use setcap:\n\n\t$ sudo setcap cap_net_bind_service=+ep $(which caddy)\n\nRemember to run that command again after replacing the binary.\n\nSee the Caddy website for tutorials, configuration structure,\nsyntax, and module documentation: https://caddyserver.com/docs/\n\nCustom Caddy builds are available on the Caddy download page at:\nhttps://caddyserver.com/download\n\nThe xcaddy command can be used to build Caddy from source with or\nwithout additional plugins: https://github.com/caddyserver/xcaddy\n\nWhere possible, Caddy should be installed using officially-supported\npackage installers: https://caddyserver.com/docs/install\n\nInstructions for running Caddy in production are also available:\nhttps://caddyserver.com/docs/running\n`\n\t}\n\n\treturn &cobra.Command{\n\t\tUse:  bin,\n\t\tLong: long,\n\t\tExample: `  $ caddy run\n  $ caddy run --config caddy.json\n  $ caddy reload --config caddy.json\n  $ caddy stop`,\n\n\t\t// kind of annoying to have all the help text printed out if\n\t\t// caddy has an error provisioning its modules, for 
instance...\n\t\tSilenceUsage: true,\n\t\tVersion:      onlyVersionText(),\n\t}\n})\n\nconst fullDocsFooter = `Full documentation is available at:\nhttps://caddyserver.com/docs/command-line`\n\nfunc init() {\n\tdefaultFactory.Use(func(rootCmd *cobra.Command) {\n\t\trootCmd.SetVersionTemplate(\"{{.Version}}\\n\")\n\t\trootCmd.SetHelpTemplate(rootCmd.HelpTemplate() + \"\\n\" + fullDocsFooter + \"\\n\")\n\t})\n}\n\nfunc onlyVersionText() string {\n\t_, f := caddy.Version()\n\treturn f\n}\n\nfunc caddyCmdToCobra(caddyCmd Command) *cobra.Command {\n\tcmd := &cobra.Command{\n\t\tUse:   caddyCmd.Name + \" \" + caddyCmd.Usage,\n\t\tShort: caddyCmd.Short,\n\t\tLong:  caddyCmd.Long,\n\t}\n\tif caddyCmd.CobraFunc != nil {\n\t\tcaddyCmd.CobraFunc(cmd)\n\t} else {\n\t\tcmd.RunE = WrapCommandFuncForCobra(caddyCmd.Func)\n\t\tcmd.Flags().AddGoFlagSet(caddyCmd.Flags)\n\t}\n\treturn cmd\n}\n\n// WrapCommandFuncForCobra wraps a Caddy CommandFunc for use\n// in a cobra command's RunE field.\nfunc WrapCommandFuncForCobra(f CommandFunc) func(cmd *cobra.Command, _ []string) error {\n\treturn func(cmd *cobra.Command, _ []string) error {\n\t\tstatus, err := f(Flags{cmd.Flags()})\n\t\tif status > 1 {\n\t\t\tcmd.SilenceErrors = true\n\t\t\treturn &exitError{ExitCode: status, Err: err}\n\t\t}\n\t\treturn err\n\t}\n}\n\n// exitError carries the exit code from CommandFunc to Main()\ntype exitError struct {\n\tExitCode int\n\tErr      error\n}\n\nfunc (e *exitError) Error() string {\n\tif e.Err == nil {\n\t\treturn fmt.Sprintf(\"exiting with status %d\", e.ExitCode)\n\t}\n\treturn e.Err.Error()\n}\n"
  },
  {
    "path": "cmd/commandfactory.go",
    "content": "package caddycmd\n\nimport (\n\t\"github.com/spf13/cobra\"\n)\n\ntype rootCommandFactory struct {\n\tconstructor func() *cobra.Command\n\toptions     []func(*cobra.Command)\n}\n\nfunc newRootCommandFactory(fn func() *cobra.Command) *rootCommandFactory {\n\treturn &rootCommandFactory{\n\t\tconstructor: fn,\n\t}\n}\n\nfunc (f *rootCommandFactory) Use(fn func(cmd *cobra.Command)) {\n\tf.options = append(f.options, fn)\n}\n\nfunc (f *rootCommandFactory) Build() *cobra.Command {\n\to := f.constructor()\n\tfor _, v := range f.options {\n\t\tv(o)\n\t}\n\treturn o\n}\n"
  },
  {
    "path": "cmd/commandfuncs.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/rand\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"log\"\n\t\"maps\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"os/exec\"\n\t\"runtime\"\n\t\"runtime/debug\"\n\t\"strings\"\n\n\t\"github.com/aryann/difflib\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\nfunc cmdStart(fl Flags) (int, error) {\n\tconfigFlag := fl.String(\"config\")\n\tconfigAdapterFlag := fl.String(\"adapter\")\n\tpidfileFlag := fl.String(\"pidfile\")\n\twatchFlag := fl.Bool(\"watch\")\n\n\tvar err error\n\tvar envfileFlag []string\n\tenvfileFlag, err = fl.GetStringSlice(\"envfile\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"reading envfile flag: %v\", err)\n\t}\n\n\t// open a listener to which the child process will connect when\n\t// it is ready to confirm that it has successfully started\n\tln, err := net.Listen(\"tcp\", \"127.0.0.1:0\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"opening listener for success confirmation: %v\", err)\n\t}\n\tdefer ln.Close()\n\n\t// craft the command with a pingback address and 
with a\n\t// pipe for its stdin, so we can tell it our confirmation\n\t// code that we expect so that some random port scan at\n\t// the most unfortunate time won't fool us into thinking\n\t// the child succeeded (i.e. the alternative is to just\n\t// wait for any connection on our listener, but better to\n\t// ensure it's the process we're expecting - we can be\n\t// sure by giving it some random bytes and having it echo\n\t// them back to us)\n\tcmd := exec.Command(os.Args[0], \"run\", \"--pingback\", ln.Addr().String()) //nolint:gosec // no command injection that I can determine...\n\t// we should be able to run caddy in relative paths\n\tif errors.Is(cmd.Err, exec.ErrDot) {\n\t\tcmd.Err = nil\n\t}\n\tif configFlag != \"\" {\n\t\tcmd.Args = append(cmd.Args, \"--config\", configFlag)\n\t}\n\n\tfor _, envfile := range envfileFlag {\n\t\tcmd.Args = append(cmd.Args, \"--envfile\", envfile)\n\t}\n\tif configAdapterFlag != \"\" {\n\t\tcmd.Args = append(cmd.Args, \"--adapter\", configAdapterFlag)\n\t}\n\tif watchFlag {\n\t\tcmd.Args = append(cmd.Args, \"--watch\")\n\t}\n\tif pidfileFlag != \"\" {\n\t\tcmd.Args = append(cmd.Args, \"--pidfile\", pidfileFlag)\n\t}\n\tstdinPipe, err := cmd.StdinPipe()\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"creating stdin pipe: %v\", err)\n\t}\n\tcmd.Stdout = os.Stdout\n\tcmd.Stderr = os.Stderr\n\n\t// generate the random bytes we'll send to the child process\n\texpect := make([]byte, 32)\n\t_, err = rand.Read(expect)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"generating random confirmation bytes: %v\", err)\n\t}\n\n\t// begin writing the confirmation bytes to the child's\n\t// stdin; use a goroutine since the child hasn't been\n\t// started yet, and writing synchronously would result\n\t// in a deadlock\n\tgo func() {\n\t\t_, _ = stdinPipe.Write(expect)\n\t\tstdinPipe.Close()\n\t}()\n\n\t// start the process\n\terr = cmd.Start()\n\tif err != nil {\n\t\treturn 
caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"starting caddy process: %v\", err)\n\t}\n\n\t// there are two ways we know we're done: either\n\t// the process will connect to our listener, or\n\t// it will exit with an error\n\tsuccess, exit := make(chan struct{}), make(chan error)\n\n\t// in one goroutine, we await the success of the child process\n\tgo func() {\n\t\tfor {\n\t\t\tconn, err := ln.Accept()\n\t\t\tif err != nil {\n\t\t\t\tif !errors.Is(err, net.ErrClosed) {\n\t\t\t\t\tlog.Println(err)\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t\terr = handlePingbackConn(conn, expect)\n\t\t\tif err == nil {\n\t\t\t\tclose(success)\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tlog.Println(err)\n\t\t}\n\t}()\n\n\t// in another goroutine, we await the failure of the child process\n\tgo func() {\n\t\terr := cmd.Wait() // don't send on this line! Wait blocks, but send starts before it unblocks\n\t\texit <- err       // sending on separate line ensures select won't trigger until after Wait unblocks\n\t}()\n\n\t// when one of the goroutines unblocks, we're done and can exit\n\tselect {\n\tcase <-success:\n\t\tfmt.Printf(\"Successfully started Caddy (pid=%d) - Caddy is running in the background\\n\", cmd.Process.Pid)\n\tcase err := <-exit:\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"caddy process exited with error: %v\", err)\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdRun(fl Flags) (int, error) {\n\tcaddy.TrapSignals()\n\n\t// set up buffered logging for early startup\n\t// so that we can hold onto logs until after\n\t// the config is loaded (or fails to load)\n\t// so that we can write the logs to the user's\n\t// configured output. 
we must be sure to flush\n\t// on any error before the config is loaded.\n\tlogger, defaultLogger, logBuffer := caddy.BufferedLog()\n\n\tundoMaxProcs := setResourceLimits(logger)\n\tdefer undoMaxProcs()\n\t// release the local reference to the undo function so it can be GC'd;\n\t// the deferred call above has already captured the actual function value.\n\tundoMaxProcs = nil //nolint:ineffassign,wastedassign\n\n\tconfigFlag := fl.String(\"config\")\n\tconfigAdapterFlag := fl.String(\"adapter\")\n\tresumeFlag := fl.Bool(\"resume\")\n\tprintEnvFlag := fl.Bool(\"environ\")\n\twatchFlag := fl.Bool(\"watch\")\n\tpidfileFlag := fl.String(\"pidfile\")\n\tpingbackFlag := fl.String(\"pingback\")\n\n\t// load all additional envs as soon as possible\n\terr := handleEnvFileFlag(fl)\n\tif err != nil {\n\t\tlogBuffer.FlushTo(defaultLogger)\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// if we are supposed to print the environment, do that first\n\tif printEnvFlag {\n\t\tprintEnvironment()\n\t}\n\n\t// load the config, depending on flags\n\tvar config []byte\n\tif resumeFlag {\n\t\tconfig, err = os.ReadFile(caddy.ConfigAutosavePath)\n\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\t// not a bad error; just can't resume if autosave file doesn't exist\n\t\t\tlogger.Info(\"no autosave file exists\", zap.String(\"autosave_file\", caddy.ConfigAutosavePath))\n\t\t\tresumeFlag = false\n\t\t} else if err != nil {\n\t\t\tlogBuffer.FlushTo(defaultLogger)\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t} else {\n\t\t\tif configFlag == \"\" {\n\t\t\t\tlogger.Info(\"resuming from last configuration\",\n\t\t\t\t\tzap.String(\"autosave_file\", caddy.ConfigAutosavePath))\n\t\t\t} else {\n\t\t\t\t// if they also specified a config file, user should be aware that we're not\n\t\t\t\t// using it (doing so could lead to data/config loss by overwriting!)\n\t\t\t\tlogger.Warn(\"--config and --resume flags were used together; ignoring --config and resuming from last 
configuration\",\n\t\t\t\t\tzap.String(\"autosave_file\", caddy.ConfigAutosavePath))\n\t\t\t}\n\t\t}\n\t}\n\t// we don't use 'else' here since this value might have been changed in 'if' block; i.e. not mutually exclusive\n\tvar configFile string\n\tvar adapterUsed string\n\tif !resumeFlag {\n\t\tconfig, configFile, adapterUsed, err = LoadConfig(configFlag, configAdapterFlag)\n\t\tif err != nil {\n\t\t\tlogBuffer.FlushTo(defaultLogger)\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t}\n\n\t// create pidfile now, in case loading config takes a while (issue #5477)\n\tif pidfileFlag != \"\" {\n\t\terr := caddy.PIDFile(pidfileFlag)\n\t\tif err != nil {\n\t\t\tlogger.Error(\"unable to write PID file\",\n\t\t\t\tzap.String(\"pidfile\", pidfileFlag),\n\t\t\t\tzap.Error(err))\n\t\t}\n\t}\n\n\t// If we have a source config file (we're running via 'caddy run --config ...'),\n\t// record it so SIGUSR1 can reload from the same file. Also provide a callback\n\t// that knows how to load/adapt that source when requested by the main process.\n\tif configFile != \"\" {\n\t\tcaddy.SetLastConfig(configFile, adapterUsed, func(file, adapter string) error {\n\t\t\tcfg, _, _, err := LoadConfig(file, adapter)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn caddy.Load(cfg, true)\n\t\t})\n\t}\n\n\t// run the initial config\n\terr = caddy.Load(config, true)\n\tif err != nil {\n\t\tlogBuffer.FlushTo(defaultLogger)\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"loading initial config: %v\", err)\n\t}\n\t// release the reference to the config so it can be GC'd\n\tconfig = nil //nolint:ineffassign,wastedassign\n\n\t// at this stage the config will have replaced the\n\t// default logger to the configured one, so we can\n\t// log normally, now that the config is running.\n\t// also clear our ref to the buffer so it can get GC'd\n\tlogger = caddy.Log()\n\tdefaultLogger = nil //nolint:ineffassign,wastedassign\n\tlogBuffer = nil     
//nolint:wastedassign,ineffassign\n\tlogger.Info(\"serving initial configuration\")\n\n\t// if we are to report to another process the successful start\n\t// of the server, do so now by echoing back contents of stdin\n\tif pingbackFlag != \"\" {\n\t\tconfirmationBytes, err := io.ReadAll(os.Stdin)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\t\tfmt.Errorf(\"reading confirmation bytes from stdin: %v\", err)\n\t\t}\n\t\tconn, err := net.Dial(\"tcp\", pingbackFlag)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\t\tfmt.Errorf(\"dialing confirmation address: %v\", err)\n\t\t}\n\t\t_, err = conn.Write(confirmationBytes)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\t\tfmt.Errorf(\"writing confirmation bytes to %s: %v\", pingbackFlag, err)\n\t\t}\n\t\t// close (non-defer because we `select {}` below)\n\t\t// and release references so they can be GC'd\n\t\tconn.Close()\n\t\tconfirmationBytes = nil //nolint:ineffassign,wastedassign\n\t\tconn = nil              //nolint:wastedassign,ineffassign\n\t}\n\n\t// if enabled, reload config file automatically on changes\n\t// (this better only be used in dev!)\n\tif watchFlag {\n\t\tgo watchConfigFile(configFile, adapterUsed)\n\t}\n\n\t// warn if the environment does not provide enough information about the disk\n\thasXDG := os.Getenv(\"XDG_DATA_HOME\") != \"\" &&\n\t\tos.Getenv(\"XDG_CONFIG_HOME\") != \"\" &&\n\t\tos.Getenv(\"XDG_CACHE_HOME\") != \"\"\n\tswitch runtime.GOOS {\n\tcase \"windows\":\n\t\tif os.Getenv(\"HOME\") == \"\" && os.Getenv(\"USERPROFILE\") == \"\" && !hasXDG {\n\t\t\tlogger.Warn(\"neither HOME nor USERPROFILE environment variables are set - please fix; some assets might be stored in ./caddy\")\n\t\t}\n\tcase \"plan9\":\n\t\tif os.Getenv(\"home\") == \"\" && !hasXDG {\n\t\t\tlogger.Warn(\"$home environment variable is empty - please fix; some assets might be stored in ./caddy\")\n\t\t}\n\tdefault:\n\t\tif os.Getenv(\"HOME\") == \"\" && 
!hasXDG {\n\t\t\tlogger.Warn(\"$HOME environment variable is empty - please fix; some assets might be stored in ./caddy\")\n\t\t}\n\t}\n\n\t// release the last local logger reference\n\tlogger = nil //nolint:wastedassign,ineffassign\n\n\tselect {}\n}\n\nfunc cmdStop(fl Flags) (int, error) {\n\taddressFlag := fl.String(\"address\")\n\tconfigFlag := fl.String(\"config\")\n\tconfigAdapterFlag := fl.String(\"adapter\")\n\n\tadminAddr, err := DetermineAdminAPIAddress(addressFlag, nil, configFlag, configAdapterFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"couldn't determine admin API address: %v\", err)\n\t}\n\n\tresp, err := AdminAPIRequest(adminAddr, http.MethodPost, \"/stop\", nil, nil)\n\tif err != nil {\n\t\tcaddy.Log().Warn(\"failed using API to stop instance\", zap.Error(err))\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\tdefer resp.Body.Close()\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdReload(fl Flags) (int, error) {\n\tconfigFlag := fl.String(\"config\")\n\tconfigAdapterFlag := fl.String(\"adapter\")\n\taddressFlag := fl.String(\"address\")\n\tforceFlag := fl.Bool(\"force\")\n\n\t// get the config in caddy's native format\n\tconfig, configFile, adapterUsed, err := LoadConfig(configFlag, configAdapterFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\tif configFile == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"no config file to load\")\n\t}\n\n\tadminAddr, err := DetermineAdminAPIAddress(addressFlag, config, configFile, configAdapterFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"couldn't determine admin API address: %v\", err)\n\t}\n\n\t// optionally force a config reload\n\theaders := make(http.Header)\n\tif forceFlag {\n\t\theaders.Set(\"Cache-Control\", \"must-revalidate\")\n\t}\n\t// Provide the source file/adapter to the running process so it can\n\t// preserve its last-config knowledge if this reload came from the same 
source.\n\theaders.Set(\"Caddy-Config-Source-File\", configFile)\n\theaders.Set(\"Caddy-Config-Source-Adapter\", adapterUsed)\n\n\tresp, err := AdminAPIRequest(adminAddr, http.MethodPost, \"/load\", headers, bytes.NewReader(config))\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"sending configuration to instance: %v\", err)\n\t}\n\tdefer resp.Body.Close()\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdVersion(_ Flags) (int, error) {\n\t_, full := caddy.Version()\n\tfmt.Println(full)\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdBuildInfo(_ Flags) (int, error) {\n\tbi, ok := debug.ReadBuildInfo()\n\tif !ok {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"no build information\")\n\t}\n\tfmt.Println(bi)\n\treturn caddy.ExitCodeSuccess, nil\n}\n\n// jsonModuleInfo holds metadata about a Caddy module for JSON output.\ntype jsonModuleInfo struct {\n\tModuleName string `json:\"module_name\"`\n\tModuleType string `json:\"module_type\"`\n\tVersion    string `json:\"version,omitempty\"`\n\tPackageURL string `json:\"package_url,omitempty\"`\n}\n\nfunc cmdListModules(fl Flags) (int, error) {\n\tpackages := fl.Bool(\"packages\")\n\tversions := fl.Bool(\"versions\")\n\tskipStandard := fl.Bool(\"skip-standard\")\n\tjsonOutput := fl.Bool(\"json\")\n\n\t// Organize modules by whether they come with the standard distribution\n\tstandard, nonstandard, unknown, err := getModules()\n\tif err != nil {\n\t\t// If module info can't be fetched, just print the IDs and exit\n\t\tfor _, m := range caddy.Modules() {\n\t\t\tfmt.Println(m)\n\t\t}\n\t\treturn caddy.ExitCodeSuccess, nil\n\t}\n\n\t// Logic for JSON output\n\tif jsonOutput {\n\t\toutput := []jsonModuleInfo{}\n\n\t\t// addToOutput is a helper to convert internal module info to the JSON-serializable struct\n\t\taddToOutput := func(list []moduleInfo, moduleType string) {\n\t\t\tfor _, mi := range list {\n\t\t\t\titem := jsonModuleInfo{\n\t\t\t\t\tModuleName: 
mi.caddyModuleID,\n\t\t\t\t\tModuleType: moduleType, // Mapping the type here\n\t\t\t\t}\n\t\t\t\tif mi.goModule != nil {\n\t\t\t\t\titem.Version = mi.goModule.Version\n\t\t\t\t\titem.PackageURL = mi.goModule.Path\n\t\t\t\t}\n\t\t\t\toutput = append(output, item)\n\t\t\t}\n\t\t}\n\n\t\t// Pass the respective type for each category\n\t\tif !skipStandard {\n\t\t\taddToOutput(standard, \"standard\")\n\t\t}\n\t\taddToOutput(nonstandard, \"non-standard\")\n\t\taddToOutput(unknown, \"unknown\")\n\n\t\tjsonBytes, err := json.MarshalIndent(output, \"\", \"  \")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedQuit, err\n\t\t}\n\t\tfmt.Println(string(jsonBytes))\n\t\treturn caddy.ExitCodeSuccess, nil\n\t}\n\n\t// Logic for Text output (Fallback)\n\tprintModuleInfo := func(mi moduleInfo) {\n\t\tfmt.Print(mi.caddyModuleID)\n\t\tif versions && mi.goModule != nil {\n\t\t\tfmt.Print(\" \" + mi.goModule.Version)\n\t\t}\n\t\tif packages && mi.goModule != nil {\n\t\t\tfmt.Print(\" \" + mi.goModule.Path)\n\t\t\tif mi.goModule.Replace != nil {\n\t\t\t\tfmt.Print(\" => \" + mi.goModule.Replace.Path)\n\t\t\t}\n\t\t}\n\t\tif mi.err != nil {\n\t\t\tfmt.Printf(\" [%v]\", mi.err)\n\t\t}\n\t\tfmt.Println()\n\t}\n\n\t// Standard modules (always shipped with Caddy)\n\tif !skipStandard {\n\t\tif len(standard) > 0 {\n\t\t\tfor _, mod := range standard {\n\t\t\t\tprintModuleInfo(mod)\n\t\t\t}\n\t\t}\n\t\tfmt.Printf(\"\\n  Standard modules: %d\\n\", len(standard))\n\t}\n\n\t// Non-standard modules (third party plugins)\n\tif len(nonstandard) > 0 {\n\t\tif len(standard) > 0 && !skipStandard {\n\t\t\tfmt.Println()\n\t\t}\n\t\tfor _, mod := range nonstandard {\n\t\t\tprintModuleInfo(mod)\n\t\t}\n\t\tfmt.Printf(\"\\n  Non-standard modules: %d\\n\", len(nonstandard))\n\t}\n\n\t// Unknown modules (couldn't get Caddy module info)\n\tif len(unknown) > 0 {\n\t\tif (len(standard) > 0 && !skipStandard) || len(nonstandard) > 0 {\n\t\t\tfmt.Println()\n\t\t}\n\t\tfor _, mod := range unknown 
{\n\t\t\tprintModuleInfo(mod)\n\t\t}\n\t\tfmt.Printf(\"\\n  Unknown modules: %d\\n\", len(unknown))\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdEnviron(fl Flags) (int, error) {\n\t// load all additional envs as soon as possible\n\terr := handleEnvFileFlag(fl)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tprintEnvironment()\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdAdaptConfig(fl Flags) (int, error) {\n\tconfigFlag := fl.String(\"config\")\n\tadapterFlag := fl.String(\"adapter\")\n\tprettyFlag := fl.Bool(\"pretty\")\n\tvalidateFlag := fl.Bool(\"validate\")\n\n\tvar err error\n\tconfigFlag, err = configFileWithRespectToDefault(caddy.Log(), configFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\tif configFlag == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"input file required when there is no Caddyfile in current directory (use --config flag)\")\n\t}\n\n\t// load all additional envs as soon as possible\n\terr = handleEnvFileFlag(fl)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tif adapterFlag == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"adapter name is required (use --adapt flag or leave unspecified for default)\")\n\t}\n\n\tcfgAdapter := caddyconfig.GetAdapter(adapterFlag)\n\tif cfgAdapter == nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"unrecognized config adapter: %s\", adapterFlag)\n\t}\n\n\tvar input []byte\n\t// read from stdin if the file name is \"-\"\n\tif configFlag == \"-\" {\n\t\tinput, err = io.ReadAll(os.Stdin)\n\t} else {\n\t\tinput, err = os.ReadFile(configFlag)\n\t}\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"reading input file: %v\", err)\n\t}\n\n\topts := map[string]any{\"filename\": configFlag}\n\n\tadaptedConfig, warnings, err := cfgAdapter.Adapt(input, opts)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tif 
prettyFlag {\n\t\tvar prettyBuf bytes.Buffer\n\t\terr = json.Indent(&prettyBuf, adaptedConfig, \"\", \"\\t\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t\tadaptedConfig = prettyBuf.Bytes()\n\t}\n\n\t// print result to stdout\n\tfmt.Println(string(adaptedConfig))\n\n\t// print warnings to stderr\n\tfor _, warn := range warnings {\n\t\tmsg := warn.Message\n\t\tif warn.Directive != \"\" {\n\t\t\tmsg = fmt.Sprintf(\"%s: %s\", warn.Directive, warn.Message)\n\t\t}\n\t\tcaddy.Log().Named(adapterFlag).Warn(msg,\n\t\t\tzap.String(\"file\", warn.File),\n\t\t\tzap.Int(\"line\", warn.Line))\n\t}\n\n\t// validate output if requested\n\tif validateFlag {\n\t\tvar cfg *caddy.Config\n\t\terr = caddy.StrictUnmarshalJSON(adaptedConfig, &cfg)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"decoding config: %v\", err)\n\t\t}\n\t\terr = caddy.Validate(cfg)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"validation: %v\", err)\n\t\t}\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdValidateConfig(fl Flags) (int, error) {\n\tconfigFlag := fl.String(\"config\")\n\tadapterFlag := fl.String(\"adapter\")\n\n\t// load all additional envs as soon as possible\n\terr := handleEnvFileFlag(fl)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// use default config and ensure a config file is specified\n\tconfigFlag, err = configFileWithRespectToDefault(caddy.Log(), configFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\tif configFlag == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"input file required when there is no Caddyfile in current directory (use --config flag)\")\n\t}\n\n\tinput, _, _, err := LoadConfig(configFlag, adapterFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\tinput = caddy.RemoveMetaFields(input)\n\n\tvar cfg *caddy.Config\n\terr = caddy.StrictUnmarshalJSON(input, &cfg)\n\tif 
err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"decoding config: %v\", err)\n\t}\n\n\terr = caddy.Validate(cfg)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tfmt.Println(\"Valid configuration\")\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdFmt(fl Flags) (int, error) {\n\tconfigFile := fl.Arg(0)\n\tconfigFlag := fl.String(\"config\")\n\tif (len(fl.Args()) > 1) || (configFlag != \"\" && configFile != \"\") {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"fmt does not support multiple files %s %s\", configFlag, strings.Join(fl.Args(), \" \"))\n\t}\n\tif configFile == \"\" && configFlag == \"\" {\n\t\tconfigFile = \"Caddyfile\"\n\t} else if configFile == \"\" {\n\t\tconfigFile = configFlag\n\t}\n\t// as a special case, read from stdin if the file name is \"-\"\n\tif configFile == \"-\" {\n\t\tinput, err := io.ReadAll(os.Stdin)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\t\tfmt.Errorf(\"reading stdin: %v\", err)\n\t\t}\n\t\tfmt.Print(string(caddyfile.Format(input)))\n\t\treturn caddy.ExitCodeSuccess, nil\n\t}\n\n\tinput, err := os.ReadFile(configFile)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup,\n\t\t\tfmt.Errorf(\"reading input file: %v\", err)\n\t}\n\n\toutput := caddyfile.Format(input)\n\n\tif fl.Bool(\"overwrite\") {\n\t\tif err := os.WriteFile(configFile, output, 0o600); err != nil { //nolint:gosec // path traversal is not really a thing here, this is either \"Caddyfile\" or admin-controlled\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"overwriting formatted file: %v\", err)\n\t\t}\n\t\treturn caddy.ExitCodeSuccess, nil\n\t}\n\n\tif fl.Bool(\"diff\") {\n\t\tdiff := difflib.Diff(\n\t\t\tstrings.Split(string(input), \"\\n\"),\n\t\t\tstrings.Split(string(output), \"\\n\"))\n\t\tfor _, d := range diff {\n\t\t\tswitch d.Delta {\n\t\t\tcase difflib.Common:\n\t\t\t\tfmt.Printf(\"  %s\\n\", d.Payload)\n\t\t\tcase difflib.LeftOnly:\n\t\t\t\tfmt.Printf(\"- 
%s\\n\", d.Payload)\n\t\t\tcase difflib.RightOnly:\n\t\t\t\tfmt.Printf(\"+ %s\\n\", d.Payload)\n\t\t\t}\n\t\t}\n\t} else {\n\t\tfmt.Print(string(output))\n\t}\n\n\tif warning, diff := caddyfile.FormattingDifference(configFile, input); diff {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(`%s:%d: Caddyfile input is not formatted; Tip: use '--overwrite' to update your Caddyfile in-place instead of previewing it. Consult '--help' for more options`,\n\t\t\twarning.File,\n\t\t\twarning.Line,\n\t\t)\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\n// handleEnvFileFlag loads the environment variables from the given --envfile\n// flag if specified. This should be called as early in the command function.\nfunc handleEnvFileFlag(fl Flags) error {\n\tvar err error\n\tvar envfileFlag []string\n\tenvfileFlag, err = fl.GetStringSlice(\"envfile\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"reading envfile flag: %v\", err)\n\t}\n\n\tfor _, envfile := range envfileFlag {\n\t\tif err := loadEnvFromFile(envfile); err != nil {\n\t\t\treturn fmt.Errorf(\"loading additional environment variables: %v\", err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// AdminAPIRequest makes an API request according to the CLI flags given,\n// with the given HTTP method and request URI. If body is non-nil, it will\n// be assumed to be Content-Type application/json. The caller should close\n// the response body. 
Should only be used by Caddy CLI commands which\n// need to interact with a running instance of Caddy via the admin API.\nfunc AdminAPIRequest(adminAddr, method, uri string, headers http.Header, body io.Reader) (*http.Response, error) {\n\tparsedAddr, err := caddy.ParseNetworkAddress(adminAddr)\n\tif err != nil || parsedAddr.PortRangeSize() > 1 {\n\t\treturn nil, fmt.Errorf(\"invalid admin address %s: %v\", adminAddr, err)\n\t}\n\torigin := \"http://\" + parsedAddr.JoinHostPort(0)\n\tif parsedAddr.IsUnixNetwork() {\n\t\torigin = \"http://127.0.0.1\" // bogus host is a hack so that http.NewRequest() is happy\n\n\t\t// the unix address at this point might still contain the optional\n\t\t// unix socket permissions, which are part of the address/host.\n\t\t// those need to be removed first, as they aren't part of the\n\t\t// resulting unix file path\n\t\taddr, _, err := internal.SplitUnixSocketPermissionsBits(parsedAddr.Host)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tparsedAddr.Host = addr\n\t} else if parsedAddr.IsFdNetwork() {\n\t\torigin = \"http://127.0.0.1\"\n\t}\n\n\t// form the request\n\treq, err := http.NewRequest(method, origin+uri, body)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"making request: %v\", err)\n\t}\n\tif parsedAddr.IsUnixNetwork() || parsedAddr.IsFdNetwork() {\n\t\t// We used to conform to RFC 2616 Section 14.26 which requires\n\t\t// an empty host header when there is no host, as is the case\n\t\t// with unix sockets and socket fds. However, Go required a\n\t\t// Host value so we used a hack of a space character as the host\n\t\t// (it would see the Host was non-empty, then trim the space later).\n\t\t// As of Go 1.20.6 (July 2023), this hack no longer works. See:\n\t\t// https://github.com/golang/go/issues/60374\n\t\t// See also the discussion here:\n\t\t// https://github.com/golang/go/issues/61431\n\t\t//\n\t\t// After that, we now require a Host value of either 127.0.0.1\n\t\t// or ::1 if one is set. 
Above I choose to use 127.0.0.1. Even\n\t\t// though the value should be completely irrelevant (it could be\n\t\t// \"srldkjfsd\"), if for some reason the Host *is* used, at least\n\t\t// we can have some reasonable assurance it will stay on the local\n\t\t// machine and that browsers, if they ever allow access to unix\n\t\t// sockets, can still enforce CORS, ensuring it is still coming\n\t\t// from the local machine.\n\t} else {\n\t\treq.Header.Set(\"Origin\", origin)\n\t}\n\tif body != nil {\n\t\treq.Header.Set(\"Content-Type\", \"application/json\")\n\t}\n\tmaps.Copy(req.Header, headers)\n\n\t// make an HTTP client that dials our network type, since admin\n\t// endpoints aren't always TCP, which is what the default transport\n\t// expects; reuse is not of particular concern here\n\tclient := http.Client{\n\t\tTransport: &http.Transport{\n\t\t\tDialContext: func(_ context.Context, _, _ string) (net.Conn, error) {\n\t\t\t\treturn net.Dial(parsedAddr.Network, parsedAddr.JoinHostPort(0))\n\t\t\t},\n\t\t},\n\t}\n\n\tresp, err := client.Do(req) //nolint:gosec // the only SSRF here would be self-sabotage I think\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"performing request: %v\", err)\n\t}\n\n\t// if it didn't work, let the user know\n\tif resp.StatusCode >= 400 {\n\t\trespBody, err := io.ReadAll(io.LimitReader(resp.Body, 1024*1024*2))\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"HTTP %d: reading error message: %v\", resp.StatusCode, err)\n\t\t}\n\t\treturn nil, fmt.Errorf(\"caddy responded with error: HTTP %d: %s\", resp.StatusCode, respBody)\n\t}\n\n\treturn resp, nil\n}\n\n// DetermineAdminAPIAddress determines which admin API endpoint address should\n// be used based on the inputs. 
By priority: if `address` is specified, then\n// it is returned; if `config` is specified, then that config will be used for\n// finding the admin address; if `configFile` (and `configAdapter`) are specified,\n// then that config will be loaded to find the admin address; otherwise, the\n// default admin listen address will be returned.\nfunc DetermineAdminAPIAddress(address string, config []byte, configFile, configAdapter string) (string, error) {\n\t// Prefer the address if specified and non-empty\n\tif address != \"\" {\n\t\treturn address, nil\n\t}\n\n\t// Try to load the config from file if specified, with the given adapter name\n\tif configFile != \"\" {\n\t\tvar loadedConfigFile string\n\t\tvar err error\n\n\t\t// use the provided loaded config if non-empty\n\t\t// otherwise, load it from the specified file/adapter\n\t\tloadedConfig := config\n\t\tif len(loadedConfig) == 0 {\n\t\t\t// get the config in caddy's native format\n\t\t\tloadedConfig, loadedConfigFile, _, err = LoadConfig(configFile, configAdapter)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t\tif loadedConfigFile == \"\" {\n\t\t\t\treturn \"\", fmt.Errorf(\"no config file to load; either use --config flag or ensure Caddyfile exists in current directory\")\n\t\t\t}\n\t\t}\n\n\t\t// get the address of the admin listener from the config\n\t\tif len(loadedConfig) > 0 {\n\t\t\tvar tmpStruct struct {\n\t\t\t\tAdmin caddy.AdminConfig `json:\"admin\"`\n\t\t\t}\n\t\t\terr := json.Unmarshal(loadedConfig, &tmpStruct)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", fmt.Errorf(\"unmarshaling admin listener address from config: %v\", err)\n\t\t\t}\n\t\t\tif tmpStruct.Admin.Listen != \"\" {\n\t\t\t\treturn tmpStruct.Admin.Listen, nil\n\t\t\t}\n\t\t}\n\t}\n\n\t// Fallback to the default listen address otherwise\n\treturn caddy.DefaultAdminListen, nil\n}\n\n// configFileWithRespectToDefault returns the filename to use for loading the config, based\n// on whether a config file is already specified and 
a supported default config file exists.\nfunc configFileWithRespectToDefault(logger *zap.Logger, configFile string) (string, error) {\n\tconst defaultCaddyfile = \"Caddyfile\"\n\n\t// if no input file was specified, try a default Caddyfile if the Caddyfile adapter is plugged in\n\tif configFile == \"\" && caddyconfig.GetAdapter(\"caddyfile\") != nil {\n\t\t_, err := os.Stat(defaultCaddyfile)\n\t\tif err == nil {\n\t\t\t// default Caddyfile exists\n\t\t\tif logger != nil {\n\t\t\t\tlogger.Info(\"using adjacent Caddyfile\")\n\t\t\t}\n\t\t\treturn defaultCaddyfile, nil\n\t\t}\n\t\tif !errors.Is(err, fs.ErrNotExist) {\n\t\t\t// problem checking\n\t\t\treturn configFile, fmt.Errorf(\"checking if default Caddyfile exists: %v\", err)\n\t\t}\n\t}\n\n\t// default config file does not exist or is irrelevant\n\treturn configFile, nil\n}\n\ntype moduleInfo struct {\n\tcaddyModuleID string\n\tgoModule      *debug.Module\n\terr           error\n}\n"
  },
  {
    "path": "cmd/commands.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t\"flag\"\n\t\"fmt\"\n\t\"os\"\n\t\"regexp\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/cobra/doc\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// Command represents a subcommand. Name, Func,\n// and Short are required.\ntype Command struct {\n\t// The name of the subcommand. Must conform to the\n\t// format described by the RegisterCommand() godoc.\n\t// Required.\n\tName string\n\n\t// Usage is a brief message describing the syntax of\n\t// the subcommand's flags and args. Use [] to indicate\n\t// optional parameters and <> to enclose literal values\n\t// intended to be replaced by the user. Do not prefix\n\t// the string with \"caddy\" or the name of the command\n\t// since these will be prepended for you; only include\n\t// the actual parameters for this command.\n\tUsage string\n\n\t// Short is a one-line message explaining what the\n\t// command does. Should not end with punctuation.\n\t// Required.\n\tShort string\n\n\t// Long is the full help text shown to the user.\n\t// Will be trimmed of whitespace on both ends before\n\t// being printed.\n\tLong string\n\n\t// Flags is the flagset for command.\n\t// This is ignored if CobraFunc is set.\n\tFlags *flag.FlagSet\n\n\t// Func is a function that executes a subcommand using\n\t// the parsed flags. 
It returns an exit code and any\n\t// associated error.\n\t// Required if CobraFunc is not set.\n\tFunc CommandFunc\n\n\t// CobraFunc allows further configuration of the command\n\t// via cobra's APIs. If this is set, then Func and Flags\n\t// are ignored, with the assumption that they are set in\n\t// this function. A caddycmd.WrapCommandFuncForCobra helper\n\t// exists to simplify porting CommandFunc to Cobra's RunE.\n\tCobraFunc func(*cobra.Command)\n}\n\n// CommandFunc is a command's function. It runs the\n// command and returns the proper exit code along with\n// any error that occurred.\ntype CommandFunc func(Flags) (int, error)\n\n// Commands returns the map of commands initialised by\n// RegisterCommand.\nfunc Commands() map[string]Command {\n\tcommandsMu.RLock()\n\tdefer commandsMu.RUnlock()\n\n\treturn commands\n}\n\nvar (\n\tcommandsMu sync.RWMutex\n\tcommands   = make(map[string]Command)\n)\n\nfunc init() {\n\tRegisterCommand(Command{\n\t\tName:  \"start\",\n\t\tUsage: \"[--config <path> [--adapter <name>]] [--envfile <path>] [--watch] [--pidfile <file>]\",\n\t\tShort: \"Starts the Caddy process in the background and then returns\",\n\t\tLong: `\nStarts the Caddy process, optionally bootstrapped with an initial config file.\nThis command unblocks after the server starts running or fails to run.\n\nIf --envfile is specified, an environment file with environment variables\nin the KEY=VALUE format will be loaded into the Caddy process.\n\nOn Windows, the spawned child process will remain attached to the terminal, so\nclosing the window will forcefully stop Caddy; to avoid forgetting this, try\nusing 'caddy run' instead to keep it in the foreground.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter to apply\")\n\t\t\tcmd.Flags().StringSliceP(\"envfile\", \"\", []string{}, \"Environment file(s) to 
load\")\n\t\t\tcmd.Flags().BoolP(\"watch\", \"w\", false, \"Reload changed config file automatically\")\n\t\t\tcmd.Flags().StringP(\"pidfile\", \"\", \"\", \"Path of file to which to write process ID\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdStart)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"run\",\n\t\tUsage: \"[--config <path> [--adapter <name>]] [--envfile <path>] [--environ] [--resume] [--watch] [--pidfile <file>]\",\n\t\tShort: `Starts the Caddy process and blocks indefinitely`,\n\t\tLong: `\nStarts the Caddy process, optionally bootstrapped with an initial config file,\nand blocks indefinitely until the server is stopped; i.e. runs Caddy in\n\"daemon\" mode (foreground).\n\nIf a config file is specified, it will be applied immediately after the process\nis running. If the config file is not in Caddy's native JSON format, you can\nspecify an adapter with --adapter to adapt the given config file to\nCaddy's native format. The config adapter must be a registered module. Any\nwarnings will be printed to the log, but beware that any adaptation without\nerrors will immediately be used. If you want to review the results of the\nadaptation first, use the 'adapt' subcommand.\n\nAs a special case, if the current working directory has a file called\n\"Caddyfile\" and the caddyfile config adapter is plugged in (default), then\nthat file will be loaded and used to configure Caddy, even without any command\nline flags.\n\nIf --envfile is specified, an environment file with environment variables\nin the KEY=VALUE format will be loaded into the Caddy process.\n\nIf --environ is specified, the environment as seen by the Caddy process will\nbe printed before starting. This is the same as the environ command but does\nnot quit after printing, and can be useful for troubleshooting.\n\nThe --resume flag will override the --config flag if there is a config auto-\nsave file. 
It is not an error if --resume is used and no autosave file exists.\n\nIf --watch is specified, the config file will be loaded automatically after\nchanges. ⚠️ This can make unintentional config changes easier; only use this\noption in a local development environment.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter to apply\")\n\t\t\tcmd.Flags().StringSliceP(\"envfile\", \"\", []string{}, \"Environment file(s) to load\")\n\t\t\tcmd.Flags().BoolP(\"environ\", \"e\", false, \"Print environment\")\n\t\t\tcmd.Flags().BoolP(\"resume\", \"r\", false, \"Use saved config, if any (and prefer over --config file)\")\n\t\t\tcmd.Flags().BoolP(\"watch\", \"w\", false, \"Watch config file for changes and reload it automatically\")\n\t\t\tcmd.Flags().StringP(\"pidfile\", \"\", \"\", \"Path of file to which to write process ID\")\n\t\t\tcmd.Flags().StringP(\"pingback\", \"\", \"\", \"Echo confirmation bytes to this address on success\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdRun)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"stop\",\n\t\tUsage: \"[--config <path> [--adapter <name>]] [--address <interface>]\",\n\t\tShort: \"Gracefully stops a started Caddy process\",\n\t\tLong: `\nStops the background Caddy process as gracefully as possible.\n\nIt requires that the admin API is enabled and accessible, since it will\nuse the API's /stop endpoint. 
The address of this request can be customized\nusing the --address flag, or from the given --config, if not the default.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file to use to parse the admin address, if --address is not used\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter to apply (when --config is used)\")\n\t\t\tcmd.Flags().StringP(\"address\", \"\", \"\", \"The address to use to reach the admin API endpoint, if not the default\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdStop)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"reload\",\n\t\tUsage: \"--config <path> [--adapter <name>] [--address <interface>]\",\n\t\tShort: \"Changes the config of the running Caddy instance\",\n\t\tLong: `\nGives the running Caddy instance a new configuration. This has the same effect\nas POSTing a document to the /load API endpoint, but is convenient for simple\nworkflows revolving around config files.\n\nSince the admin endpoint is configurable, the endpoint configuration is loaded\nfrom the --address flag if specified; otherwise it is loaded from the given\nconfig file; otherwise the default is assumed.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file (required)\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter to apply\")\n\t\t\tcmd.Flags().StringP(\"address\", \"\", \"\", \"Address of the administration listener, if different from config\")\n\t\t\tcmd.Flags().BoolP(\"force\", \"f\", false, \"Force config reload, even if it is the same\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdReload)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"version\",\n\t\tShort: \"Prints the version\",\n\t\tLong: `\nPrints the version of this Caddy binary.\n\nVersion information must be embedded into the binary at compile-time in\norder for Caddy to display anything 
useful with this command. If Caddy\nis built from within a version control repository, the Go command will\nembed the revision hash if available. However, if Caddy is built in the\nway specified by our online documentation (or by using xcaddy), more\ndetailed version information is printed as given by Go modules.\n\nFor more details about the full version string, see the Go module\ndocumentation: https://go.dev/doc/modules/version-numbers\n`,\n\t\tFunc: cmdVersion,\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"list-modules\",\n\t\tUsage: \"[--packages] [--versions] [--skip-standard] [--json]\",\n\t\tShort: \"Lists the installed Caddy modules\",\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().BoolP(\"packages\", \"\", false, \"Print package paths\")\n\t\t\tcmd.Flags().BoolP(\"versions\", \"\", false, \"Print version information\")\n\t\t\tcmd.Flags().BoolP(\"skip-standard\", \"s\", false, \"Skip printing standard modules\")\n\t\t\tcmd.Flags().BoolP(\"json\", \"\", false, \"Print modules in JSON format\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdListModules)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"build-info\",\n\t\tShort: \"Prints information about this build\",\n\t\tFunc:  cmdBuildInfo,\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"environ\",\n\t\tUsage: \"[--envfile <path>]\",\n\t\tShort: \"Prints the environment\",\n\t\tLong: `\nPrints the environment as seen by this Caddy process.\n\nThe environment includes variables set in the system. If your Caddy\nconfiguration uses environment variables (e.g. 
\"{env.VARIABLE}\") then\nthis command can be useful for verifying that the variables will have\nthe values you expect in your config.\n\nIf --envfile is specified, an environment file with environment variables\nin the KEY=VALUE format will be loaded into the Caddy process.\n\nNote that environments may be different depending on how you run Caddy.\nEnvironments for Caddy instances started by service managers such as\nsystemd are often different than the environment inherited from your\nshell or terminal.\n\nYou can also print the environment the same time you use \"caddy run\"\nby adding the \"--environ\" flag.\n\nEnvironments may contain sensitive data.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringSliceP(\"envfile\", \"\", []string{}, \"Environment file(s) to load\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdEnviron)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"adapt\",\n\t\tUsage: \"--config <path> [--adapter <name>] [--pretty] [--validate] [--envfile <path>]\",\n\t\tShort: \"Adapts a configuration to Caddy's native JSON\",\n\t\tLong: `\nAdapts a configuration to Caddy's native JSON format and writes the\noutput to stdout, along with any warnings to stderr.\n\nIf --pretty is specified, the output will be formatted with indentation\nfor human readability.\n\nIf --validate is used, the adapted config will be checked for validity.\nIf the config is invalid, an error will be printed to stderr and a non-\nzero exit status will be returned.\n\nIf --envfile is specified, an environment file with environment variables\nin the KEY=VALUE format will be loaded into the Caddy process.\n\nIf you wish to use stdin instead of a regular file, use - as the path.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file to adapt (required)\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"caddyfile\", \"Name of config adapter\")\n\t\t\tcmd.Flags().BoolP(\"pretty\", \"p\", 
false, \"Format the output for human readability\")\n\t\t\tcmd.Flags().BoolP(\"validate\", \"\", false, \"Validate the output\")\n\t\t\tcmd.Flags().StringSliceP(\"envfile\", \"\", []string{}, \"Environment file(s) to load\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdAdaptConfig)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"validate\",\n\t\tUsage: \"--config <path> [--adapter <name>] [--envfile <path>]\",\n\t\tShort: \"Tests whether a configuration file is valid\",\n\t\tLong: `\nLoads and provisions the provided config, but does not start running it.\nThis reveals any errors with the configuration through the loading and\nprovisioning stages.\n\nIf --envfile is specified, an environment file with environment variables\nin the KEY=VALUE format will be loaded into the Caddy process.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Input configuration file\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter\")\n\t\t\tcmd.Flags().StringSliceP(\"envfile\", \"\", []string{}, \"Environment file(s) to load\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdValidateConfig)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"storage\",\n\t\tShort: \"Commands for working with Caddy's storage (EXPERIMENTAL)\",\n\t\tLong: `\nAllows exporting and importing Caddy's storage contents. 
The two commands can be\ncombined in a pipeline to transfer directly from one storage to another:\n\n$ caddy storage export --config Caddyfile.old --output - |\n> caddy storage import --config Caddyfile.new --input -\n\nThe - argument refers to stdout and stdin, respectively.\n\nNOTE: When importing to or exporting from file_system storage (the default), the command\nshould be run as the user that owns the associated root path.\n\nEXPERIMENTAL: May be changed or removed.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\texportCmd := &cobra.Command{\n\t\t\t\tUse:   \"export --config <path> --output <path>\",\n\t\t\t\tShort: \"Exports storage assets as a tarball\",\n\t\t\t\tLong: `\nThe contents of the configured storage module (TLS certificates, etc)\nare exported via a tarball.\n\n--output is required, - can be given for stdout.\n`,\n\t\t\t\tRunE: WrapCommandFuncForCobra(cmdExportStorage),\n\t\t\t}\n\t\t\texportCmd.Flags().StringP(\"config\", \"c\", \"\", \"Input configuration file (required)\")\n\t\t\texportCmd.Flags().StringP(\"output\", \"o\", \"\", \"Output path\")\n\t\t\tcmd.AddCommand(exportCmd)\n\n\t\t\timportCmd := &cobra.Command{\n\t\t\t\tUse:   \"import --config <path> --input <path>\",\n\t\t\t\tShort: \"Imports storage assets from a tarball.\",\n\t\t\t\tLong: `\nImports storage assets to the configured storage module. 
The import file must be\na tar archive.\n\n--input is required, - can be given for stdin.\n`,\n\t\t\t\tRunE: WrapCommandFuncForCobra(cmdImportStorage),\n\t\t\t}\n\t\t\timportCmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file to load (required)\")\n\t\t\timportCmd.Flags().StringP(\"input\", \"i\", \"\", \"Tar of assets to load (required)\")\n\t\t\tcmd.AddCommand(importCmd)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"fmt\",\n\t\tUsage: \"[--overwrite] [--diff] [<path>]\",\n\t\tShort: \"Formats a Caddyfile\",\n\t\tLong: `\nFormats the Caddyfile by adding proper indentation and spaces to improve\nhuman readability. It prints the result to stdout.\n\nIf --overwrite is specified, the output will be written to the config file\ndirectly instead of printing it.\n\nIf --diff is specified, the output will be compared against the input, and\nlines will be prefixed with '-' and '+' where they differ. Note that\nunchanged lines are prefixed with two spaces for alignment, and that this\nis not a valid patch format.\n\nIf you wish to use stdin instead of a regular file, use - as the path.\nWhen reading from stdin, the --overwrite flag has no effect: the result\nis always printed to stdout.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file\")\n\t\t\tcmd.Flags().BoolP(\"overwrite\", \"w\", false, \"Overwrite the input file with the results\")\n\t\t\tcmd.Flags().BoolP(\"diff\", \"d\", false, \"Print the differences between the input file and the formatted output\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdFmt)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"upgrade\",\n\t\tShort: \"Upgrade Caddy (EXPERIMENTAL)\",\n\t\tLong: `\nDownloads an updated Caddy binary with the same modules/plugins at the\nlatest versions. 
EXPERIMENTAL: May be changed or removed.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().BoolP(\"keep-backup\", \"k\", false, \"Keep the backed up binary, instead of deleting it\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdUpgrade)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"add-package\",\n\t\tUsage: \"<package[@version]...>\",\n\t\tShort: \"Adds Caddy packages (EXPERIMENTAL)\",\n\t\tLong: `\nDownloads an updated Caddy binary with the specified packages (module/plugin)\nadded, with an optional version specified (e.g., \"package@version\"). Retains\nexisting packages. Returns an error if any of the specified packages are already\nincluded. EXPERIMENTAL: May be changed or removed.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().BoolP(\"keep-backup\", \"k\", false, \"Keep the backed up binary, instead of deleting it\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdAddPackage)\n\t\t},\n\t})\n\n\tRegisterCommand(Command{\n\t\tName:  \"remove-package\",\n\t\tFunc:  cmdRemovePackage,\n\t\tUsage: \"<packages...>\",\n\t\tShort: \"Removes Caddy packages (EXPERIMENTAL)\",\n\t\tLong: `\nDownloads an updated Caddy binary without the specified packages (module/plugin).\nReturns an error if any of the packages are not included.\nEXPERIMENTAL: May be changed or removed.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().BoolP(\"keep-backup\", \"k\", false, \"Keep the backed up binary, instead of deleting it\")\n\t\t\tcmd.RunE = WrapCommandFuncForCobra(cmdRemovePackage)\n\t\t},\n\t})\n\n\tdefaultFactory.Use(func(rootCmd *cobra.Command) {\n\t\tmanpageCommand := Command{\n\t\t\tName:  \"manpage\",\n\t\t\tUsage: \"--directory <path>\",\n\t\t\tShort: \"Generates the manual pages for Caddy commands\",\n\t\t\tLong: `\nGenerates the manual pages for Caddy commands into the designated directory\ntagged into section 8 (System Administration).\n\nThe manual page files are generated into the directory specified by 
the\nargument of --directory. If the directory does not exist, it will be created.\n`,\n\t\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\t\tcmd.Flags().StringP(\"directory\", \"o\", \"\", \"The output directory where the manpages are generated\")\n\t\t\t\tcmd.RunE = WrapCommandFuncForCobra(func(fl Flags) (int, error) {\n\t\t\t\t\tdir := strings.TrimSpace(fl.String(\"directory\"))\n\t\t\t\t\tif dir == \"\" {\n\t\t\t\t\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"designated output directory and specified section are required\")\n\t\t\t\t\t}\n\t\t\t\t\tif err := os.MkdirAll(dir, 0o755); err != nil {\n\t\t\t\t\t\treturn caddy.ExitCodeFailedQuit, err\n\t\t\t\t\t}\n\t\t\t\t\tif err := doc.GenManTree(rootCmd, &doc.GenManHeader{\n\t\t\t\t\t\tTitle:   \"Caddy\",\n\t\t\t\t\t\tSection: \"8\", // https://en.wikipedia.org/wiki/Man_page#Manual_sections\n\t\t\t\t\t}, dir); err != nil {\n\t\t\t\t\t\treturn caddy.ExitCodeFailedQuit, err\n\t\t\t\t\t}\n\t\t\t\t\treturn caddy.ExitCodeSuccess, nil\n\t\t\t\t})\n\t\t\t},\n\t\t}\n\n\t\t// source: https://github.com/spf13/cobra/blob/6dec1ae26659a130bdb4c985768d1853b0e1bc06/site/content/completions/_index.md\n\t\tcompletionCommand := Command{\n\t\t\tName:  \"completion\",\n\t\t\tUsage: \"[bash|zsh|fish|powershell]\",\n\t\t\tShort: \"Generate completion script\",\n\t\t\tLong: fmt.Sprintf(`To load completions:\n\n\tBash:\n\n\t  $ source <(%[1]s completion bash)\n\n\t  # To load completions for each session, execute once:\n\t  # Linux:\n\t  $ %[1]s completion bash > /etc/bash_completion.d/%[1]s\n\t  # macOS:\n\t  $ %[1]s completion bash > $(brew --prefix)/etc/bash_completion.d/%[1]s\n\n\tZsh:\n\n\t  # If shell completion is not already enabled in your environment,\n\t  # you will need to enable it.  
You can execute the following once:\n\n\t  $ echo \"autoload -U compinit; compinit\" >> ~/.zshrc\n\n\t  # To load completions for each session, execute once:\n\t  $ %[1]s completion zsh > \"${fpath[1]}/_%[1]s\"\n\n\t  # You will need to start a new shell for this setup to take effect.\n\n\tfish:\n\n\t  $ %[1]s completion fish | source\n\n\t  # To load completions for each session, execute once:\n\t  $ %[1]s completion fish > ~/.config/fish/completions/%[1]s.fish\n\n\tPowerShell:\n\n\t  PS> %[1]s completion powershell | Out-String | Invoke-Expression\n\n\t  # To load completions for every new session, run:\n\t  PS> %[1]s completion powershell > %[1]s.ps1\n\t  # and source this file from your PowerShell profile.\n\t`, rootCmd.Root().Name()),\n\t\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\t\tcmd.DisableFlagsInUseLine = true\n\t\t\t\tcmd.ValidArgs = []string{\"bash\", \"zsh\", \"fish\", \"powershell\"}\n\t\t\t\tcmd.Args = cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs)\n\t\t\t\tcmd.RunE = func(cmd *cobra.Command, args []string) error {\n\t\t\t\t\tswitch args[0] {\n\t\t\t\t\tcase \"bash\":\n\t\t\t\t\t\treturn cmd.Root().GenBashCompletion(os.Stdout)\n\t\t\t\t\tcase \"zsh\":\n\t\t\t\t\t\treturn cmd.Root().GenZshCompletion(os.Stdout)\n\t\t\t\t\tcase \"fish\":\n\t\t\t\t\t\treturn cmd.Root().GenFishCompletion(os.Stdout, true)\n\t\t\t\t\tcase \"powershell\":\n\t\t\t\t\t\treturn cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)\n\t\t\t\t\tdefault:\n\t\t\t\t\t\treturn fmt.Errorf(\"unrecognized shell: %s\", args[0])\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t}\n\n\t\trootCmd.AddCommand(caddyCmdToCobra(manpageCommand))\n\t\trootCmd.AddCommand(caddyCmdToCobra(completionCommand))\n\n\t\t// add manpage and completion commands to the map of\n\t\t// available commands, because they're not registered\n\t\t// through RegisterCommand.\n\t\tcommandsMu.Lock()\n\t\tcommands[manpageCommand.Name] = manpageCommand\n\t\tcommands[completionCommand.Name] = 
completionCommand\n\t\tcommandsMu.Unlock()\n\t})\n}\n\n// RegisterCommand registers the command cmd.\n// cmd.Name must be unique and conform to the\n// following format:\n//\n//   - lowercase\n//   - alphanumeric and hyphen characters only\n//   - cannot start or end with a hyphen\n//   - hyphen cannot be adjacent to another hyphen\n//\n// This function panics if the name is already registered,\n// if the name does not meet the described format, or if\n// any of the fields are missing from cmd.\n//\n// This function should be used in init().\nfunc RegisterCommand(cmd Command) {\n\tcommandsMu.Lock()\n\tdefer commandsMu.Unlock()\n\n\tif cmd.Name == \"\" {\n\t\tpanic(\"command name is required\")\n\t}\n\tif cmd.Func == nil && cmd.CobraFunc == nil {\n\t\tpanic(\"command function missing\")\n\t}\n\tif cmd.Short == \"\" {\n\t\tpanic(\"command short string is required\")\n\t}\n\tif _, exists := commands[cmd.Name]; exists {\n\t\tpanic(\"command already registered: \" + cmd.Name)\n\t}\n\tif !commandNameRegex.MatchString(cmd.Name) {\n\t\tpanic(\"invalid command name\")\n\t}\n\tdefaultFactory.Use(func(rootCmd *cobra.Command) {\n\t\trootCmd.AddCommand(caddyCmdToCobra(cmd))\n\t})\n\tcommands[cmd.Name] = cmd\n}\n\nvar commandNameRegex = regexp.MustCompile(`^[a-z0-9]$|^([a-z0-9]+-?[a-z0-9]*)+[a-z0-9]$`)\n"
  },
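The name rules that `RegisterCommand` enforces are encoded entirely in `commandNameRegex` at the end of `cmd/commands.go`. The following standalone sketch shows how that pattern behaves; the regexp literal is copied verbatim from above, while the surrounding program and its names are illustrative only:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as commandNameRegex in cmd/commands.go: lowercase
// alphanumerics, with hyphens allowed only singly and only in the
// interior of the name.
var commandNameRegex = regexp.MustCompile(`^[a-z0-9]$|^([a-z0-9]+-?[a-z0-9]*)+[a-z0-9]$`)

func main() {
	for _, name := range []string{"run", "list-modules", "add-package", "x"} {
		fmt.Println(name, commandNameRegex.MatchString(name)) // all true
	}
	for _, name := range []string{"-run", "run-", "a--b", "Run"} {
		fmt.Println(name, commandNameRegex.MatchString(name)) // all false
	}
}
```

Note that `RegisterCommand` still panics on a duplicate or empty name before the regex is ever consulted, so the pattern is only part of the validation.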
  {
    "path": "cmd/commands_test.go",
    "content": "package caddycmd\n\nimport (\n\t\"maps\"\n\t\"reflect\"\n\t\"slices\"\n\t\"testing\"\n)\n\nfunc TestCommandsAreAvailable(t *testing.T) {\n\t// trigger init, and build the default factory, so that\n\t// all commands from this package are available\n\tcmd := defaultFactory.Build()\n\tif cmd == nil {\n\t\tt.Fatal(\"default factory failed to build\")\n\t}\n\n\t// check that the default factory has 17 commands; it doesn't\n\t// include the commands registered through calls to init in\n\t// other packages\n\tcmds := Commands()\n\tif len(cmds) != 17 {\n\t\tt.Errorf(\"expected 17 commands, got %d\", len(cmds))\n\t}\n\n\tcommandNames := slices.Collect(maps.Keys(cmds))\n\tslices.Sort(commandNames)\n\n\texpectedCommandNames := []string{\n\t\t\"adapt\", \"add-package\", \"build-info\", \"completion\",\n\t\t\"environ\", \"fmt\", \"list-modules\", \"manpage\",\n\t\t\"reload\", \"remove-package\", \"run\", \"start\",\n\t\t\"stop\", \"storage\", \"upgrade\", \"validate\", \"version\",\n\t}\n\n\tif !reflect.DeepEqual(expectedCommandNames, commandNames) {\n\t\tt.Errorf(\"expected %v, got %v\", expectedCommandNames, commandNames)\n\t}\n}\n"
  },
  {
    "path": "cmd/main.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"flag\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"log\"\n\t\"log/slog\"\n\t\"net\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"runtime/debug\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/KimMachineGun/automemlimit/memlimit\"\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/spf13/pflag\"\n\t\"go.uber.org/automaxprocs/maxprocs\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/exp/zapslog\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n)\n\nfunc init() {\n\t// set a fitting User-Agent for ACME requests\n\tversion, _ := caddy.Version()\n\tcleanModVersion := strings.TrimPrefix(version, \"v\")\n\tua := \"Caddy/\" + cleanModVersion\n\tif uaEnv, ok := os.LookupEnv(\"USERAGENT\"); ok {\n\t\tua = uaEnv + \" \" + ua\n\t}\n\tcertmagic.UserAgent = ua\n\n\t// by using Caddy, user indicates agreement to CA terms\n\t// (very important, as Caddy is often non-interactive\n\t// and thus ACME account creation will fail!)\n\tcertmagic.DefaultACME.Agreed = true\n}\n\n// Main implements the main function of the caddy command.\n// Call this if Caddy is to be the main() of your program.\nfunc Main() {\n\tif len(os.Args) == 0 {\n\t\tfmt.Printf(\"[FATAL] no arguments provided by OS; args[0] must 
be command\\n\")\n\t\tos.Exit(caddy.ExitCodeFailedStartup)\n\t}\n\n\tif err := defaultFactory.Build().Execute(); err != nil {\n\t\tvar exitError *exitError\n\t\tif errors.As(err, &exitError) {\n\t\t\tos.Exit(exitError.ExitCode)\n\t\t}\n\t\tos.Exit(1)\n\t}\n}\n\n// handlePingbackConn reads from conn and ensures it matches\n// the bytes in expect, or returns an error if it doesn't.\nfunc handlePingbackConn(conn net.Conn, expect []byte) error {\n\tdefer conn.Close()\n\tconfirmationBytes, err := io.ReadAll(io.LimitReader(conn, 32))\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !bytes.Equal(confirmationBytes, expect) {\n\t\treturn fmt.Errorf(\"wrong confirmation: %x\", confirmationBytes)\n\t}\n\treturn nil\n}\n\n// LoadConfig loads the config from configFile and adapts it\n// using adapterName. If adapterName is specified, configFile\n// must be also. If no configFile is specified, it tries\n// loading a default config file. The lack of a config file is\n// not treated as an error, but false will be returned if\n// there is no config available. 
It prints any warnings to stderr,\n// and returns the resulting JSON config bytes along with\n// the name of the loaded config file (if any).\n// The return values are:\n//   - config bytes (nil if no config)\n//   - config file used (\"\" if none)\n//   - adapter used (\"\" if none)\n//   - error, if any\nfunc LoadConfig(configFile, adapterName string) ([]byte, string, string, error) {\n\treturn loadConfigWithLogger(caddy.Log(), configFile, adapterName)\n}\n\nfunc isCaddyfile(configFile, adapterName string) (bool, error) {\n\tif adapterName == \"caddyfile\" {\n\t\treturn true, nil\n\t}\n\n\t// as a special case, if a config file starts with \"caddyfile\" or\n\t// has a \".caddyfile\" extension, and no adapter is specified, and\n\t// no adapter module name matches the extension, assume\n\t// caddyfile adapter for convenience\n\tbaseConfig := strings.ToLower(filepath.Base(configFile))\n\tbaseConfigExt := filepath.Ext(baseConfig)\n\tstartsOrEndsInCaddyfile := strings.HasPrefix(baseConfig, \"caddyfile\") || strings.HasSuffix(baseConfig, \".caddyfile\")\n\n\tif baseConfigExt == \".json\" {\n\t\treturn false, nil\n\t}\n\n\t// If the adapter is not specified,\n\t// the config file starts with \"caddyfile\",\n\t// the config file has an extension,\n\t// and isn't a JSON file (e.g. 
Caddyfile.yaml),\n\t// then assume it's a Caddyfile.\n\tif adapterName == \"\" && startsOrEndsInCaddyfile {\n\t\treturn true, nil\n\t}\n\n\t// otherwise: the adapter is unspecified or isn't \"caddyfile\",\n\t// and the filename doesn't look like a Caddyfile,\n\t// so don't treat it as one\n\treturn false, nil\n}\n\nfunc loadConfigWithLogger(logger *zap.Logger, configFile, adapterName string) ([]byte, string, string, error) {\n\t// if no logger is provided, use a nop logger\n\t// just so we don't have to check for nil\n\tif logger == nil {\n\t\tlogger = zap.NewNop()\n\t}\n\n\t// specifying an adapter without a config file is ambiguous\n\tif adapterName != \"\" && configFile == \"\" {\n\t\treturn nil, \"\", \"\", fmt.Errorf(\"cannot adapt config without config file (use --config)\")\n\t}\n\n\t// load initial config and adapter\n\tvar config []byte\n\tvar cfgAdapter caddyconfig.Adapter\n\tvar err error\n\tif configFile != \"\" {\n\t\tif configFile == \"-\" {\n\t\t\tconfig, err = io.ReadAll(os.Stdin)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"reading config from stdin: %v\", err)\n\t\t\t}\n\t\t\tlogger.Info(\"using config from stdin\")\n\t\t} else {\n\t\t\tconfig, err = os.ReadFile(configFile)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"reading config from file: %v\", err)\n\t\t\t}\n\t\t\tlogger.Info(\"using config from file\", zap.String(\"file\", configFile))\n\t\t}\n\t} else if adapterName == \"\" {\n\t\t// if the Caddyfile adapter is plugged in, we can try using an\n\t\t// adjacent Caddyfile by default\n\t\tcfgAdapter = caddyconfig.GetAdapter(\"caddyfile\")\n\t\tif cfgAdapter != nil {\n\t\t\tconfig, err = os.ReadFile(\"Caddyfile\")\n\t\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\t\t// okay, no default Caddyfile; pretend like this never happened\n\t\t\t\tcfgAdapter = nil\n\t\t\t} else if err != nil {\n\t\t\t\t// default Caddyfile exists, but error reading
it\n\t\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"reading default Caddyfile: %v\", err)\n\t\t\t} else {\n\t\t\t\t// success reading default Caddyfile\n\t\t\t\tconfigFile = \"Caddyfile\"\n\t\t\t\tlogger.Info(\"using adjacent Caddyfile\")\n\t\t\t}\n\t\t}\n\t}\n\n\tif yes, err := isCaddyfile(configFile, adapterName); yes {\n\t\tadapterName = \"caddyfile\"\n\t} else if err != nil {\n\t\treturn nil, \"\", \"\", err\n\t}\n\n\t// load config adapter\n\tif adapterName != \"\" {\n\t\tcfgAdapter = caddyconfig.GetAdapter(adapterName)\n\t\tif cfgAdapter == nil {\n\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"unrecognized config adapter: %s\", adapterName)\n\t\t}\n\t}\n\n\t// adapt config\n\tif cfgAdapter != nil {\n\t\tadaptedConfig, warnings, err := cfgAdapter.Adapt(config, map[string]any{\n\t\t\t\"filename\": configFile,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"adapting config using %s: %v\", adapterName, err)\n\t\t}\n\t\tlogger.Info(\"adapted config to JSON\", zap.String(\"adapter\", adapterName))\n\t\tfor _, warn := range warnings {\n\t\t\tmsg := warn.Message\n\t\t\tif warn.Directive != \"\" {\n\t\t\t\tmsg = fmt.Sprintf(\"%s: %s\", warn.Directive, warn.Message)\n\t\t\t}\n\t\t\tlogger.Warn(msg,\n\t\t\t\tzap.String(\"adapter\", adapterName),\n\t\t\t\tzap.String(\"file\", warn.File),\n\t\t\t\tzap.Int(\"line\", warn.Line))\n\t\t}\n\t\tconfig = adaptedConfig\n\t} else if len(config) != 0 {\n\t\t// validate that the config is at least valid JSON\n\t\terr = json.Unmarshal(config, new(any))\n\t\tif err != nil {\n\t\t\tif jsonErr, ok := err.(*json.SyntaxError); ok {\n\t\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"config is not valid JSON: %w, at offset %d; did you mean to use a config adapter (the --adapter flag)?\", err, jsonErr.Offset)\n\t\t\t}\n\t\t\treturn nil, \"\", \"\", fmt.Errorf(\"config is not valid JSON: %w; did you mean to use a config adapter (the --adapter flag)?\", err)\n\t\t}\n\t}\n\n\treturn config, configFile, adapterName, nil\n}\n\n// 
watchConfigFile watches the config file at filename for changes\n// and reloads the config if the file was updated. This function\n// blocks indefinitely; it returns only if the config can no\n// longer be loaded. The filename passed in must be the actual\n// config file used, not one to be discovered.\n// Each second, the config file is loaded and adapted, then\n// compared byte-for-byte to the last config that was loaded.\nfunc watchConfigFile(filename, adapterName string) {\n\tdefer func() {\n\t\tif err := recover(); err != nil {\n\t\t\tlog.Printf(\"[PANIC] watching config file: %v\\n%s\", err, debug.Stack())\n\t\t}\n\t}()\n\n\t// make our logger; since config reloads can change the\n\t// default logger, we need to get it dynamically each time\n\tlogger := func() *zap.Logger {\n\t\treturn caddy.Log().\n\t\t\tNamed(\"watcher\").\n\t\t\tWith(zap.String(\"config_file\", filename))\n\t}\n\n\t// get current config\n\tlastCfg, _, _, err := loadConfigWithLogger(nil, filename, adapterName)\n\tif err != nil {\n\t\tlogger().Error(\"unable to load latest config\", zap.Error(err))\n\t\treturn\n\t}\n\n\tlogger().Info(\"watching config file for changes\")\n\n\t// begin poller\n\t//nolint:staticcheck\n\tfor range time.Tick(1 * time.Second) {\n\t\t// get current config\n\t\tnewCfg, _, _, err := loadConfigWithLogger(nil, filename, adapterName)\n\t\tif err != nil {\n\t\t\tlogger().Error(\"unable to load latest config\", zap.Error(err))\n\t\t\treturn\n\t\t}\n\n\t\t// if it hasn't changed, nothing to do\n\t\tif bytes.Equal(lastCfg, newCfg) {\n\t\t\tcontinue\n\t\t}\n\t\tlogger().Info(\"config file changed; reloading\")\n\n\t\t// remember the current config\n\t\tlastCfg = newCfg\n\n\t\t// apply the updated config\n\t\terr = caddy.Load(lastCfg, false)\n\t\tif err != nil {\n\t\t\tlogger().Error(\"applying latest config\", zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\n// Flags wraps a FlagSet so that typed values\n// from flags can be easily retrieved.\ntype Flags
struct {\n\t*pflag.FlagSet\n}\n\n// String returns the string representation of the\n// flag given by name. It panics if the flag is not\n// in the flag set.\nfunc (f Flags) String(name string) string {\n\treturn f.FlagSet.Lookup(name).Value.String()\n}\n\n// Bool returns the boolean representation of the\n// flag given by name. It returns false if the flag\n// is not a boolean type. It panics if the flag is\n// not in the flag set.\nfunc (f Flags) Bool(name string) bool {\n\tval, _ := strconv.ParseBool(f.String(name))\n\treturn val\n}\n\n// Int returns the integer representation of the\n// flag given by name. It returns 0 if the flag\n// is not an integer type. It panics if the flag is\n// not in the flag set.\nfunc (f Flags) Int(name string) int {\n\tval, _ := strconv.ParseInt(f.String(name), 0, strconv.IntSize)\n\treturn int(val)\n}\n\n// Float64 returns the float64 representation of the\n// flag given by name. It returns 0 if the flag\n// is not a float64 type. It panics if the flag is\n// not in the flag set.\nfunc (f Flags) Float64(name string) float64 {\n\tval, _ := strconv.ParseFloat(f.String(name), 64)\n\treturn val\n}\n\n// Duration returns the duration representation of the\n// flag given by name. It returns 0 if the flag\n// is not a duration type.
It panics if the flag is\n// not in the flag set.\nfunc (f Flags) Duration(name string) time.Duration {\n\tval, _ := caddy.ParseDuration(f.String(name))\n\treturn val\n}\n\nfunc loadEnvFromFile(envFile string) error {\n\tfile, err := os.Open(envFile)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"reading environment file: %v\", err)\n\t}\n\tdefer file.Close()\n\n\tenvMap, err := parseEnvFile(file)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"parsing environment file: %v\", err)\n\t}\n\n\tfor k, v := range envMap {\n\t\t// do not overwrite existing environment variables\n\t\t_, exists := os.LookupEnv(k)\n\t\tif !exists {\n\t\t\tif err := os.Setenv(k, v); err != nil {\n\t\t\t\treturn fmt.Errorf(\"setting environment variables: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\n\t// Update the storage paths to ensure they have the proper\n\t// value after loading a specified env file.\n\tcaddy.ConfigAutosavePath = filepath.Join(caddy.AppConfigDir(), \"autosave.json\")\n\tcaddy.DefaultStorage = &certmagic.FileStorage{Path: caddy.AppDataDir()}\n\n\treturn nil\n}\n\n// parseEnvFile parses an env file from KEY=VALUE format.\n// It's pretty naive. 
Limited value quotation is supported,\n// but variable and command expansions are not supported.\nfunc parseEnvFile(envInput io.Reader) (map[string]string, error) {\n\tenvMap := make(map[string]string)\n\n\tscanner := bufio.NewScanner(envInput)\n\tvar lineNumber int\n\n\tfor scanner.Scan() {\n\t\tline := strings.TrimSpace(scanner.Text())\n\t\tlineNumber++\n\n\t\t// skip empty lines and lines starting with comment\n\t\tif line == \"\" || strings.HasPrefix(line, \"#\") {\n\t\t\tcontinue\n\t\t}\n\n\t\t// split line into key and value\n\t\tbefore, after, isCut := strings.Cut(line, \"=\")\n\t\tif !isCut {\n\t\t\treturn nil, fmt.Errorf(\"can't parse line %d; line should be in KEY=VALUE format\", lineNumber)\n\t\t}\n\t\tkey, val := before, after\n\n\t\t// sometimes keys are prefixed by \"export \" so file can be sourced in bash; ignore it here\n\t\tkey = strings.TrimPrefix(key, \"export \")\n\n\t\t// validate key and value\n\t\tif key == \"\" {\n\t\t\treturn nil, fmt.Errorf(\"missing or empty key on line %d\", lineNumber)\n\t\t}\n\t\tif strings.Contains(key, \" \") {\n\t\t\treturn nil, fmt.Errorf(\"invalid key on line %d: contains whitespace: %s\", lineNumber, key)\n\t\t}\n\t\tif strings.HasPrefix(val, \" \") || strings.HasPrefix(val, \"\\t\") {\n\t\t\treturn nil, fmt.Errorf(\"invalid value on line %d: whitespace before value: '%s'\", lineNumber, val)\n\t\t}\n\n\t\t// remove any trailing comment after value\n\t\tif commentStart, _, found := strings.Cut(val, \"#\"); found {\n\t\t\tval = strings.TrimRight(commentStart, \" \\t\")\n\t\t}\n\n\t\t// quoted value: support newlines\n\t\tif strings.HasPrefix(val, `\"`) || strings.HasPrefix(val, \"'\") {\n\t\t\tquote := string(val[0])\n\t\t\tfor !strings.HasSuffix(line, quote) || strings.HasSuffix(line, `\\`+quote) {\n\t\t\t\tval = strings.ReplaceAll(val, `\\`+quote, quote)\n\t\t\t\tif !scanner.Scan() {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tlineNumber++\n\t\t\t\tline = strings.ReplaceAll(scanner.Text(), `\\`+quote, 
quote)\n\t\t\t\tval += \"\\n\" + line\n\t\t\t}\n\t\t\tval = strings.TrimPrefix(val, quote)\n\t\t\tval = strings.TrimSuffix(val, quote)\n\t\t}\n\n\t\tenvMap[key] = val\n\t}\n\n\tif err := scanner.Err(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn envMap, nil\n}\n\nfunc printEnvironment() {\n\t_, version := caddy.Version()\n\tfmt.Printf(\"caddy.HomeDir=%s\\n\", caddy.HomeDir())\n\tfmt.Printf(\"caddy.AppDataDir=%s\\n\", caddy.AppDataDir())\n\tfmt.Printf(\"caddy.AppConfigDir=%s\\n\", caddy.AppConfigDir())\n\tfmt.Printf(\"caddy.ConfigAutosavePath=%s\\n\", caddy.ConfigAutosavePath)\n\tfmt.Printf(\"caddy.Version=%s\\n\", version)\n\tfmt.Printf(\"runtime.GOOS=%s\\n\", runtime.GOOS)\n\tfmt.Printf(\"runtime.GOARCH=%s\\n\", runtime.GOARCH)\n\tfmt.Printf(\"runtime.Compiler=%s\\n\", runtime.Compiler)\n\tfmt.Printf(\"runtime.NumCPU=%d\\n\", runtime.NumCPU())\n\tfmt.Printf(\"runtime.GOMAXPROCS=%d\\n\", runtime.GOMAXPROCS(0))\n\tfmt.Printf(\"runtime.Version=%s\\n\", runtime.Version())\n\tcwd, err := os.Getwd()\n\tif err != nil {\n\t\tcwd = fmt.Sprintf(\"<error: %v>\", err)\n\t}\n\tfmt.Printf(\"os.Getwd=%s\\n\\n\", cwd)\n\tfor _, v := range os.Environ() {\n\t\tfmt.Println(v)\n\t}\n}\n\nfunc setResourceLimits(logger *zap.Logger) func() {\n\t// Configure the maximum number of CPUs to use to match the Linux container quota (if any)\n\t// See https://pkg.go.dev/runtime#GOMAXPROCS\n\tundo, err := maxprocs.Set(maxprocs.Logger(logger.Sugar().Infof))\n\tif err != nil {\n\t\tlogger.Warn(\"failed to set GOMAXPROCS\", zap.Error(err))\n\t}\n\n\t// Configure the maximum memory to use to match the Linux container quota (if any) or system memory\n\t// See https://pkg.go.dev/runtime/debug#SetMemoryLimit\n\t_, _ = memlimit.SetGoMemLimitWithOpts(\n\t\tmemlimit.WithLogger(\n\t\t\tslog.New(zapslog.NewHandler(\n\t\t\t\tlogger.Core(),\n\t\t\t\tzapslog.WithName(\"memlimit\"),\n\t\t\t\t// the default enables traces at ERROR level, this disables\n\t\t\t\t// them by setting it to a level higher than 
any other level\n\t\t\t\tzapslog.AddStacktraceAt(slog.Level(127)),\n\t\t\t)),\n\t\t),\n\t\tmemlimit.WithProvider(\n\t\t\tmemlimit.ApplyFallback(\n\t\t\t\tmemlimit.FromCgroup,\n\t\t\t\tmemlimit.FromSystem,\n\t\t\t),\n\t\t),\n\t)\n\n\treturn undo\n}\n\n// StringSlice is a flag.Value that enables repeated use of a string flag.\ntype StringSlice []string\n\nfunc (ss StringSlice) String() string { return \"[\" + strings.Join(ss, \", \") + \"]\" }\n\nfunc (ss *StringSlice) Set(value string) error {\n\t*ss = append(*ss, value)\n\treturn nil\n}\n\n// Interface guard\nvar _ flag.Value = (*StringSlice)(nil)\n"
  },
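`parseEnvFile` in `cmd/main.go` handles blank lines, `#` comments, an optional `export ` prefix, trailing comments, and quoted multi-line values. A stripped-down sketch of just the unquoted-value rules follows; `parseSimpleEnv` is a hypothetical name, and the quoting/multi-line logic above is intentionally omitted:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSimpleEnv sketches the basic rules parseEnvFile enforces:
// blank lines and #-comment lines are skipped, an optional "export "
// key prefix is dropped, keys must be non-empty and space-free, and a
// trailing comment after the value is trimmed.
func parseSimpleEnv(input string) (map[string]string, error) {
	env := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			return nil, fmt.Errorf("line not in KEY=VALUE format: %q", line)
		}
		key = strings.TrimPrefix(key, "export ")
		if key == "" || strings.Contains(key, " ") {
			return nil, fmt.Errorf("invalid key: %q", key)
		}
		// drop anything after a '#' in the value, as the real parser does
		if before, _, found := strings.Cut(val, "#"); found {
			val = strings.TrimRight(before, " \t")
		}
		env[key] = val
	}
	return env, sc.Err()
}

func main() {
	env, err := parseSimpleEnv("export KEY=value\n# comment\nOTHER=foo bar  # note\n")
	fmt.Println(env, err) // map[KEY:value OTHER:foo bar] <nil>
}
```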
  {
    "path": "cmd/main_test.go",
    "content": "package caddycmd\n\nimport (\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestParseEnvFile(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput     string\n\t\texpect    map[string]string\n\t\tshouldErr bool\n\t}{\n\t\t{\n\t\t\tinput: `KEY=value`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\": \"value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":       \"value\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tINVALID KEY=asdf\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\tshouldErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tSIMPLE_QUOTED=\"quoted value\"\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":           \"value\",\n\t\t\t\t\"SIMPLE_QUOTED\": \"quoted value\",\n\t\t\t\t\"OTHER_KEY\":     \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tNEWLINES=\"foo\n\tbar\"\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":       \"value\",\n\t\t\t\t\"NEWLINES\":  \"foo\\n\\tbar\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tESCAPED=\"\\\"escaped quotes\\\"\nhere\"\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":       \"value\",\n\t\t\t\t\"ESCAPED\":   \"\\\"escaped quotes\\\"\\nhere\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\texport KEY=value\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":       \"value\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\t=value\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\tshouldErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: 
`\n\t\t\t\tEMPTY=\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"EMPTY\":     \"\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tEMPTY=\"\"\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"EMPTY\":     \"\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\t#OTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\": \"value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tCOMMENT=foo bar  # some comment here\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":       \"value\",\n\t\t\t\t\"COMMENT\":   \"foo bar\",\n\t\t\t\t\"OTHER_KEY\": \"Some Value\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tWHITESPACE=   foo \n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\tshouldErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: `\n\t\t\t\tKEY=value\n\t\t\t\tWHITESPACE=\"   foo bar \"\n\t\t\t\tOTHER_KEY=Some Value\n\t\t\t`,\n\t\t\texpect: map[string]string{\n\t\t\t\t\"KEY\":        \"value\",\n\t\t\t\t\"WHITESPACE\": \"   foo bar \",\n\t\t\t\t\"OTHER_KEY\":  \"Some Value\",\n\t\t\t},\n\t\t},\n\t} {\n\t\tactual, err := parseEnvFile(strings.NewReader(tc.input))\n\t\tif err != nil && !tc.shouldErr {\n\t\t\tt.Errorf(\"Test %d: Got error but shouldn't have: %v\", i, err)\n\t\t}\n\t\tif err == nil && tc.shouldErr {\n\t\t\tt.Errorf(\"Test %d: Did not get error but should have\", i)\n\t\t}\n\t\tif tc.shouldErr {\n\t\t\tcontinue\n\t\t}\n\t\tif !reflect.DeepEqual(tc.expect, actual) {\n\t\t\tt.Errorf(\"Test %d: Expected %v but got %v\", i, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc Test_isCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\tconfigFile  string\n\t\tadapterName string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    bool\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"bare 
Caddyfile without adapter\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"Caddyfile\",\n\t\t\t\tadapterName: \"\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"local Caddyfile without adapter\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./Caddyfile\",\n\t\t\t\tadapterName: \"\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"local caddyfile with adapter\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./Caddyfile\",\n\t\t\t\tadapterName: \"caddyfile\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"ends with .caddyfile with adapter\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./conf.caddyfile\",\n\t\t\t\tadapterName: \"caddyfile\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"ends with .caddyfile without adapter\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./conf.caddyfile\",\n\t\t\t\tadapterName: \"\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"config is Caddyfile.yaml with adapter\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./Caddyfile.yaml\",\n\t\t\t\tadapterName: \"yaml\",\n\t\t\t},\n\t\t\twant:    false,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"json is not caddyfile but not error\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./Caddyfile.json\",\n\t\t\t\tadapterName: \"\",\n\t\t\t},\n\t\t\twant:    false,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"prefix of Caddyfile and ./ with any extension is Caddyfile\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"./Caddyfile.prd\",\n\t\t\t\tadapterName: \"\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"prefix of Caddyfile without ./ with any extension is Caddyfile\",\n\t\t\targs: args{\n\t\t\t\tconfigFile:  \"Caddyfile.prd\",\n\t\t\t\tadapterName: \"\",\n\t\t\t},\n\t\t\twant:    true,\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests 
{\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := isCaddyfile(tt.args.configFile, tt.args.adapterName)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"isCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"isCaddyfile() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
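The filename heuristic that `Test_isCaddyfile` exercises can be condensed to a few lines. This is an illustrative re-implementation under the same rules (`looksLikeCaddyfile` is a made-up name, and the error return of the real `isCaddyfile` is dropped), not the function from `cmd/main.go` itself:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// looksLikeCaddyfile mirrors the heuristic from cmd/main.go: an explicit
// "caddyfile" adapter always wins; a ".json" extension is never a
// Caddyfile; otherwise, with no adapter given, a base name that starts
// with "caddyfile" or ends in ".caddyfile" is assumed to be one.
func looksLikeCaddyfile(configFile, adapterName string) bool {
	if adapterName == "caddyfile" {
		return true
	}
	base := strings.ToLower(filepath.Base(configFile))
	if filepath.Ext(base) == ".json" {
		return false
	}
	return adapterName == "" &&
		(strings.HasPrefix(base, "caddyfile") || strings.HasSuffix(base, ".caddyfile"))
}

func main() {
	fmt.Println(looksLikeCaddyfile("./Caddyfile", ""))          // true
	fmt.Println(looksLikeCaddyfile("./conf.caddyfile", ""))     // true
	fmt.Println(looksLikeCaddyfile("./Caddyfile.json", ""))     // false
	fmt.Println(looksLikeCaddyfile("./Caddyfile.yaml", "yaml")) // false
}
```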
  {
    "path": "cmd/packagesfuncs.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"reflect\"\n\t\"runtime\"\n\t\"runtime/debug\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc cmdUpgrade(fl Flags) (int, error) {\n\t_, nonstandard, _, err := getModules()\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"unable to enumerate installed plugins: %v\", err)\n\t}\n\tpluginPkgs, err := getPluginPackages(nonstandard)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\treturn upgradeBuild(pluginPkgs, fl)\n}\n\nfunc splitModule(arg string) (module, version string, err error) {\n\tconst versionSplit = \"@\"\n\n\t// accommodate module paths that have @ in them, but we can only tolerate that if there's also\n\t// a version, otherwise we don't know if it's a version separator or part of the file path\n\tlastVersionSplit := strings.LastIndex(arg, versionSplit)\n\tif lastVersionSplit < 0 {\n\t\tmodule = arg\n\t} else {\n\t\tmodule, version = arg[:lastVersionSplit], arg[lastVersionSplit+1:]\n\t}\n\n\tif module == \"\" {\n\t\terr = fmt.Errorf(\"module name is required\")\n\t}\n\n\treturn module, version, err\n}\n\nfunc cmdAddPackage(fl Flags) (int, error) {\n\tif len(fl.Args()) == 0 
{\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"at least one package name must be specified\")\n\t}\n\t_, nonstandard, _, err := getModules()\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"unable to enumerate installed plugins: %v\", err)\n\t}\n\tpluginPkgs, err := getPluginPackages(nonstandard)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tfor _, arg := range fl.Args() {\n\t\tmodule, version, err := splitModule(arg)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid module name: %v\", err)\n\t\t}\n\t\t// only allow a version to be specified if it's different from the existing version\n\t\tif _, ok := pluginPkgs[module]; ok && (version == \"\" || pluginPkgs[module].Version == version) {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"package is already added\")\n\t\t}\n\t\tpluginPkgs[module] = pluginPackage{Version: version, Path: module}\n\t}\n\n\treturn upgradeBuild(pluginPkgs, fl)\n}\n\nfunc cmdRemovePackage(fl Flags) (int, error) {\n\tif len(fl.Args()) == 0 {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"at least one package name must be specified\")\n\t}\n\t_, nonstandard, _, err := getModules()\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"unable to enumerate installed plugins: %v\", err)\n\t}\n\tpluginPkgs, err := getPluginPackages(nonstandard)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tfor _, arg := range fl.Args() {\n\t\tmodule, _, err := splitModule(arg)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid module name: %v\", err)\n\t\t}\n\t\tif _, ok := pluginPkgs[module]; !ok {\n\t\t\t// package does not exist\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"package is not added\")\n\t\t}\n\t\t// delete by the split module path, not the raw arg, which\n\t\t// may still carry an @version suffix\n\t\tdelete(pluginPkgs, module)\n\t}\n\n\treturn upgradeBuild(pluginPkgs, fl)\n}\n\nfunc upgradeBuild(pluginPkgs map[string]pluginPackage, fl Flags) (int, error)
{\n\tl := caddy.Log()\n\n\tthisExecPath, err := os.Executable()\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"determining current executable path: %v\", err)\n\t}\n\tthisExecStat, err := os.Stat(thisExecPath)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"retrieving current executable permission bits: %v\", err)\n\t}\n\tif thisExecStat.Mode()&os.ModeSymlink == os.ModeSymlink {\n\t\tsymSource := thisExecPath\n\t\t// we are a symlink; resolve it\n\t\tthisExecPath, err = filepath.EvalSymlinks(thisExecPath)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"resolving current executable symlink: %v\", err)\n\t\t}\n\t\tl.Info(\"this executable is a symlink\", zap.String(\"source\", symSource), zap.String(\"target\", thisExecPath))\n\t}\n\tl.Info(\"this executable will be replaced\", zap.String(\"path\", thisExecPath))\n\n\t// build the request URL to download this custom build\n\tqs := url.Values{\n\t\t\"os\":   {runtime.GOOS},\n\t\t\"arch\": {runtime.GOARCH},\n\t}\n\tfor _, pkgInfo := range pluginPkgs {\n\t\tqs.Add(\"p\", pkgInfo.String())\n\t}\n\n\t// initiate the build\n\tresp, err := downloadBuild(qs)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"download failed: %v\", err)\n\t}\n\tdefer resp.Body.Close()\n\n\t// back up the current binary, in case something goes wrong we can replace it\n\tbackupExecPath := thisExecPath + \".tmp\"\n\tl.Info(\"build acquired; backing up current executable\",\n\t\tzap.String(\"current_path\", thisExecPath),\n\t\tzap.String(\"backup_path\", backupExecPath))\n\terr = os.Rename(thisExecPath, backupExecPath)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"backing up current binary: %v\", err)\n\t}\n\tdefer func() {\n\t\tif err != nil {\n\t\t\terr2 := os.Rename(backupExecPath, thisExecPath)\n\t\t\tif err2 != nil {\n\t\t\t\tl.Error(\"restoring original executable failed; will need to be restored 
manually\",\n\t\t\t\t\tzap.String(\"backup_path\", backupExecPath),\n\t\t\t\t\tzap.String(\"original_path\", thisExecPath),\n\t\t\t\t\tzap.Error(err2))\n\t\t\t}\n\t\t}\n\t}()\n\n\t// download the file; do this in a closure to close reliably before we execute it\n\terr = writeCaddyBinary(thisExecPath, &resp.Body, thisExecStat)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tl.Info(\"download successful; displaying new binary details\", zap.String(\"location\", thisExecPath))\n\n\t// use the new binary to print out version and module info\n\tfmt.Print(\"\\nModule versions:\\n\\n\")\n\tif err = listModules(thisExecPath); err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"download succeeded, but unable to execute 'caddy list-modules': %v\", err)\n\t}\n\tfmt.Println(\"\\nVersion:\")\n\tif err = showVersion(thisExecPath); err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"download succeeded, but unable to execute 'caddy version': %v\", err)\n\t}\n\tfmt.Println()\n\n\t// clean up the backup file\n\tif !fl.Bool(\"keep-backup\") {\n\t\tif err = removeCaddyBinary(backupExecPath); err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"download succeeded, but unable to clean up backup binary: %v\", err)\n\t\t}\n\t} else {\n\t\tl.Info(\"skipped cleaning up the backup file\", zap.String(\"backup_path\", backupExecPath))\n\t}\n\n\tl.Info(\"upgrade successful; please restart any running Caddy instances\", zap.String(\"executable\", thisExecPath))\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc getModules() (standard, nonstandard, unknown []moduleInfo, err error) {\n\tbi, ok := debug.ReadBuildInfo()\n\tif !ok {\n\t\terr = fmt.Errorf(\"no build info\")\n\t\treturn standard, nonstandard, unknown, err\n\t}\n\n\tfor _, modID := range caddy.Modules() {\n\t\tmodInfo, err := caddy.GetModule(modID)\n\t\tif err != nil {\n\t\t\t// that's weird, shouldn't happen\n\t\t\tunknown = append(unknown, moduleInfo{caddyModuleID: 
modID, err: err})\n\t\t\tcontinue\n\t\t}\n\n\t\t// to get the Caddy plugin's version info, we need to know\n\t\t// the package that the Caddy module's value comes from; we\n\t\t// can use reflection but we need a non-pointer value (I'm\n\t\t// not sure why), and since New() should return a pointer\n\t\t// value, we need to dereference it first\n\t\tiface := any(modInfo.New())\n\t\tif rv := reflect.ValueOf(iface); rv.Kind() == reflect.Ptr {\n\t\t\tiface = reflect.New(reflect.TypeOf(iface).Elem()).Elem().Interface()\n\t\t}\n\t\tmodPkgPath := reflect.TypeOf(iface).PkgPath()\n\n\t\t// now we find the Go module that the Caddy module's package\n\t\t// belongs to; we assume the Caddy module package path will\n\t\t// be prefixed by its Go module path, and we will choose the\n\t\t// longest matching prefix in case there are nested modules\n\t\tvar matched *debug.Module\n\t\tfor _, dep := range bi.Deps {\n\t\t\tif strings.HasPrefix(modPkgPath, dep.Path) {\n\t\t\t\tif matched == nil || len(dep.Path) > len(matched.Path) {\n\t\t\t\t\tmatched = dep\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tcaddyModGoMod := moduleInfo{caddyModuleID: modID, goModule: matched}\n\n\t\tif strings.HasPrefix(modPkgPath, caddy.ImportPath) {\n\t\t\tstandard = append(standard, caddyModGoMod)\n\t\t} else {\n\t\t\tnonstandard = append(nonstandard, caddyModGoMod)\n\t\t}\n\t}\n\treturn standard, nonstandard, unknown, err\n}\n\nfunc listModules(path string) error {\n\tcmd := exec.Command(path, \"list-modules\", \"--versions\", \"--skip-standard\")\n\tcmd.Stdout = os.Stdout\n\tcmd.Stderr = os.Stderr\n\treturn cmd.Run()\n}\n\nfunc showVersion(path string) error {\n\tcmd := exec.Command(path, \"version\")\n\tcmd.Stdout = os.Stdout\n\tcmd.Stderr = os.Stderr\n\treturn cmd.Run()\n}\n\nfunc downloadBuild(qs url.Values) (*http.Response, error) {\n\tl := caddy.Log()\n\tl.Info(\"requesting build\",\n\t\tzap.String(\"os\", qs.Get(\"os\")),\n\t\tzap.String(\"arch\", qs.Get(\"arch\")),\n\t\tzap.Strings(\"packages\", 
qs[\"p\"]))\n\tresp, err := http.Get(fmt.Sprintf(\"%s?%s\", downloadPath, qs.Encode()))\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"secure request failed: %v\", err)\n\t}\n\tif resp.StatusCode >= 400 {\n\t\tvar details struct {\n\t\t\tStatusCode int `json:\"status_code\"`\n\t\t\tError      struct {\n\t\t\t\tMessage string `json:\"message\"`\n\t\t\t\tID      string `json:\"id\"`\n\t\t\t} `json:\"error\"`\n\t\t}\n\t\terr2 := json.NewDecoder(resp.Body).Decode(&details)\n\t\tif err2 != nil {\n\t\t\treturn nil, fmt.Errorf(\"download and error decoding failed: HTTP %d: %v\", resp.StatusCode, err2)\n\t\t}\n\t\treturn nil, fmt.Errorf(\"download failed: HTTP %d: %s (id=%s)\", resp.StatusCode, details.Error.Message, details.Error.ID)\n\t}\n\treturn resp, nil\n}\n\nfunc getPluginPackages(modules []moduleInfo) (map[string]pluginPackage, error) {\n\tpluginPkgs := make(map[string]pluginPackage)\n\tfor _, mod := range modules {\n\t\tif mod.goModule.Replace != nil {\n\t\t\treturn nil, fmt.Errorf(\"cannot auto-upgrade when Go module has been replaced: %s => %s\",\n\t\t\t\tmod.goModule.Path, mod.goModule.Replace.Path)\n\t\t}\n\t\tpluginPkgs[mod.goModule.Path] = pluginPackage{Version: mod.goModule.Version, Path: mod.goModule.Path}\n\t}\n\treturn pluginPkgs, nil\n}\n\nfunc writeCaddyBinary(path string, body *io.ReadCloser, fileInfo os.FileInfo) error {\n\tl := caddy.Log()\n\tdestFile, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_TRUNC, fileInfo.Mode())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to open destination file: %v\", err)\n\t}\n\tdefer destFile.Close()\n\n\tl.Info(\"downloading binary\", zap.String(\"destination\", path))\n\n\t_, err = io.Copy(destFile, *body)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to download file: %v\", err)\n\t}\n\n\terr = destFile.Sync()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"syncing downloaded file to device: %v\", err)\n\t}\n\n\treturn nil\n}\n\nconst downloadPath = \"https://caddyserver.com/api/download\"\n\ntype 
pluginPackage struct {\n\tVersion string\n\tPath    string\n}\n\nfunc (p pluginPackage) String() string {\n\tif p.Version == \"\" {\n\t\treturn p.Path\n\t}\n\treturn p.Path + \"@\" + p.Version\n}\n"
  },
  {
    "path": "cmd/removebinary.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build !windows\n\npackage caddycmd\n\nimport (\n\t\"os\"\n)\n\n// removeCaddyBinary removes the Caddy binary at the given path.\n//\n// On any non-Windows OS, this simply calls os.Remove, since they should\n// probably not exhibit any issue with processes deleting themselves.\nfunc removeCaddyBinary(path string) error {\n\treturn os.Remove(path)\n}\n"
  },
  {
    "path": "cmd/removebinary_windows.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"syscall\"\n)\n\n// removeCaddyBinary removes the Caddy binary at the given path.\n//\n// On Windows, this uses a syscall to indirectly remove the file,\n// because otherwise we get an \"Access is denied.\" error when trying\n// to delete the binary while Caddy is still running and performing\n// the upgrade. \"cmd.exe /C\" executes a command specified by the\n// following arguments, i.e. \"del\" which will run as a separate process,\n// which avoids the \"Access is denied.\" error.\nfunc removeCaddyBinary(path string) error {\n\tvar sI syscall.StartupInfo\n\tvar pI syscall.ProcessInformation\n\targv, err := syscall.UTF16PtrFromString(filepath.Join(os.Getenv(\"windir\"), \"system32\", \"cmd.exe\") + \" /C del \" + path)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn syscall.CreateProcess(nil, argv, nil, nil, true, 0, nil, nil, &sI, &pI)\n}\n"
  },
  {
    "path": "cmd/storagefuncs.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t\"archive/tar\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"os\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\ntype storVal struct {\n\tStorageRaw json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n}\n\n// determineStorage returns the top-level storage module from the given config.\n// It may return nil even if no error.\nfunc determineStorage(configFile string, configAdapter string) (*storVal, error) {\n\tcfg, _, _, err := LoadConfig(configFile, configAdapter)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// storage defaults to FileStorage if not explicitly\n\t// defined in the config, so the config can be valid\n\t// json but unmarshaling will fail.\n\tif !json.Valid(cfg) {\n\t\treturn nil, &json.SyntaxError{}\n\t}\n\tvar tmpStruct storVal\n\terr = json.Unmarshal(cfg, &tmpStruct)\n\tif err != nil {\n\t\t// default case, ignore the error\n\t\tvar jsonError *json.SyntaxError\n\t\tif errors.As(err, &jsonError) {\n\t\t\treturn nil, nil\n\t\t}\n\t\treturn nil, err\n\t}\n\n\treturn &tmpStruct, nil\n}\n\nfunc cmdImportStorage(fl Flags) (int, error) {\n\timportStorageCmdConfigFlag := fl.String(\"config\")\n\timportStorageCmdImportFile := fl.String(\"input\")\n\n\tif 
importStorageCmdConfigFlag == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup, errors.New(\"--config is required\")\n\t}\n\tif importStorageCmdImportFile == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup, errors.New(\"--input is required\")\n\t}\n\n\t// extract storage from config if possible\n\tstorageCfg, err := determineStorage(importStorageCmdConfigFlag, \"\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// load specified storage or fallback to default\n\tvar stor certmagic.Storage\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\tif storageCfg != nil && storageCfg.StorageRaw != nil {\n\t\tval, err := ctx.LoadModule(storageCfg, \"StorageRaw\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t\tstor, err = val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t} else {\n\t\tstor = caddy.DefaultStorage\n\t}\n\n\t// setup input\n\tvar f *os.File\n\tif importStorageCmdImportFile == \"-\" {\n\t\tf = os.Stdin\n\t} else {\n\t\tf, err = os.Open(importStorageCmdImportFile)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"opening input file: %v\", err)\n\t\t}\n\t\tdefer f.Close()\n\t}\n\n\t// store each archive element\n\ttr := tar.NewReader(f)\n\tfor {\n\t\thdr, err := tr.Next()\n\t\tif err == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"reading archive: %v\", err)\n\t\t}\n\n\t\tb, err := io.ReadAll(tr)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"reading archive: %v\", err)\n\t\t}\n\n\t\terr = stor.Store(ctx, hdr.Name, b)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"reading archive: %v\", err)\n\t\t}\n\t}\n\n\tfmt.Println(\"Successfully imported storage\")\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdExportStorage(fl Flags) (int, error) 
{\n\texportStorageCmdConfigFlag := fl.String(\"config\")\n\texportStorageCmdOutputFlag := fl.String(\"output\")\n\n\tif exportStorageCmdConfigFlag == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup, errors.New(\"--config is required\")\n\t}\n\tif exportStorageCmdOutputFlag == \"\" {\n\t\treturn caddy.ExitCodeFailedStartup, errors.New(\"--output is required\")\n\t}\n\n\t// extract storage from config if possible\n\tstorageCfg, err := determineStorage(exportStorageCmdConfigFlag, \"\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// load specified storage or fallback to default\n\tvar stor certmagic.Storage\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\tif storageCfg != nil && storageCfg.StorageRaw != nil {\n\t\tval, err := ctx.LoadModule(storageCfg, \"StorageRaw\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t\tstor, err = val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t} else {\n\t\tstor = caddy.DefaultStorage\n\t}\n\n\t// enumerate all keys\n\tkeys, err := stor.List(ctx, \"\", true)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// setup output\n\tvar f *os.File\n\tif exportStorageCmdOutputFlag == \"-\" {\n\t\tf = os.Stdout\n\t} else {\n\t\tf, err = os.Create(exportStorageCmdOutputFlag)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"opening output file: %v\", err)\n\t\t}\n\t\tdefer f.Close()\n\t}\n\n\t// `IsTerminal: true` keys hold the values we\n\t// care about, write them out\n\ttw := tar.NewWriter(f)\n\tfor _, k := range keys {\n\t\tinfo, err := stor.Stat(ctx, k)\n\t\tif err != nil {\n\t\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tcaddy.Log().Warn(fmt.Sprintf(\"key: %s removed while export is in-progress\", k))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn caddy.ExitCodeFailedQuit, err\n\t\t}\n\n\t\tif 
info.IsTerminal {\n\t\t\tv, err := stor.Load(ctx, k)\n\t\t\tif err != nil {\n\t\t\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\t\t\tcaddy.Log().Warn(fmt.Sprintf(\"key: %s removed while export is in-progress\", k))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\treturn caddy.ExitCodeFailedQuit, err\n\t\t\t}\n\n\t\t\thdr := &tar.Header{\n\t\t\t\tName:    k,\n\t\t\t\tMode:    0o600,\n\t\t\t\tSize:    int64(len(v)),\n\t\t\t\tModTime: info.Modified,\n\t\t\t}\n\n\t\t\tif err = tw.WriteHeader(hdr); err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"writing archive: %v\", err)\n\t\t\t}\n\t\t\tif _, err = tw.Write(v); err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"writing archive: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\tif err = tw.Close(); err != nil {\n\t\treturn caddy.ExitCodeFailedQuit, fmt.Errorf(\"writing archive: %v\", err)\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n"
  },
  {
    "path": "cmd/x509rootsfallback.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddycmd\n\nimport (\n\t// For running in minimal environments, this can ease\n\t// headaches related to establishing TLS connections.\n\t// \"Package fallback embeds a set of fallback X.509 trusted\n\t// roots in the application by automatically invoking\n\t// x509.SetFallbackRoots. This allows the application to\n\t// work correctly even if the operating system does not\n\t// provide a verifier or system roots pool. ... It's\n\t// recommended that only binaries, and not libraries,\n\t// import this package. This package must be kept up to\n\t// date for security and compatibility reasons.\"\n\t//\n\t// This is in its own file only because of conflicts\n\t// between gci and goimports when in main.go.\n\t// See https://github.com/daixiang0/gci/issues/76\n\t_ \"golang.org/x/crypto/x509roots/fallback\"\n)\n"
  },
  {
    "path": "context.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\t\"log/slog\"\n\t\"reflect\"\n\t\"sync\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/prometheus/client_golang/prometheus\"\n\t\"github.com/prometheus/client_golang/prometheus/collectors\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/exp/zapslog\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2/internal/filesystems\"\n)\n\n// Context is a type which defines the lifetime of modules that\n// are loaded and provides access to the parent configuration\n// that spawned the modules which are loaded. It should be used\n// with care and wrapped with derivation functions from the\n// standard context package only if you don't need the Caddy\n// specific features. 
These contexts are canceled when the\n// lifetime of the modules loaded from it is over.\n//\n// Use NewContext() to get a valid value (but most modules will\n// not actually need to do this).\ntype Context struct {\n\tcontext.Context\n\n\tmoduleInstances map[string][]Module\n\tcfg             *Config\n\tancestry        []Module\n\tcleanupFuncs    []func()                // invoked at every config unload\n\texitFuncs       []func(context.Context) // invoked at config unload ONLY IF the process is exiting (EXPERIMENTAL)\n\tmetricsRegistry *prometheus.Registry\n}\n\n// NewContext provides a new context derived from the given\n// context ctx. Normally, you will not need to call this\n// function unless you are loading modules which have a\n// different lifespan than the ones for the context the\n// module was provisioned with. Be sure to call the cancel\n// func when the context is to be cleaned up so that\n// modules which are loaded will be properly unloaded.\n// See standard library context package's documentation.\nfunc NewContext(ctx Context) (Context, context.CancelFunc) {\n\tnewCtx, cancelCause := NewContextWithCause(ctx)\n\treturn newCtx, func() { cancelCause(nil) }\n}\n\n// NewContextWithCause is like NewContext but returns a context.CancelCauseFunc.\n// EXPERIMENTAL: This API is subject to change.\nfunc NewContextWithCause(ctx Context) (Context, context.CancelCauseFunc) {\n\tnewCtx := Context{moduleInstances: make(map[string][]Module), cfg: ctx.cfg, metricsRegistry: prometheus.NewPedanticRegistry()}\n\tc, cancel := context.WithCancelCause(ctx.Context)\n\twrappedCancel := func(cause error) {\n\t\tcancel(cause)\n\n\t\tfor _, f := range ctx.cleanupFuncs {\n\t\t\tf()\n\t\t}\n\n\t\tfor modName, modInstances := range newCtx.moduleInstances {\n\t\t\tfor _, inst := range modInstances {\n\t\t\t\tif cu, ok := inst.(CleanerUpper); ok {\n\t\t\t\t\terr := cu.Cleanup()\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tlog.Printf(\"[ERROR] %s (%p): cleanup: %v\", modName, inst, 
err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tnewCtx.Context = c\n\tnewCtx.initMetrics()\n\treturn newCtx, wrappedCancel\n}\n\n// OnCancel executes f when ctx is canceled.\nfunc (ctx *Context) OnCancel(f func()) {\n\tctx.cleanupFuncs = append(ctx.cleanupFuncs, f)\n}\n\n// FileSystems returns a ref to the FilesystemMap.\n// EXPERIMENTAL: This API is subject to change.\nfunc (ctx *Context) FileSystems() FileSystems {\n\t// if no config is loaded, we use a default filesystemmap, which includes the osfs\n\tif ctx.cfg == nil {\n\t\treturn &filesystems.FileSystemMap{}\n\t}\n\treturn ctx.cfg.fileSystems\n}\n\n// Returns the active metrics registry for the context\n// EXPERIMENTAL: This API is subject to change.\nfunc (ctx *Context) GetMetricsRegistry() *prometheus.Registry {\n\treturn ctx.metricsRegistry\n}\n\nfunc (ctx *Context) initMetrics() {\n\tctx.metricsRegistry.MustRegister(\n\t\tcollectors.NewBuildInfoCollector(),\n\t\tcollectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),\n\t\tcollectors.NewGoCollector(),\n\t\tadminMetrics.requestCount,\n\t\tadminMetrics.requestErrors,\n\t\tglobalMetrics.configSuccess,\n\t\tglobalMetrics.configSuccessTime,\n\t)\n}\n\n// OnExit executes f when the process exits gracefully.\n// The function is only executed if the process is gracefully\n// shut down while this context is active.\n//\n// EXPERIMENTAL API: subject to change or removal.\nfunc (ctx *Context) OnExit(f func(context.Context)) {\n\tctx.exitFuncs = append(ctx.exitFuncs, f)\n}\n\n// LoadModule loads the Caddy module(s) from the specified field of the parent struct\n// pointer and returns the loaded module(s). 
The struct pointer and its field name as\n// a string are necessary so that reflection can be used to read the struct tag on the\n// field to get the module namespace and inline module name key (if specified).\n//\n// The field can be any one of the supported raw module types: json.RawMessage,\n// []json.RawMessage, map[string]json.RawMessage, or []map[string]json.RawMessage.\n// ModuleMap may be used in place of map[string]json.RawMessage. The return value's\n// underlying type mirrors the input field's type:\n//\n//\tjson.RawMessage              => any\n//\t[]json.RawMessage            => []any\n//\t[][]json.RawMessage          => [][]any\n//\tmap[string]json.RawMessage   => map[string]any\n//\t[]map[string]json.RawMessage => []map[string]any\n//\n// The field must have a \"caddy\" struct tag in this format:\n//\n//\tcaddy:\"key1=val1 key2=val2\"\n//\n// To load modules, a \"namespace\" key is required. For example, to load modules\n// in the \"http.handlers\" namespace, you'd put: `namespace=http.handlers` in the\n// Caddy struct tag.\n//\n// The module name must also be available. If the field type is a map or slice of maps,\n// then key is assumed to be the module name if an \"inline_key\" is NOT specified in the\n// caddy struct tag. In this case, the module name does NOT need to be specified in-line\n// with the module itself.\n//\n// If not a map, or if inline_key is non-empty, then the module name must be embedded\n// into the values, which must be objects; then there must be a key in those objects\n// where its associated value is the module name. This is called the \"inline key\",\n// meaning the key containing the module's name that is defined inline with the module\n// itself. 
You must specify the inline key in a struct tag, along with the namespace:\n//\n//\tcaddy:\"namespace=http.handlers inline_key=handler\"\n//\n// This will look for a key/value pair like `\"handler\": \"...\"` in the json.RawMessage\n// in order to know the module name.\n//\n// To make use of the loaded module(s) (the return value), you will probably want\n// to type-assert each 'any' value(s) to the types that are useful to you\n// and store them on the same struct. Storing them on the same struct makes for\n// easy garbage collection when your host module is no longer needed.\n//\n// Loaded modules have already been provisioned and validated. Upon returning\n// successfully, this method clears the json.RawMessage(s) in the field since\n// the raw JSON is no longer needed, and this allows the GC to free up memory.\nfunc (ctx Context) LoadModule(structPointer any, fieldName string) (any, error) {\n\tval := reflect.ValueOf(structPointer).Elem().FieldByName(fieldName)\n\ttyp := val.Type()\n\n\tfield, ok := reflect.TypeOf(structPointer).Elem().FieldByName(fieldName)\n\tif !ok {\n\t\tpanic(fmt.Sprintf(\"field %s does not exist in %#v\", fieldName, structPointer))\n\t}\n\n\topts, err := ParseStructTag(field.Tag.Get(\"caddy\"))\n\tif err != nil {\n\t\tpanic(fmt.Sprintf(\"malformed tag on field %s: %v\", fieldName, err))\n\t}\n\n\tmoduleNamespace, ok := opts[\"namespace\"]\n\tif !ok {\n\t\tpanic(fmt.Sprintf(\"missing 'namespace' key in struct tag on field %s\", fieldName))\n\t}\n\tinlineModuleKey := opts[\"inline_key\"]\n\n\tvar result any\n\n\tswitch val.Kind() {\n\tcase reflect.Slice:\n\t\tif isJSONRawMessage(typ) {\n\t\t\t// val is `json.RawMessage` ([]uint8 under the hood)\n\n\t\t\tif inlineModuleKey == \"\" {\n\t\t\t\tpanic(\"unable to determine module name without inline_key when type is not a ModuleMap\")\n\t\t\t}\n\t\t\tval, err := ctx.loadModuleInline(inlineModuleKey, moduleNamespace, val.Interface().(json.RawMessage))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, 
err\n\t\t\t}\n\t\t\tresult = val\n\t\t} else if isJSONRawMessage(typ.Elem()) {\n\t\t\t// val is `[]json.RawMessage`\n\n\t\t\tif inlineModuleKey == \"\" {\n\t\t\t\tpanic(\"unable to determine module name without inline_key because type is not a ModuleMap\")\n\t\t\t}\n\t\t\tvar all []any\n\t\t\tfor i := 0; i < val.Len(); i++ {\n\t\t\t\tval, err := ctx.loadModuleInline(inlineModuleKey, moduleNamespace, val.Index(i).Interface().(json.RawMessage))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"position %d: %v\", i, err)\n\t\t\t\t}\n\t\t\t\tall = append(all, val)\n\t\t\t}\n\t\t\tresult = all\n\t\t} else if typ.Elem().Kind() == reflect.Slice && isJSONRawMessage(typ.Elem().Elem()) {\n\t\t\t// val is `[][]json.RawMessage`\n\n\t\t\tif inlineModuleKey == \"\" {\n\t\t\t\tpanic(\"unable to determine module name without inline_key because type is not a ModuleMap\")\n\t\t\t}\n\t\t\tvar all [][]any\n\t\t\tfor i := 0; i < val.Len(); i++ {\n\t\t\t\tinnerVal := val.Index(i)\n\t\t\t\tvar allInner []any\n\t\t\t\tfor j := 0; j < innerVal.Len(); j++ {\n\t\t\t\t\tinnerInnerVal, err := ctx.loadModuleInline(inlineModuleKey, moduleNamespace, innerVal.Index(j).Interface().(json.RawMessage))\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, fmt.Errorf(\"position %d: %v\", j, err)\n\t\t\t\t\t}\n\t\t\t\t\tallInner = append(allInner, innerInnerVal)\n\t\t\t\t}\n\t\t\t\tall = append(all, allInner)\n\t\t\t}\n\t\t\tresult = all\n\t\t} else if isModuleMapType(typ.Elem()) {\n\t\t\t// val is `[]map[string]json.RawMessage`\n\n\t\t\tvar all []map[string]any\n\t\t\tfor i := 0; i < val.Len(); i++ {\n\t\t\t\tthisSet, err := ctx.loadModulesFromSomeMap(moduleNamespace, inlineModuleKey, val.Index(i))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tall = append(all, thisSet)\n\t\t\t}\n\t\t\tresult = all\n\t\t}\n\n\tcase reflect.Map:\n\t\t// val is a ModuleMap or some other kind of map\n\t\tresult, err = ctx.loadModulesFromSomeMap(moduleNamespace, inlineModuleKey, 
val)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unrecognized type for module: %s\", typ)\n\t}\n\n\t// we're done with the raw bytes; allow GC to deallocate\n\tval.Set(reflect.Zero(typ))\n\n\treturn result, nil\n}\n\n// emitEvent is a small convenience method so the caddy core can emit events, if the event app is configured.\nfunc (ctx Context) emitEvent(name string, data map[string]any) Event {\n\tif ctx.cfg == nil || ctx.cfg.eventEmitter == nil {\n\t\treturn Event{}\n\t}\n\treturn ctx.cfg.eventEmitter.Emit(ctx, name, data)\n}\n\n// loadModulesFromSomeMap loads modules from val, which must be a type of map[string]any.\n// Depending on inlineModuleKey, it will be interpreted as either a ModuleMap (key is the module\n// name) or as a regular map (key is not the module name, and module name is defined inline).\nfunc (ctx Context) loadModulesFromSomeMap(namespace, inlineModuleKey string, val reflect.Value) (map[string]any, error) {\n\t// if no inline_key is specified, then val must be a ModuleMap,\n\t// where the key is the module name\n\tif inlineModuleKey == \"\" {\n\t\tif !isModuleMapType(val.Type()) {\n\t\t\tpanic(fmt.Sprintf(\"expected ModuleMap because inline_key is empty; but we do not recognize this type: %s\", val.Type()))\n\t\t}\n\t\treturn ctx.loadModuleMap(namespace, val)\n\t}\n\n\t// otherwise, val is a map with modules, but the module name is\n\t// inline with each value (the key means something else)\n\treturn ctx.loadModulesFromRegularMap(namespace, inlineModuleKey, val)\n}\n\n// loadModulesFromRegularMap loads modules from val, where val is a map[string]json.RawMessage.\n// Map keys are NOT interpreted as module names, so module names are still expected to appear\n// inline with the objects.\nfunc (ctx Context) loadModulesFromRegularMap(namespace, inlineModuleKey string, val reflect.Value) (map[string]any, error) {\n\tmods := make(map[string]any)\n\titer := val.MapRange()\n\tfor iter.Next() {\n\t\tk 
:= iter.Key()\n\t\tv := iter.Value()\n\t\tmod, err := ctx.loadModuleInline(inlineModuleKey, namespace, v.Interface().(json.RawMessage))\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"key %s: %v\", k, err)\n\t\t}\n\t\tmods[k.String()] = mod\n\t}\n\treturn mods, nil\n}\n\n// loadModuleMap loads modules from a ModuleMap, i.e. map[string]any, where the key is the\n// module name. With a module map, module names do not need to be defined inline with their values.\nfunc (ctx Context) loadModuleMap(namespace string, val reflect.Value) (map[string]any, error) {\n\tall := make(map[string]any)\n\titer := val.MapRange()\n\tfor iter.Next() {\n\t\tk := iter.Key().Interface().(string)\n\t\tv := iter.Value().Interface().(json.RawMessage)\n\t\tmoduleName := namespace + \".\" + k\n\t\tif namespace == \"\" {\n\t\t\tmoduleName = k\n\t\t}\n\t\tval, err := ctx.LoadModuleByID(moduleName, v)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"module name '%s': %v\", k, err)\n\t\t}\n\t\tall[k] = val\n\t}\n\treturn all, nil\n}\n\n// LoadModuleByID decodes rawMsg into a new instance of mod and\n// returns the value. If mod.New is nil, an error is returned.\n// If the module implements Validator or Provisioner interfaces,\n// those methods are invoked to ensure the module is fully\n// configured and valid before being used.\n//\n// This is a lower-level method and will usually not be called\n// directly by most modules. 
However, this method is useful when\n// dynamically loading/unloading modules in their own context,\n// like from embedded scripts, etc.\nfunc (ctx Context) LoadModuleByID(id string, rawMsg json.RawMessage) (any, error) {\n\tmodulesMu.RLock()\n\tmodInfo, ok := modules[id]\n\tmodulesMu.RUnlock()\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"unknown module: %s\", id)\n\t}\n\n\tif modInfo.New == nil {\n\t\treturn nil, fmt.Errorf(\"module '%s' has no constructor\", modInfo.ID)\n\t}\n\n\tval := modInfo.New()\n\n\t// value must be a pointer for unmarshaling into concrete type, even if\n\t// the module's concrete type is a slice or map; New() *should* return\n\t// a pointer, otherwise unmarshaling errors or panics will occur\n\tif rv := reflect.ValueOf(val); rv.Kind() != reflect.Ptr {\n\t\tlog.Printf(\"[WARNING] ModuleInfo.New() for module '%s' did not return a pointer,\"+\n\t\t\t\" so we are using reflection to make a pointer instead; please fix this by\"+\n\t\t\t\" using new(Type) or &Type notation in your module's New() function.\", id)\n\t\tval = reflect.New(rv.Type()).Elem().Addr().Interface().(Module)\n\t}\n\n\t// fill in its config only if there is a config to fill in\n\tif len(rawMsg) > 0 {\n\t\terr := StrictUnmarshalJSON(rawMsg, &val)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"decoding module config: %s: %v\", modInfo, err)\n\t\t}\n\t}\n\n\tif val == nil {\n\t\t// returned module values are almost always type-asserted\n\t\t// before being used, so a nil value would panic; and there\n\t\t// is no good reason to explicitly declare null modules in\n\t\t// a config; it might be because the user is trying to achieve\n\t\t// a result the developer isn't expecting, which is a smell\n\t\treturn nil, fmt.Errorf(\"module value cannot be null\")\n\t}\n\n\tvar err error\n\n\t// if this is an app module, keep a reference to it,\n\t// since submodules may need to reference it during\n\t// provisioning (even though the parent app module\n\t// may not be fully provisioned 
yet; this is the case\n\t// with the tls app's automation policies, which may\n\t// refer to the tls app to check if a global DNS\n\t// module has been configured for DNS challenges)\n\tif appModule, ok := val.(App); ok {\n\t\tctx.cfg.apps[id] = appModule\n\t\tdefer func() {\n\t\t\tif err != nil {\n\t\t\t\tctx.cfg.failedApps[id] = err\n\t\t\t}\n\t\t}()\n\t}\n\n\tctx.ancestry = append(ctx.ancestry, val)\n\n\tif prov, ok := val.(Provisioner); ok {\n\t\terr = prov.Provision(ctx)\n\t\tif err != nil {\n\t\t\t// incomplete provisioning could have left state\n\t\t\t// dangling, so make sure it gets cleaned up\n\t\t\tif cleanerUpper, ok := val.(CleanerUpper); ok {\n\t\t\t\terr2 := cleanerUpper.Cleanup()\n\t\t\t\tif err2 != nil {\n\t\t\t\t\terr = fmt.Errorf(\"%v; additionally, cleanup: %v\", err, err2)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil, fmt.Errorf(\"provision %s: %v\", modInfo, err)\n\t\t}\n\t}\n\n\tif validator, ok := val.(Validator); ok {\n\t\terr = validator.Validate()\n\t\tif err != nil {\n\t\t\t// since the module was already provisioned, make sure we clean up\n\t\t\tif cleanerUpper, ok := val.(CleanerUpper); ok {\n\t\t\t\terr2 := cleanerUpper.Cleanup()\n\t\t\t\tif err2 != nil {\n\t\t\t\t\terr = fmt.Errorf(\"%v; additionally, cleanup: %v\", err, err2)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil, fmt.Errorf(\"%s: invalid configuration: %v\", modInfo, err)\n\t\t}\n\t}\n\n\tctx.moduleInstances[id] = append(ctx.moduleInstances[id], val)\n\n\t// if the loaded module happens to be an app that can emit events, store it so the\n\t// core can have access to emit events without an import cycle\n\tif ee, ok := val.(eventEmitter); ok {\n\t\tif _, ok := ee.(App); ok {\n\t\t\tctx.cfg.eventEmitter = ee\n\t\t}\n\t}\n\n\treturn val, nil\n}\n\n// loadModuleInline loads a module from a JSON raw message which decodes to\n// a map[string]any, where one of the object keys is moduleNameKey\n// and the corresponding value is the module name (as a string) which can\n// be found in the given 
scope. In other words, the module name is declared\n// in-line with the module itself.\n//\n// This allows modules to be decoded into their concrete types and used when\n// their names cannot be the unique key in a map, such as when there are\n// multiple instances in the map or it appears in an array (where there are\n// no custom keys). In other words, the key containing the module name is\n// treated special/separate from all the other keys in the object.\nfunc (ctx Context) loadModuleInline(moduleNameKey, moduleScope string, raw json.RawMessage) (any, error) {\n\tmoduleName, raw, err := getModuleNameInline(moduleNameKey, raw)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tval, err := ctx.LoadModuleByID(moduleScope+\".\"+moduleName, raw)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"loading module '%s': %v\", moduleName, err)\n\t}\n\n\treturn val, nil\n}\n\n// App returns the configured app named name. If that app has\n// not yet been loaded and provisioned, it will be immediately\n// loaded and provisioned. If no app with that name is\n// configured, a new empty one will be instantiated instead.\n// (The app module must still be registered.) This must not be\n// called during the Provision/Validate phase to reference a\n// module's own host app (since the parent app module is still\n// in the process of being provisioned, it is not yet ready).\n//\n// We return any type instead of the App type because it is NOT\n// intended for the caller of this method to be the one to start\n// or stop App modules. 
The caller is expected to assert to the\n// concrete type.\nfunc (ctx Context) App(name string) (any, error) {\n\t// if the app failed to load before, return the cached error\n\tif err, ok := ctx.cfg.failedApps[name]; ok {\n\t\treturn nil, fmt.Errorf(\"loading %s app module: %v\", name, err)\n\t}\n\tif app, ok := ctx.cfg.apps[name]; ok {\n\t\treturn app, nil\n\t}\n\tappRaw := ctx.cfg.AppsRaw[name]\n\tmodVal, err := ctx.LoadModuleByID(name, appRaw)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"loading %s app module: %v\", name, err)\n\t}\n\tif appRaw != nil {\n\t\tctx.cfg.AppsRaw[name] = nil // allow GC to deallocate\n\t}\n\treturn modVal, nil\n}\n\n// AppIfConfigured is like App, but it returns an error if the\n// app has not been configured. This is useful when the app is\n// required and its absence is a configuration error; or when\n// the app is optional and you don't want to instantiate a\n// new one that hasn't been explicitly configured. If the app\n// is not in the configuration, the error wraps ErrNotConfigured.\nfunc (ctx Context) AppIfConfigured(name string) (any, error) {\n\tif ctx.cfg == nil {\n\t\treturn nil, fmt.Errorf(\"app module %s: %w\", name, ErrNotConfigured)\n\t}\n\t// if the app failed to load before, return the cached error\n\tif err, ok := ctx.cfg.failedApps[name]; ok {\n\t\treturn nil, fmt.Errorf(\"loading %s app module: %v\", name, err)\n\t}\n\tif app, ok := ctx.cfg.apps[name]; ok {\n\t\treturn app, nil\n\t}\n\tappRaw := ctx.cfg.AppsRaw[name]\n\tif appRaw == nil {\n\t\treturn nil, fmt.Errorf(\"app module %s: %w\", name, ErrNotConfigured)\n\t}\n\treturn ctx.App(name)\n}\n\n// ErrNotConfigured indicates a module is not configured.\nvar ErrNotConfigured = fmt.Errorf(\"module not configured\")\n\n// Storage returns the configured Caddy storage implementation.\nfunc (ctx Context) Storage() certmagic.Storage {\n\treturn ctx.cfg.storage\n}\n\n// Logger returns a logger that is intended for use by the most\n// recent module associated with the 
context. Callers should not\n// pass in any arguments unless they want to associate with a\n// different module; it panics if more than 1 value is passed in.\n//\n// Originally, this method's signature was `Logger(mod Module)`,\n// requiring that an instance of a Caddy module be passed in.\n// However, that is no longer necessary, as the closest module\n// most recently associated with the context will be automatically\n// assumed. To prevent a sudden breaking change, this method's\n// signature has been changed to be variadic, but we may remove\n// the parameter altogether in the future. Callers should not\n// pass in any argument. If there is valid need to specify a\n// different module, please open an issue to discuss.\n//\n// PARTIALLY DEPRECATED: The Logger(module) form is deprecated and\n// may be removed in the future. Do not pass in any arguments.\nfunc (ctx Context) Logger(module ...Module) *zap.Logger {\n\tif len(module) > 1 {\n\t\tpanic(\"more than 1 module passed in\")\n\t}\n\tif ctx.cfg == nil {\n\t\t// often the case in tests; just use a dev logger\n\t\tl, err := zap.NewDevelopment()\n\t\tif err != nil {\n\t\t\tpanic(\"config missing, unable to create dev logger: \" + err.Error())\n\t\t}\n\t\treturn l\n\t}\n\tmod := ctx.Module()\n\tif len(module) > 0 {\n\t\tmod = module[0]\n\t}\n\tif mod == nil {\n\t\treturn Log()\n\t}\n\treturn ctx.cfg.Logging.Logger(mod)\n}\n\ntype slogHandlerFactory func(handler slog.Handler, core zapcore.Core, moduleID string) slog.Handler\n\nvar (\n\tslogHandlerFactories   []slogHandlerFactory\n\tslogHandlerFactoriesMu sync.RWMutex\n)\n\n// RegisterSlogHandlerFactory allows modules to register custom log/slog.Handler,\n// for instance, to add contextual data to the logs.\nfunc RegisterSlogHandlerFactory(factory slogHandlerFactory) {\n\tslogHandlerFactoriesMu.Lock()\n\tslogHandlerFactories = append(slogHandlerFactories, factory)\n\tslogHandlerFactoriesMu.Unlock()\n}\n\n// Slogger returns a slog logger that is intended for use 
by\n// the most recent module associated with the context.\nfunc (ctx Context) Slogger() *slog.Logger {\n\tvar (\n\t\thandler  slog.Handler\n\t\tcore     zapcore.Core\n\t\tmoduleID string\n\t)\n\n\t// the default enables traces at ERROR level, this disables\n\t// them by setting it to a level higher than any other level\n\ttracesOpt := zapslog.AddStacktraceAt(slog.Level(127))\n\n\tif ctx.cfg == nil {\n\t\t// often the case in tests; just use a dev logger\n\t\tl, err := zap.NewDevelopment()\n\t\tif err != nil {\n\t\t\tpanic(\"config missing, unable to create dev logger: \" + err.Error())\n\t\t}\n\n\t\tcore = l.Core()\n\t\thandler = zapslog.NewHandler(core, tracesOpt)\n\t} else {\n\t\tmod := ctx.Module()\n\t\tif mod == nil {\n\t\t\tcore = Log().Core()\n\t\t\thandler = zapslog.NewHandler(core, tracesOpt)\n\t\t} else {\n\t\t\tmoduleID = string(mod.CaddyModule().ID)\n\t\t\tcore = ctx.cfg.Logging.Logger(mod).Core()\n\t\t\thandler = zapslog.NewHandler(core, zapslog.WithName(moduleID), tracesOpt)\n\t\t}\n\t}\n\n\tslogHandlerFactoriesMu.RLock()\n\tfor _, f := range slogHandlerFactories {\n\t\thandler = f(handler, core, moduleID)\n\t}\n\tslogHandlerFactoriesMu.RUnlock()\n\n\treturn slog.New(handler)\n}\n\n// Modules returns the lineage of modules that this context provisioned,\n// with the most recent/current module being last in the list.\nfunc (ctx Context) Modules() []Module {\n\tmods := make([]Module, len(ctx.ancestry))\n\tcopy(mods, ctx.ancestry)\n\treturn mods\n}\n\n// Module returns the current module, or the most recent one\n// provisioned by the context.\nfunc (ctx Context) Module() Module {\n\tif len(ctx.ancestry) == 0 {\n\t\treturn nil\n\t}\n\treturn ctx.ancestry[len(ctx.ancestry)-1]\n}\n\n// WithValue returns a new context with the given key-value pair.\nfunc (ctx *Context) WithValue(key, value any) Context {\n\treturn Context{\n\t\tContext:         context.WithValue(ctx.Context, key, value),\n\t\tmoduleInstances: ctx.moduleInstances,\n\t\tcfg:             
ctx.cfg,\n\t\tancestry:        ctx.ancestry,\n\t\tcleanupFuncs:    ctx.cleanupFuncs,\n\t\texitFuncs:       ctx.exitFuncs,\n\t}\n}\n\n// eventEmitter is a small interface that inverts dependencies for\n// the caddyevents package, so the core can emit events without an\n// import cycle (i.e. the caddy package doesn't have to import\n// the caddyevents package, which imports the caddy package).\ntype eventEmitter interface {\n\tEmit(ctx Context, eventName string, data map[string]any) Event\n}\n"
  },
  {
    "path": "context_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"encoding/json\"\n\t\"io\"\n)\n\nfunc ExampleContext_LoadModule() {\n\t// this whole first part is just setting up for the example;\n\t// note the struct tags - very important; we specify inline_key\n\t// because that is the only way to know the module name\n\tvar ctx Context\n\tmyStruct := &struct {\n\t\t// This godoc comment will appear in module documentation.\n\t\tGuestModuleRaw json.RawMessage `json:\"guest_module,omitempty\" caddy:\"namespace=example inline_key=name\"`\n\n\t\t// this is where the decoded module will be stored; in this\n\t\t// example, we pretend we need an io.Writer but it can be\n\t\t// any interface type that is useful to you\n\t\tguestModule io.Writer\n\t}{\n\t\tGuestModuleRaw: json.RawMessage(`{\"name\":\"module_name\",\"foo\":\"bar\"}`),\n\t}\n\n\t// if a guest module is provided, we can load it easily\n\tif myStruct.GuestModuleRaw != nil {\n\t\tmod, err := ctx.LoadModule(myStruct, \"GuestModuleRaw\")\n\t\tif err != nil {\n\t\t\t// you'd want to actually handle the error here\n\t\t\t// return fmt.Errorf(\"loading guest module: %v\", err)\n\t\t}\n\t\t// mod contains the loaded and provisioned module,\n\t\t// it is now ready for us to use\n\t\tmyStruct.guestModule = mod.(io.Writer)\n\t}\n\n\t// use myStruct.guestModule from now on\n}\n\nfunc ExampleContext_LoadModule_array() 
{\n\t// this whole first part is just setting up for the example;\n\t// note the struct tags - very important; we specify inline_key\n\t// because that is the only way to know the module name\n\tvar ctx Context\n\tmyStruct := &struct {\n\t\t// This godoc comment will appear in module documentation.\n\t\tGuestModulesRaw []json.RawMessage `json:\"guest_modules,omitempty\" caddy:\"namespace=example inline_key=name\"`\n\n\t\t// this is where the decoded module will be stored; in this\n\t\t// example, we pretend we need an io.Writer but it can be\n\t\t// any interface type that is useful to you\n\t\tguestModules []io.Writer\n\t}{\n\t\tGuestModulesRaw: []json.RawMessage{\n\t\t\tjson.RawMessage(`{\"name\":\"module1_name\",\"foo\":\"bar1\"}`),\n\t\t\tjson.RawMessage(`{\"name\":\"module2_name\",\"foo\":\"bar2\"}`),\n\t\t},\n\t}\n\n\t// since our input is []json.RawMessage, the output will be []any\n\tmods, err := ctx.LoadModule(myStruct, \"GuestModulesRaw\")\n\tif err != nil {\n\t\t// you'd want to actually handle the error here\n\t\t// return fmt.Errorf(\"loading guest modules: %v\", err)\n\t}\n\tfor _, mod := range mods.([]any) {\n\t\tmyStruct.guestModules = append(myStruct.guestModules, mod.(io.Writer))\n\t}\n\n\t// use myStruct.guestModules from now on\n}\n\nfunc ExampleContext_LoadModule_map() {\n\t// this whole first part is just setting up for the example;\n\t// note the struct tags - very important; we don't specify\n\t// inline_key because the map key is the module name\n\tvar ctx Context\n\tmyStruct := &struct {\n\t\t// This godoc comment will appear in module documentation.\n\t\tGuestModulesRaw ModuleMap `json:\"guest_modules,omitempty\" caddy:\"namespace=example\"`\n\n\t\t// this is where the decoded module will be stored; in this\n\t\t// example, we pretend we need an io.Writer but it can be\n\t\t// any interface type that is useful to you\n\t\tguestModules map[string]io.Writer\n\t}{\n\t\tGuestModulesRaw: ModuleMap{\n\t\t\t\"module1_name\": 
json.RawMessage(`{\"foo\":\"bar1\"}`),\n\t\t\t\"module2_name\": json.RawMessage(`{\"foo\":\"bar2\"}`),\n\t\t},\n\t}\n\n\t// since our input is map[string]json.RawMessage, the output will be map[string]any\n\tmods, err := ctx.LoadModule(myStruct, \"GuestModulesRaw\")\n\tif err != nil {\n\t\t// you'd want to actually handle the error here\n\t\t// return fmt.Errorf(\"loading guest modules: %v\", err)\n\t}\n\tfor modName, mod := range mods.(map[string]any) {\n\t\tmyStruct.guestModules[modName] = mod.(io.Writer)\n\t}\n\n\t// use myStruct.guestModules from now on\n}\n"
  },
  {
    "path": "duration_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage caddy\n\nfunc FuzzParseDuration(data []byte) int {\n\t_, err := ParseDuration(string(data))\n\tif err != nil {\n\t\treturn 0\n\t}\n\treturn 1\n}\n"
  },
  {
    "path": "filepath.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build !windows\n\npackage caddy\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// FastAbs is an optimized version of filepath.Abs for Unix systems,\n// since we don't expect the working directory to ever change once\n// Caddy is running. Avoid the os.Getwd() syscall overhead.\n// It's overall the same as stdlib's implementation, the difference\n// being cached working directory.\nfunc FastAbs(path string) (string, error) {\n\tif filepath.IsAbs(path) {\n\t\treturn filepath.Clean(path), nil\n\t}\n\tif wderr != nil {\n\t\treturn \"\", wderr\n\t}\n\treturn filepath.Join(wd, path), nil\n}\n\nvar wd, wderr = os.Getwd()\n"
  },
  {
    "path": "filepath_windows.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"path/filepath\"\n)\n\n// FastAbs can't be optimized on Windows because there\n// are special file paths that require the use of syscall.FullPath\n// to handle correctly.\n// Just call stdlib's implementation which uses that function.\nfunc FastAbs(path string) (string, error) {\n\treturn filepath.Abs(path)\n}\n"
  },
  {
    "path": "filesystem.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport \"io/fs\"\n\ntype FileSystems interface {\n\tRegister(k string, v fs.FS)\n\tUnregister(k string)\n\tGet(k string) (v fs.FS, ok bool)\n\tDefault() fs.FS\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/caddyserver/caddy/v2\n\ngo 1.25.0\n\nrequire (\n\tgithub.com/BurntSushi/toml v1.6.0\n\tgithub.com/DeRuina/timberjack v1.3.9\n\tgithub.com/KimMachineGun/automemlimit v0.7.5\n\tgithub.com/Masterminds/sprig/v3 v3.3.0\n\tgithub.com/alecthomas/chroma/v2 v2.23.1\n\tgithub.com/aryann/difflib v0.0.0-20210328193216-ff5ff6dc229b\n\tgithub.com/caddyserver/certmagic v0.25.2\n\tgithub.com/caddyserver/zerossl v0.1.5\n\tgithub.com/cloudflare/circl v1.6.3\n\tgithub.com/dustin/go-humanize v1.0.1\n\tgithub.com/go-chi/chi/v5 v5.2.5\n\tgithub.com/google/cel-go v0.27.0\n\tgithub.com/google/uuid v1.6.0\n\tgithub.com/klauspost/compress v1.18.4\n\tgithub.com/klauspost/cpuid/v2 v2.3.0\n\tgithub.com/mholt/acmez/v3 v3.1.6\n\tgithub.com/prometheus/client_golang v1.23.2\n\tgithub.com/quic-go/quic-go v0.59.0\n\tgithub.com/smallstep/certificates v0.30.0-rc3\n\tgithub.com/smallstep/nosql v0.7.0\n\tgithub.com/smallstep/truststore v0.13.0\n\tgithub.com/spf13/cobra v1.10.2\n\tgithub.com/spf13/pflag v1.0.10\n\tgithub.com/stretchr/testify v1.11.1\n\tgithub.com/tailscale/tscert v0.0.0-20251216020129-aea342f6d747\n\tgithub.com/yuin/goldmark v1.7.16\n\tgithub.com/yuin/goldmark-highlighting/v2 v2.0.0-20230729083705-37449abec8cc\n\tgo.opentelemetry.io/contrib/exporters/autoexport v0.65.0\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0\n\tgo.opentelemetry.io/contrib/propagators/autoprop v0.65.0\n\tgo.opentelemetry.io/otel v1.40.0\n\tgo.opentelemetry.io/otel/sdk v1.40.0\n\tgo.step.sm/crypto v0.76.2\n\tgo.uber.org/automaxprocs v1.6.0\n\tgo.uber.org/zap v1.27.1\n\tgo.uber.org/zap/exp v0.3.0\n\tgolang.org/x/crypto v0.48.0\n\tgolang.org/x/crypto/x509roots/fallback v0.0.0-20260213171211-a408498e5541\n\tgolang.org/x/net v0.51.0\n\tgolang.org/x/sync v0.19.0\n\tgolang.org/x/term v0.40.0\n\tgolang.org/x/time v0.14.0\n\tgopkg.in/yaml.v3 v3.0.1\n)\n\nrequire (\n\tcel.dev/expr v0.25.1 // indirect\n\tcloud.google.com/go/auth v0.18.1 // 
indirect\n\tcloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect\n\tcloud.google.com/go/compute/metadata v0.9.0 // indirect\n\tdario.cat/mergo v1.0.2 // indirect\n\tfilippo.io/bigmod v0.1.0 // indirect\n\tgithub.com/antlr4-go/antlr/v4 v4.13.1 // indirect\n\tgithub.com/ccoveille/go-safecast/v2 v2.0.0 // indirect\n\tgithub.com/cenkalti/backoff/v5 v5.0.3 // indirect\n\tgithub.com/coreos/go-oidc/v3 v3.17.0 // indirect\n\tgithub.com/davecgh/go-spew v1.1.1 // indirect\n\tgithub.com/fxamacker/cbor/v2 v2.9.0 // indirect\n\tgithub.com/go-jose/go-jose/v3 v3.0.4 // indirect\n\tgithub.com/go-jose/go-jose/v4 v4.1.3 // indirect\n\tgithub.com/google/certificate-transparency-go v1.1.8-0.20240110162603-74a5dd331745 // indirect\n\tgithub.com/google/go-tpm v0.9.8 // indirect\n\tgithub.com/google/go-tspi v0.3.0 // indirect\n\tgithub.com/google/s2a-go v0.1.9 // indirect\n\tgithub.com/googleapis/enterprise-certificate-proxy v0.3.11 // indirect\n\tgithub.com/googleapis/gax-go/v2 v2.17.0 // indirect\n\tgithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7 // indirect\n\tgithub.com/jackc/pgx/v5 v5.6.0 // indirect\n\tgithub.com/jackc/puddle/v2 v2.2.1 // indirect\n\tgithub.com/kylelemons/godebug v1.1.0 // indirect\n\tgithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect\n\tgithub.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.0 // indirect\n\tgithub.com/prometheus/otlptranslator v1.0.0 // indirect\n\tgithub.com/quic-go/qpack v0.6.0 // indirect\n\tgithub.com/smallstep/cli-utils v0.12.2 // indirect\n\tgithub.com/smallstep/go-attestation v0.4.4-0.20241119153605-2306d5b464ca // indirect\n\tgithub.com/smallstep/linkedca v0.25.0 // indirect\n\tgithub.com/smallstep/pkcs7 v0.2.1 // indirect\n\tgithub.com/smallstep/scep v0.0.0-20250318231241-a25cabb69492 // indirect\n\tgithub.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 // indirect\n\tgithub.com/x448/float16 v0.8.4 // indirect\n\tgithub.com/zeebo/blake3 v0.2.4 
// indirect\n\tgo.opentelemetry.io/auto/sdk v1.2.1 // indirect\n\tgo.opentelemetry.io/contrib/bridges/prometheus v0.65.0 // indirect\n\tgo.opentelemetry.io/contrib/propagators/aws v1.40.0 // indirect\n\tgo.opentelemetry.io/contrib/propagators/b3 v1.40.0 // indirect\n\tgo.opentelemetry.io/contrib/propagators/jaeger v1.40.0 // indirect\n\tgo.opentelemetry.io/contrib/propagators/ot v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.16.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.16.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/prometheus v0.62.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.16.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/log v0.16.0 // indirect\n\tgo.opentelemetry.io/otel/sdk/log v0.16.0 // indirect\n\tgo.opentelemetry.io/otel/sdk/metric v1.40.0 // indirect\n\tgo.yaml.in/yaml/v2 v2.4.3 // indirect\n\tgo.yaml.in/yaml/v3 v3.0.4 // indirect\n\tgolang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect\n\tgolang.org/x/oauth2 v0.35.0 // indirect\n\tgoogle.golang.org/api v0.266.0 // indirect\n\tgoogle.golang.org/genproto/googleapis/api v0.0.0-20260128011058-8636f8732409 // indirect\n\tgoogle.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20 // indirect\n\tgoogle.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 // indirect\n)\n\nrequire (\n\tfilippo.io/edwards25519 v1.2.0 // indirect\n\tgithub.com/AndreasBriese/bbloom 
v0.0.0-20190825152654-46b345b51c96 // indirect\n\tgithub.com/Masterminds/goutils v1.1.1 // indirect\n\tgithub.com/Masterminds/semver/v3 v3.4.0 // indirect\n\tgithub.com/beorn7/perks v1.0.1 // indirect\n\tgithub.com/cespare/xxhash v1.1.0 // indirect\n\tgithub.com/cespare/xxhash/v2 v2.3.0\n\tgithub.com/chzyer/readline v1.5.1 // indirect\n\tgithub.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect\n\tgithub.com/dgraph-io/badger v1.6.2 // indirect\n\tgithub.com/dgraph-io/badger/v2 v2.2007.4 // indirect\n\tgithub.com/dgraph-io/ristretto v0.2.0 // indirect\n\tgithub.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 // indirect\n\tgithub.com/dlclark/regexp2 v1.11.5 // indirect\n\tgithub.com/felixge/httpsnoop v1.0.4 // indirect\n\tgithub.com/go-logr/logr v1.4.3 // indirect\n\tgithub.com/go-logr/stdr v1.2.2 // indirect\n\tgithub.com/go-sql-driver/mysql v1.8.1 // indirect\n\tgithub.com/golang/protobuf v1.5.4 // indirect\n\tgithub.com/golang/snappy v0.0.4 // indirect\n\tgithub.com/huandu/xstrings v1.5.0 // indirect\n\tgithub.com/inconshreveable/mousetrap v1.1.0 // indirect\n\tgithub.com/jackc/pgpassfile v1.0.0 // indirect\n\tgithub.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect\n\tgithub.com/libdns/libdns v1.1.1\n\tgithub.com/manifoldco/promptui v0.9.0 // indirect\n\tgithub.com/mattn/go-colorable v0.1.14 // indirect\n\tgithub.com/mattn/go-isatty v0.0.20 // indirect\n\tgithub.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect\n\tgithub.com/miekg/dns v1.1.72 // indirect\n\tgithub.com/mitchellh/copystructure v1.2.0 // indirect\n\tgithub.com/mitchellh/go-ps v1.0.0 // indirect\n\tgithub.com/mitchellh/reflectwalk v1.0.2 // indirect\n\tgithub.com/pires/go-proxyproto v0.11.0\n\tgithub.com/pkg/errors v0.9.1 // indirect\n\tgithub.com/prometheus/client_model v0.6.2\n\tgithub.com/prometheus/common v0.67.5 // indirect\n\tgithub.com/prometheus/procfs v0.19.2 // indirect\n\tgithub.com/rs/xid v1.6.0 // indirect\n\tgithub.com/russross/blackfriday/v2 v2.1.0 // 
indirect\n\tgithub.com/shopspring/decimal v1.4.0 // indirect\n\tgithub.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect\n\tgithub.com/sirupsen/logrus v1.9.4 // indirect\n\tgithub.com/slackhq/nebula v1.10.3 // indirect\n\tgithub.com/spf13/cast v1.7.0 // indirect\n\tgithub.com/urfave/cli v1.22.17 // indirect\n\tgo.etcd.io/bbolt v1.3.10 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/metric v1.40.0 // indirect\n\tgo.opentelemetry.io/otel/trace v1.40.0\n\tgo.opentelemetry.io/proto/otlp v1.9.0 // indirect\n\tgo.uber.org/multierr v1.11.0 // indirect\n\tgolang.org/x/mod v0.33.0 // indirect\n\tgolang.org/x/sys v0.41.0\n\tgolang.org/x/text v0.34.0\n\tgolang.org/x/tools v0.42.0 // indirect\n\tgoogle.golang.org/grpc v1.79.1 // indirect\n\tgoogle.golang.org/protobuf v1.36.11 // indirect\n\thowett.net/plist v1.0.0 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "cel.dev/expr v0.25.1 h1:1KrZg61W6TWSxuNZ37Xy49ps13NUovb66QLprthtwi4=\ncel.dev/expr v0.25.1/go.mod h1:hrXvqGP6G6gyx8UAHSHJ5RGk//1Oj5nXQ2NI02Nrsg4=\ncloud.google.com/go v0.123.0 h1:2NAUJwPR47q+E35uaJeYoNhuNEM9kM8SjgRgdeOJUSE=\ncloud.google.com/go v0.123.0/go.mod h1:xBoMV08QcqUGuPW65Qfm1o9Y4zKZBpGS+7bImXLTAZU=\ncloud.google.com/go/auth v0.18.1 h1:IwTEx92GFUo2pJ6Qea0EU3zYvKnTAeRCODxfA/G5UWs=\ncloud.google.com/go/auth v0.18.1/go.mod h1:GfTYoS9G3CWpRA3Va9doKN9mjPGRS+v41jmZAhBzbrA=\ncloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=\ncloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=\ncloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=\ncloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=\ncloud.google.com/go/iam v1.5.3 h1:+vMINPiDF2ognBJ97ABAYYwRgsaqxPbQDlMnbHMjolc=\ncloud.google.com/go/iam v1.5.3/go.mod h1:MR3v9oLkZCTlaqljW6Eb2d3HGDGK5/bDv93jhfISFvU=\ncloud.google.com/go/kms v1.25.0 h1:gVqvGGUmz0nYCmtoxWmdc1wli2L1apgP8U4fghPGSbQ=\ncloud.google.com/go/kms v1.25.0/go.mod h1:XIdHkzfj0bUO3E+LvwPg+oc7s58/Ns8Nd8Sdtljihbk=\ncloud.google.com/go/longrunning v0.8.0 h1:LiKK77J3bx5gDLi4SMViHixjD2ohlkwBi+mKA7EhfW8=\ncloud.google.com/go/longrunning v0.8.0/go.mod h1:UmErU2Onzi+fKDg2gR7dusz11Pe26aknR4kHmJJqIfk=\ncode.pfad.fr/check v1.1.0 h1:GWvjdzhSEgHvEHe2uJujDcpmZoySKuHQNrZMfzfO0bE=\ncode.pfad.fr/check v1.1.0/go.mod h1:NiUH13DtYsb7xp5wll0U4SXx7KhXQVCtRgdC96IPfoM=\ndario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=\ndario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=\nfilippo.io/bigmod v0.1.0 h1:UNzDk7y9ADKST+axd9skUpBQeW7fG2KrTZyOE4uGQy8=\nfilippo.io/bigmod v0.1.0/go.mod h1:OjOXDNlClLblvXdwgFFOQFJEocLhhtai8vGLy0JCZlI=\nfilippo.io/edwards25519 v1.2.0 h1:crnVqOiS4jqYleHd9vaKZ+HKtHfllngJIiOpNpoJsjo=\nfilippo.io/edwards25519 v1.2.0/go.mod 
h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=\ngithub.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 h1:cTp8I5+VIoKjsnZuH8vjyaysT/ses3EvZeaV/1UkF2M=\ngithub.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=\ngithub.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=\ngithub.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=\ngithub.com/BurntSushi/toml v1.6.0 h1:dRaEfpa2VI55EwlIW72hMRHdWouJeRF7TPYhI+AUQjk=\ngithub.com/BurntSushi/toml v1.6.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=\ngithub.com/DeRuina/timberjack v1.3.9 h1:6UXZ1I7ExPGTX/1UNYawR58LlOJUHKBPiYC7WQ91eBo=\ngithub.com/DeRuina/timberjack v1.3.9/go.mod h1:RLoeQrwrCGIEF8gO5nV5b/gMD0QIy7bzQhBUgpp1EqE=\ngithub.com/KimMachineGun/automemlimit v0.7.5 h1:RkbaC0MwhjL1ZuBKunGDjE/ggwAX43DwZrJqVwyveTk=\ngithub.com/KimMachineGun/automemlimit v0.7.5/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=\ngithub.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=\ngithub.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=\ngithub.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=\ngithub.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=\ngithub.com/Masterminds/sprig/v3 v3.3.0 h1:mQh0Yrg1XPo6vjYXgtf5OtijNAKJRNcTdOOGZe3tPhs=\ngithub.com/Masterminds/sprig/v3 v3.3.0/go.mod h1:Zy1iXRYNqNLUolqCpL4uhk6SHUMAOSCzdgBfDb35Lz0=\ngithub.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=\ngithub.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=\ngithub.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0=\ngithub.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k=\ngithub.com/alecthomas/chroma/v2 v2.2.0/go.mod 
h1:vf4zrexSH54oEjJ7EdB65tGNHmH3pGZmVkgTP5RHvAs=\ngithub.com/alecthomas/chroma/v2 v2.23.1 h1:nv2AVZdTyClGbVQkIzlDm/rnhk1E9bU9nXwmZ/Vk/iY=\ngithub.com/alecthomas/chroma/v2 v2.23.1/go.mod h1:NqVhfBR0lte5Ouh3DcthuUCTUpDC9cxBOfyMbMQPs3o=\ngithub.com/alecthomas/repr v0.0.0-20220113201626-b1b626ac65ae/go.mod h1:2kn6fqh/zIyPLmm3ugklbEi5hg5wS435eygvNfaDQL8=\ngithub.com/alecthomas/repr v0.5.2 h1:SU73FTI9D1P5UNtvseffFSGmdNci/O6RsqzeXJtP0Qs=\ngithub.com/alecthomas/repr v0.5.2/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=\ngithub.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=\ngithub.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=\ngithub.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=\ngithub.com/aryann/difflib v0.0.0-20210328193216-ff5ff6dc229b h1:uUXgbcPDK3KpW29o4iy7GtuappbWT0l5NaMo9H9pJDw=\ngithub.com/aryann/difflib v0.0.0-20210328193216-ff5ff6dc229b/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=\ngithub.com/aws/aws-sdk-go-v2 v1.41.1 h1:ABlyEARCDLN034NhxlRUSZr4l71mh+T5KAeGh6cerhU=\ngithub.com/aws/aws-sdk-go-v2 v1.41.1/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0=\ngithub.com/aws/aws-sdk-go-v2/config v1.32.7 h1:vxUyWGUwmkQ2g19n7JY/9YL8MfAIl7bTesIUykECXmY=\ngithub.com/aws/aws-sdk-go-v2/config v1.32.7/go.mod h1:2/Qm5vKUU/r7Y+zUk/Ptt2MDAEKAfUtKc1+3U1Mo3oY=\ngithub.com/aws/aws-sdk-go-v2/credentials v1.19.7 h1:tHK47VqqtJxOymRrNtUXN5SP/zUTvZKeLx4tH6PGQc8=\ngithub.com/aws/aws-sdk-go-v2/credentials v1.19.7/go.mod h1:qOZk8sPDrxhf+4Wf4oT2urYJrYt3RejHSzgAquYeppw=\ngithub.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17 h1:I0GyV8wiYrP8XpA70g1HBcQO1JlQxCMTW9npl5UbDHY=\ngithub.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17/go.mod h1:tyw7BOl5bBe/oqvoIeECFJjMdzXoa/dfVz3QQ5lgHGA=\ngithub.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17 
h1:xOLELNKGp2vsiteLsvLPwxC+mYmO6OZ8PYgiuPJzF8U=\ngithub.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17/go.mod h1:5M5CI3D12dNOtH3/mk6minaRwI2/37ifCURZISxA/IQ=\ngithub.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17 h1:WWLqlh79iO48yLkj1v3ISRNiv+3KdQoZ6JWyfcsyQik=\ngithub.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17/go.mod h1:EhG22vHRrvF8oXSTYStZhJc1aUgKtnJe+aOiFEV90cM=\ngithub.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=\ngithub.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=\ngithub.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 h1:0ryTNEdJbzUCEWkVXEXoqlXV72J5keC1GvILMOuD00E=\ngithub.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4/go.mod h1:HQ4qwNZh32C3CBeO6iJLQlgtMzqeG17ziAA/3KDJFow=\ngithub.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17 h1:RuNSMoozM8oXlgLG/n6WLaFGoea7/CddrCfIiSA+xdY=\ngithub.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17/go.mod h1:F2xxQ9TZz5gDWsclCtPQscGpP0VUOc8RqgFM3vDENmU=\ngithub.com/aws/aws-sdk-go-v2/service/kms v1.49.5 h1:DKibav4XF66XSeaXcrn9GlWGHos6D/vJ4r7jsK7z5CE=\ngithub.com/aws/aws-sdk-go-v2/service/kms v1.49.5/go.mod h1:1SdcmEGUEQE1mrU2sIgeHtcMSxHuybhPvuEPANzIDfI=\ngithub.com/aws/aws-sdk-go-v2/service/signin v1.0.5 h1:VrhDvQib/i0lxvr3zqlUwLwJP4fpmpyD9wYG1vfSu+Y=\ngithub.com/aws/aws-sdk-go-v2/service/signin v1.0.5/go.mod h1:k029+U8SY30/3/ras4G/Fnv/b88N4mAfliNn08Dem4M=\ngithub.com/aws/aws-sdk-go-v2/service/sso v1.30.9 h1:v6EiMvhEYBoHABfbGB4alOYmCIrcgyPPiBE1wZAEbqk=\ngithub.com/aws/aws-sdk-go-v2/service/sso v1.30.9/go.mod h1:yifAsgBxgJWn3ggx70A3urX2AN49Y5sJTD1UQFlfqBw=\ngithub.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13 h1:gd84Omyu9JLriJVCbGApcLzVR3XtmC4ZDPcAI6Ftvds=\ngithub.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13/go.mod h1:sTGThjphYE4Ohw8vJiRStAcu3rbjtXRsdNB0TvZ5wwo=\ngithub.com/aws/aws-sdk-go-v2/service/sts v1.41.6 
h1:5fFjR/ToSOzB2OQ/XqWpZBmNvmP/pJ1jOWYlFDJTjRQ=\ngithub.com/aws/aws-sdk-go-v2/service/sts v1.41.6/go.mod h1:qgFDZQSD/Kys7nJnVqYlWKnh0SSdMjAi0uSwON4wgYQ=\ngithub.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk=\ngithub.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=\ngithub.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=\ngithub.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=\ngithub.com/caddyserver/certmagic v0.25.2 h1:D7xcS7ggX/WEY54x0czj7ioTkmDWKIgxtIi2OcQclUc=\ngithub.com/caddyserver/certmagic v0.25.2/go.mod h1:llW/CvsNmza8S6hmsuggsZeiX+uS27dkqY27wDIuBWg=\ngithub.com/caddyserver/zerossl v0.1.5 h1:dkvOjBAEEtY6LIGAHei7sw2UgqSD6TrWweXpV7lvEvE=\ngithub.com/caddyserver/zerossl v0.1.5/go.mod h1:CxA0acn7oEGO6//4rtrRjYgEoa4MFw/XofZnrYwGqG4=\ngithub.com/ccoveille/go-safecast/v2 v2.0.0 h1:+5eyITXAUj3wMjad6cRVJKGnC7vDS55zk0INzJagub0=\ngithub.com/ccoveille/go-safecast/v2 v2.0.0/go.mod h1:JIYA4CAR33blIDuE6fSwCp2sz1oOBahXnvmdBhOAABs=\ngithub.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=\ngithub.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=\ngithub.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=\ngithub.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=\ngithub.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=\ngithub.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=\ngithub.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=\ngithub.com/chzyer/logex v1.2.1 h1:XHDu3E6q+gdHgsdTPH6ImJMIp436vR6MPtH8gP05QzM=\ngithub.com/chzyer/logex v1.2.1/go.mod h1:JLbx6lG2kDbNRFnfkgvh4eRJRPX1QCoOIWomwysCBrQ=\ngithub.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=\ngithub.com/chzyer/readline v1.5.1 
h1:upd/6fQk4src78LMRzh5vItIt361/o4uq553V8B5sGI=\ngithub.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk=\ngithub.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=\ngithub.com/chzyer/test v1.0.0 h1:p3BQDXSxOhOG0P9z6/hGnII4LGiEPOYBhs8asl/fC04=\ngithub.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=\ngithub.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=\ngithub.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=\ngithub.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=\ngithub.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=\ngithub.com/coreos/go-oidc/v3 v3.17.0 h1:hWBGaQfbi0iVviX4ibC7bk8OKT5qNr4klBaCHVNvehc=\ngithub.com/coreos/go-oidc/v3 v3.17.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=\ngithub.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=\ngithub.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/dgraph-io/badger v1.6.2 h1:mNw0qs90GVgGGWylh0umH5iag1j6n/PeJtNvL6KY/x8=\ngithub.com/dgraph-io/badger v1.6.2/go.mod h1:JW2yswe3V058sS0kZ2h/AXeDSqFjxnZcRrVH//y2UQE=\ngithub.com/dgraph-io/badger/v2 v2.2007.4 h1:TRWBQg8UrlUhaFdco01nO2uXwzKS7zd+HVdwV/GHc4o=\ngithub.com/dgraph-io/badger/v2 
v2.2007.4/go.mod h1:vSw/ax2qojzbN6eXHIx6KPKtCSHJN/Uz0X0VPruTIhk=\ngithub.com/dgraph-io/ristretto v0.0.2/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=\ngithub.com/dgraph-io/ristretto v0.0.3-0.20200630154024-f66de99634de/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=\ngithub.com/dgraph-io/ristretto v0.2.0 h1:XAfl+7cmoUDWW/2Lx8TGZQjjxIQ2Ley9DSf52dru4WE=\ngithub.com/dgraph-io/ristretto v0.2.0/go.mod h1:8uBHCU/PBV4Ag0CJrP47b9Ofby5dqWNh4FicAdoqFNU=\ngithub.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=\ngithub.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 h1:fAjc9m62+UWV/WAFKLNi6ZS0675eEUC9y3AlwSbQu1Y=\ngithub.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=\ngithub.com/dlclark/regexp2 v1.4.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=\ngithub.com/dlclark/regexp2 v1.7.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=\ngithub.com/dlclark/regexp2 v1.11.5 h1:Q/sSnsKerHeCkc/jSTNq1oCm7KiVgUMZRDUoRu0JQZQ=\ngithub.com/dlclark/regexp2 v1.11.5/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=\ngithub.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=\ngithub.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=\ngithub.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=\ngithub.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=\ngithub.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=\ngithub.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=\ngithub.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=\ngithub.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=\ngithub.com/frankban/quicktest v1.14.6/go.mod 
h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=\ngithub.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=\ngithub.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=\ngithub.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=\ngithub.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=\ngithub.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=\ngithub.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=\ngithub.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=\ngithub.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=\ngithub.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=\ngithub.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=\ngithub.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=\ngithub.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=\ngithub.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=\ngithub.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=\ngithub.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=\ngithub.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=\ngithub.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=\ngithub.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=\ngithub.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=\ngithub.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=\ngithub.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=\ngithub.com/google/btree v1.1.2 
h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU=\ngithub.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=\ngithub.com/google/cel-go v0.27.0 h1:e7ih85+4qVrBuqQWTW4FKSqZYokVuc3HnhH5keboFTo=\ngithub.com/google/cel-go v0.27.0/go.mod h1:tTJ11FWqnhw5KKpnWpvW9CJC3Y9GK4EIS0WXnBbebzw=\ngithub.com/google/certificate-transparency-go v1.0.21/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg=\ngithub.com/google/certificate-transparency-go v1.1.8-0.20240110162603-74a5dd331745 h1:heyoXNxkRT155x4jTAiSv5BVSVkueifPUm+Q8LUXMRo=\ngithub.com/google/certificate-transparency-go v1.1.8-0.20240110162603-74a5dd331745/go.mod h1:zN0wUQgV9LjwLZeFHnrAbQi8hzMVvEWePyk+MhPOk7k=\ngithub.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=\ngithub.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=\ngithub.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=\ngithub.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=\ngithub.com/google/go-tpm v0.9.8 h1:slArAR9Ft+1ybZu0lBwpSmpwhRXaa85hWtMinMyRAWo=\ngithub.com/google/go-tpm v0.9.8/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=\ngithub.com/google/go-tpm-tools v0.4.7 h1:J3ycC8umYxM9A4eF73EofRZu4BxY0jjQnUnkhIBbvws=\ngithub.com/google/go-tpm-tools v0.4.7/go.mod h1:gSyXTZHe3fgbzb6WEGd90QucmsnT1SRdlye82gH8QjQ=\ngithub.com/google/go-tspi v0.3.0 h1:ADtq8RKfP+jrTyIWIZDIYcKOMecRqNJFOew2IT0Inus=\ngithub.com/google/go-tspi v0.3.0/go.mod h1:xfMGI3G0PhxCdNVcYr1C4C+EizojDg/TXuX5by8CiHI=\ngithub.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=\ngithub.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=\ngithub.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/googleapis/enterprise-certificate-proxy v0.3.11 
h1:vAe81Msw+8tKUxi2Dqh/NZMz7475yUvmRIkXr4oN2ao=\ngithub.com/googleapis/enterprise-certificate-proxy v0.3.11/go.mod h1:RFV7MUdlb7AgEq2v7FmMCfeSMCllAzWxFgRdusoGks8=\ngithub.com/googleapis/gax-go/v2 v2.17.0 h1:RksgfBpxqff0EZkDWYuz9q/uWsTVz+kf43LsZ1J6SMc=\ngithub.com/googleapis/gax-go/v2 v2.17.0/go.mod h1:mzaqghpQp4JDh3HvADwrat+6M3MOIDp5YKHhb9PAgDY=\ngithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7 h1:X+2YciYSxvMQK0UZ7sg45ZVabVZBeBuvMkmuI2V3Fak=\ngithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7/go.mod h1:lW34nIZuQ8UDPdkon5fmfp2l3+ZkQ2me/+oecHYLOII=\ngithub.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=\ngithub.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM=\ngithub.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg=\ngithub.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=\ngithub.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=\ngithub.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=\ngithub.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=\ngithub.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=\ngithub.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=\ngithub.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=\ngithub.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a h1:bbPeKD0xmW/Y25WS6cokEszi5g+S0QxI/d45PkRi7Nk=\ngithub.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=\ngithub.com/jackc/pgx/v5 v5.6.0 h1:SWJzexBzPL5jb0GEsrPMLIsi/3jOo7RHlzTjcAeDrPY=\ngithub.com/jackc/pgx/v5 v5.6.0/go.mod h1:DNZ/vlrUnhWCoFGxHAG8U2ljioxukquj7utPDgtQdTw=\ngithub.com/jackc/puddle/v2 v2.2.1 h1:RhxXJtFG022u4ibrCSMSiu5aOq1i77R3OHKNJj77OAk=\ngithub.com/jackc/puddle/v2 v2.2.1/go.mod 
h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=\ngithub.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=\ngithub.com/klauspost/compress v1.12.3/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=\ngithub.com/klauspost/compress v1.18.4 h1:RPhnKRAQ4Fh8zU2FY/6ZFDwTVTxgJ/EMydqSTzE9a2c=\ngithub.com/klauspost/compress v1.18.4/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=\ngithub.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=\ngithub.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=\ngithub.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=\ngithub.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=\ngithub.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=\ngithub.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=\ngithub.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=\ngithub.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=\ngithub.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=\ngithub.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=\ngithub.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=\ngithub.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=\ngithub.com/letsencrypt/challtestsrv v1.4.2 h1:0ON3ldMhZyWlfVNYYpFuWRTmZNnyfiL9Hh5YzC3JVwU=\ngithub.com/letsencrypt/challtestsrv v1.4.2/go.mod h1:GhqMqcSoeGpYd5zX5TgwA6er/1MbWzx/o7yuuVya+Wk=\ngithub.com/letsencrypt/pebble/v2 v2.10.0 h1:Wq6gYXlsY6ubqI3hhxsTzdyotvfdjFBxuwYqCLCnj/U=\ngithub.com/letsencrypt/pebble/v2 v2.10.0/go.mod h1:Sk8cmUIPcIdv2nINo+9PB4L+ZBhzY+F9A1a/h/xmWiQ=\ngithub.com/libdns/libdns v1.1.1 h1:wPrHrXILoSHKWJKGd0EiAVmiJbFShguILTg9leS/P/U=\ngithub.com/libdns/libdns v1.1.1/go.mod 
h1:4Bj9+5CQiNMVGf87wjX4CY3HQJypUHRuLvlsfsZqLWQ=\ngithub.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=\ngithub.com/manifoldco/promptui v0.9.0 h1:3V4HzJk1TtXW1MTZMP7mdlwbBpIinw3HztaIlYthEiA=\ngithub.com/manifoldco/promptui v0.9.0/go.mod h1:ka04sppxSGFAtxX0qhlYQjISsg9mR4GWtQEhdbn6Pgg=\ngithub.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=\ngithub.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=\ngithub.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=\ngithub.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=\ngithub.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI=\ngithub.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=\ngithub.com/mholt/acmez/v3 v3.1.6 h1:eGVQNObP0pBN4sxqrXeg7MYqTOWyoiYpQqITVWlrevk=\ngithub.com/mholt/acmez/v3 v3.1.6/go.mod h1:5nTPosTGosLxF3+LU4ygbgMRFDhbAVpqMI4+a4aHLBY=\ngithub.com/miekg/dns v1.1.72 h1:vhmr+TF2A3tuoGNkLDFK9zi36F2LS+hKTRW0Uf8kbzI=\ngithub.com/miekg/dns v1.1.72/go.mod h1:+EuEPhdHOsfk6Wk5TT2CzssZdqkmFhf8r+aVyDEToIs=\ngithub.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=\ngithub.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=\ngithub.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=\ngithub.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=\ngithub.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=\ngithub.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=\ngithub.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=\ngithub.com/mitchellh/reflectwalk v1.0.2/go.mod 
h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=\ngithub.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=\ngithub.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=\ngithub.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=\ngithub.com/peterbourgon/diskv/v3 v3.0.1 h1:x06SQA46+PKIUftmEujdwSEpIx8kR+M9eLYsUxeYveU=\ngithub.com/peterbourgon/diskv/v3 v3.0.1/go.mod h1:kJ5Ny7vLdARGU3WUuy6uzO6T0nb/2gWcT1JiBvRmb5o=\ngithub.com/pires/go-proxyproto v0.11.0 h1:gUQpS85X/VJMdUsYyEgyn59uLJvGqPhJV5YvG68wXH4=\ngithub.com/pires/go-proxyproto v0.11.0/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU=\ngithub.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=\ngithub.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=\ngithub.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=\ngithub.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=\ngithub.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=\ngithub.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=\ngithub.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=\ngithub.com/prometheus/common v0.67.5 
h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=\ngithub.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=\ngithub.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos=\ngithub.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM=\ngithub.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=\ngithub.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=\ngithub.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=\ngithub.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=\ngithub.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=\ngithub.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=\ngithub.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=\ngithub.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=\ngithub.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=\ngithub.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=\ngithub.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=\ngithub.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=\ngithub.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=\ngithub.com/schollz/jsonstore v1.1.0 h1:WZBDjgezFS34CHI+myb4s8GGpir3UMpy7vWoCeO0n6E=\ngithub.com/schollz/jsonstore v1.1.0/go.mod h1:15c6+9guw8vDRyozGjN3FoILt0wpruJk9Pi66vjaZfg=\ngithub.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=\ngithub.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=\ngithub.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=\ngithub.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod 
h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=\ngithub.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=\ngithub.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=\ngithub.com/slackhq/nebula v1.10.3 h1:EstYj8ODEcv6T0R9X5BVq1zgWZnyU5gtPzk99QF1PMU=\ngithub.com/slackhq/nebula v1.10.3/go.mod h1:IL5TUQm4x9IFx2kCKPYm1gP47pwd5b8QGnnBH2RHnvs=\ngithub.com/smallstep/assert v0.0.0-20200723003110-82e2b9b3b262 h1:unQFBIznI+VYD1/1fApl1A+9VcBk+9dcqGfnePY87LY=\ngithub.com/smallstep/assert v0.0.0-20200723003110-82e2b9b3b262/go.mod h1:MyOHs9Po2fbM1LHej6sBUT8ozbxmMOFG+E+rx/GSGuc=\ngithub.com/smallstep/certificates v0.30.0-rc3 h1:Lx/NNJ4n+L3Pyx5NtVRGXeqviPPXTFFGLRiC1fCwU50=\ngithub.com/smallstep/certificates v0.30.0-rc3/go.mod h1:e5/ylYYpvnjCVZz6RpyOkpTe73EGPYoL+8TZZ5EtLjI=\ngithub.com/smallstep/cli-utils v0.12.2 h1:lGzM9PJrH/qawbzMC/s2SvgLdJPKDWKwKzx9doCVO+k=\ngithub.com/smallstep/cli-utils v0.12.2/go.mod h1:uCPqefO29goHLGqFnwk0i8W7XJu18X3WHQFRtOm/00Y=\ngithub.com/smallstep/go-attestation v0.4.4-0.20241119153605-2306d5b464ca h1:VX8L0r8vybH0bPeaIxh4NQzafKQiqvlOn8pmOXbFLO4=\ngithub.com/smallstep/go-attestation v0.4.4-0.20241119153605-2306d5b464ca/go.mod h1:vNAduivU014fubg6ewygkAvQC0IQVXqdc8vaGl/0er4=\ngithub.com/smallstep/linkedca v0.25.0 h1:txT9QHGbCsJq0MhAghBq7qhurGY727tQuqUi+n4BVBo=\ngithub.com/smallstep/linkedca v0.25.0/go.mod h1:Q3jVAauFKNlF86W5/RFtgQeyDKz98GL/KN3KG4mJOvc=\ngithub.com/smallstep/nosql v0.7.0 h1:YiWC9ZAHcrLCrayfaF+QJUv16I2bZ7KdLC3RpJcnAnE=\ngithub.com/smallstep/nosql v0.7.0/go.mod h1:H5VnKMCbeq9QA6SRY5iqPylfxLfYcLwvUff3onQ8+HU=\ngithub.com/smallstep/pkcs7 v0.2.1 h1:6Kfzr/QizdIuB6LSv8y1LJdZ3aPSfTNhTLqAx9CTLfA=\ngithub.com/smallstep/pkcs7 v0.2.1/go.mod h1:RcXHsMfL+BzH8tRhmrF1NkkpebKpq3JEM66cOFxanf0=\ngithub.com/smallstep/scep v0.0.0-20250318231241-a25cabb69492 h1:k23+s51sgYix4Zgbvpmy+1ZgXLjr4ZTkBTqXmpnImwA=\ngithub.com/smallstep/scep v0.0.0-20250318231241-a25cabb69492/go.mod 
h1:QQhwLqCS13nhv8L5ov7NgusowENUtXdEzdytjmJHdZQ=\ngithub.com/smallstep/truststore v0.13.0 h1:90if9htAOblavbMeWlqNLnO9bsjjgVv2hQeQJCi/py4=\ngithub.com/smallstep/truststore v0.13.0/go.mod h1:3tmMp2aLKZ/OA/jnFUB0cYPcho402UG2knuJoPh4j7A=\ngithub.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=\ngithub.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=\ngithub.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=\ngithub.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=\ngithub.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=\ngithub.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=\ngithub.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=\ngithub.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=\ngithub.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=\ngithub.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=\ngithub.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=\ngithub.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=\ngithub.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=\ngithub.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=\ngithub.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=\ngithub.com/stretchr/objx v0.5.2/go.mod 
h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=\ngithub.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=\ngithub.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=\ngithub.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=\ngithub.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=\ngithub.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=\ngithub.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=\ngithub.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 h1:Gzfnfk2TWrk8Jj4P4c1a3CtQyMaTVCznlkLZI++hok4=\ngithub.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55/go.mod h1:4k4QO+dQ3R5FofL+SanAUZe+/QfeK0+OIuwDIRu2vSg=\ngithub.com/tailscale/tscert v0.0.0-20251216020129-aea342f6d747 h1:RnBbFMmodYzhC6adOjTbtUQXyzV8dcvKYbolzs6Qch0=\ngithub.com/tailscale/tscert v0.0.0-20251216020129-aea342f6d747/go.mod h1:ejPAJui3kVK4u5TgMtqtXlWf5HnKh9fLy5kvpaeuas0=\ngithub.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=\ngithub.com/urfave/cli v1.22.17 h1:SYzXoiPfQjHBbkYxbew5prZHS1TOLT3ierW8SYLqtVQ=\ngithub.com/urfave/cli v1.22.17/go.mod h1:b0ht0aqgH/6pBYzzxURyrM4xXNgsoT/n2ZzwQiEhNVo=\ngithub.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=\ngithub.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=\ngithub.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod 
h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=\ngithub.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=\ngithub.com/yuin/goldmark v1.4.15/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=\ngithub.com/yuin/goldmark v1.7.16 h1:n+CJdUxaFMiDUNnWC3dMWCIQJSkxH4uz3ZwQBkAlVNE=\ngithub.com/yuin/goldmark v1.7.16/go.mod h1:ip/1k0VRfGynBgxOz0yCqHrbZXhcjxyuS66Brc7iBKg=\ngithub.com/yuin/goldmark-highlighting/v2 v2.0.0-20230729083705-37449abec8cc h1:+IAOyRda+RLrxa1WC7umKOZRsGq4QrFFMYApOeHzQwQ=\ngithub.com/yuin/goldmark-highlighting/v2 v2.0.0-20230729083705-37449abec8cc/go.mod h1:ovIvrum6DQJA4QsJSovrkC4saKHQVs7TvcaeO8AIl5I=\ngithub.com/zeebo/assert v1.1.0 h1:hU1L1vLTHsnO8x8c9KAR5GmM5QscxHg5RNU5z5qbUWY=\ngithub.com/zeebo/assert v1.1.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=\ngithub.com/zeebo/blake3 v0.2.4 h1:KYQPkhpRtcqh0ssGYcKLG1JYvddkEA8QwCM/yBqhaZI=\ngithub.com/zeebo/blake3 v0.2.4/go.mod h1:7eeQ6d2iXWRGF6npfaxl2CU+xy2Fjo2gxeyZGCRUjcE=\ngithub.com/zeebo/pcg v1.0.1 h1:lyqfGeWiv4ahac6ttHs+I5hwtH/+1mrhlCtVNQM2kHo=\ngithub.com/zeebo/pcg v1.0.1/go.mod h1:09F0S9iiKrwn9rlI5yjLkmrug154/YRW6KnnXVDM/l4=\ngo.etcd.io/bbolt v1.3.10 h1:+BqfJTcCzTItrop8mq/lbzL8wSGtj94UO/3U31shqG0=\ngo.etcd.io/bbolt v1.3.10/go.mod h1:bK3UQLPJZly7IlNmV7uVHJDxfe5aK9Ll93e/74Y9oEQ=\ngo.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=\ngo.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=\ngo.opentelemetry.io/contrib/bridges/prometheus v0.65.0 h1:I/7S/yWobR3QHFLqHsJ8QOndoiFsj1VgHpQiq43KlUI=\ngo.opentelemetry.io/contrib/bridges/prometheus v0.65.0/go.mod h1:jPF6gn3y1E+nozCAEQj3c6NZ8KY+tvAgSVfvoOJUFac=\ngo.opentelemetry.io/contrib/exporters/autoexport v0.65.0 h1:2gApdml7SznX9szEKFjKjM4qGcGSvAybYLBY319XG3g=\ngo.opentelemetry.io/contrib/exporters/autoexport v0.65.0/go.mod h1:0QqAGlbHXhmPYACG3n5hNzO5DnEqqtg4VcK5pr22RI0=\ngo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 
h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=\ngo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0 h1:7iP2uCb7sGddAr30RRS6xjKy7AZ2JtTOPA3oolgVSw8=\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0/go.mod h1:c7hN3ddxs/z6q9xwvfLPk+UHlWRQyaeR1LdgfL/66l0=\ngo.opentelemetry.io/contrib/propagators/autoprop v0.65.0 h1:kTaCycF9Xkm8VBBvH0rJ4wFeRjtIV55Erk3uuVsIs5s=\ngo.opentelemetry.io/contrib/propagators/autoprop v0.65.0/go.mod h1:rooPzAbXfxMX9fsPJjmOBg2SN4RhFEV8D7cfGK+N3tE=\ngo.opentelemetry.io/contrib/propagators/aws v1.40.0 h1:4VIrh75jW4RTimUNx1DSk+6H9/nDr1FvmKoOVDh3K04=\ngo.opentelemetry.io/contrib/propagators/aws v1.40.0/go.mod h1:B0dCov9KNQGlut3T8wZZjDnLXEXdBroM7bFsHh/gRos=\ngo.opentelemetry.io/contrib/propagators/b3 v1.40.0 h1:xariChe8OOVF3rNlfzGFgQc61npQmXhzZj/i82mxMfg=\ngo.opentelemetry.io/contrib/propagators/b3 v1.40.0/go.mod h1:72WvbdxbOfXaELEQfonFfOL6osvcVjI7uJEE8C2nkrs=\ngo.opentelemetry.io/contrib/propagators/jaeger v1.40.0 h1:aXl9uobjJs5vquMLt9ZkI/3zIuz8XQ3TqOKSWx0/xdU=\ngo.opentelemetry.io/contrib/propagators/jaeger v1.40.0/go.mod h1:ioMePqe6k6c/ovXSkmkMr1mbN5qRBGJxNTVop7/2XO0=\ngo.opentelemetry.io/contrib/propagators/ot v1.40.0 h1:Lon8J5SPmWaL1Ko2TIlCNHJ42/J1b5XbJlgJaE/9m7I=\ngo.opentelemetry.io/contrib/propagators/ot v1.40.0/go.mod h1:dKWtJTlp1Yj+8Cneye5idO46eRPIbi23qVuJYKjNnvY=\ngo.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=\ngo.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=\ngo.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.16.0 h1:ZVg+kCXxd9LtAaQNKBxAvJ5NpMf7LpvEr4MIZqb0TMQ=\ngo.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.16.0/go.mod h1:hh0tMeZ75CCXrHd9OXRYxTlCAdxcXioWHFIpYw2rZu8=\ngo.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.16.0 
h1:djrxvDxAe44mJUrKataUbOhCKhR3F8QCyWucO16hTQs=\ngo.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.16.0/go.mod h1:dt3nxpQEiSoKvfTVxp3TUg5fHPLhKtbcnN3Z1I1ePD0=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.40.0 h1:NOyNnS19BF2SUDApbOKbDtWZ0IK7b8FJ2uAGdIWOGb0=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.40.0/go.mod h1:VL6EgVikRLcJa9ftukrHu/ZkkhFBSo1lzvdBC9CF1ss=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.40.0 h1:9y5sHvAxWzft1WQ4BwqcvA+IFVUJ1Ya75mSAUnFEVwE=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.40.0/go.mod h1:eQqT90eR3X5Dbs1g9YSM30RavwLF725Ris5/XSXWvqE=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0 h1:QKdN8ly8zEMrByybbQgv8cWBcdAarwmIPZ6FThrWXJs=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0/go.mod h1:bTdK1nhqF76qiPoCCdyFIV+N/sRHYXYCTQc+3VCi3MI=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0 h1:DvJDOPmSWQHWywQS6lKL+pb8s3gBLOZUtw4N+mavW1I=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0/go.mod h1:EtekO9DEJb4/jRyN4v4Qjc2yA7AtfCBuz2FynRUWTXs=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 h1:wVZXIWjQSeSmMoxF74LzAnpVQOAFDo3pPji9Y4SOFKc=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0/go.mod h1:khvBS2IggMFNwZK/6lEeHg/W57h/IX6J4URh57fuI40=\ngo.opentelemetry.io/otel/exporters/prometheus v0.62.0 h1:krvC4JMfIOVdEuNPTtQ0ZjCiXrybhv+uOHMfHRmnvVo=\ngo.opentelemetry.io/otel/exporters/prometheus v0.62.0/go.mod h1:fgOE6FM/swEnsVQCqCnbOfRV4tOnWPg7bVeo4izBuhQ=\ngo.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.16.0 h1:ivlbaajBWJqhcCPniDqDJmRwj4lc6sRT+dCAVKNmxlQ=\ngo.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.16.0/go.mod h1:u/G56dEKDDwXNCVLsbSrllB2o8pbtFLUC4HpR66r2dc=\ngo.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.40.0 
h1:ZrPRak/kS4xI3AVXy8F7pipuDXmDsrO8Lg+yQjBLjw0=\ngo.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.40.0/go.mod h1:3y6kQCWztq6hyW8Z9YxQDDm0Je9AJoFar2G0yDcmhRk=\ngo.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.40.0 h1:MzfofMZN8ulNqobCmCAVbqVL5syHw+eB2qPRkCMA/fQ=\ngo.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.40.0/go.mod h1:E73G9UFtKRXrxhBsHtG00TB5WxX57lpsQzogDkqBTz8=\ngo.opentelemetry.io/otel/log v0.16.0 h1:DeuBPqCi6pQwtCK0pO4fvMB5eBq6sNxEnuTs88pjsN4=\ngo.opentelemetry.io/otel/log v0.16.0/go.mod h1:rWsmqNVTLIA8UnwYVOItjyEZDbKIkMxdQunsIhpUMes=\ngo.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=\ngo.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=\ngo.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=\ngo.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=\ngo.opentelemetry.io/otel/sdk/log v0.16.0 h1:e/b4bdlQwC5fnGtG3dlXUrNOnP7c8YLVSpSfEBIkTnI=\ngo.opentelemetry.io/otel/sdk/log v0.16.0/go.mod h1:JKfP3T6ycy7QEuv3Hj8oKDy7KItrEkus8XJE6EoSzw4=\ngo.opentelemetry.io/otel/sdk/log/logtest v0.16.0 h1:/XVkpZ41rVRTP4DfMgYv1nEtNmf65XPPyAdqV90TMy4=\ngo.opentelemetry.io/otel/sdk/log/logtest v0.16.0/go.mod h1:iOOPgQr5MY9oac/F5W86mXdeyWZGleIx3uXO98X2R6Y=\ngo.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=\ngo.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=\ngo.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=\ngo.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=\ngo.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=\ngo.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=\ngo.step.sm/crypto v0.76.2 h1:JJ/yMcs/rmcCAwlo+afrHjq74XBFRTJw5B2y4Q4Z4c4=\ngo.step.sm/crypto v0.76.2/go.mod 
h1:m6KlB/HzIuGFep0UWI5e0SYi38UxpoKeCg6qUaHV6/Q=\ngo.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=\ngo.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=\ngo.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=\ngo.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=\ngo.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=\ngo.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=\ngo.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=\ngo.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=\ngo.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=\ngo.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=\ngo.uber.org/zap/exp v0.3.0 h1:6JYzdifzYkGmTdRR59oYH+Ng7k49H9qVpWwNSsGJj3U=\ngo.uber.org/zap/exp v0.3.0/go.mod h1:5I384qq7XGxYyByIhHm6jg5CHkGY0nsTfbDLgDDlgJQ=\ngo.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=\ngo.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=\ngo.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=\ngo.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=\ngolang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=\ngolang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=\ngolang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=\ngolang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=\ngolang.org/x/crypto v0.33.0/go.mod 
h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=\ngolang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=\ngolang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=\ngolang.org/x/crypto/x509roots/fallback v0.0.0-20260213171211-a408498e5541 h1:FmKxj9ocLKn45jiR2jQMwCVhDvaK7fKQFzfuT9GvyK8=\ngolang.org/x/crypto/x509roots/fallback v0.0.0-20260213171211-a408498e5541/go.mod h1:+UoQFNBq2p2wO+Q6ddVtYc25GZ6VNdOMyyrd4nrqrKs=\ngolang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=\ngolang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=\ngolang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=\ngolang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=\ngolang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=\ngolang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=\ngolang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=\ngolang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=\ngolang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=\ngolang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=\ngolang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=\ngolang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=\ngolang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=\ngolang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=\ngolang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=\ngolang.org/x/net v0.25.0/go.mod 
h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=\ngolang.org/x/net v0.51.0 h1:94R/GTO7mt3/4wIKpcR5gkGmRLOuE/2hNGeWq/GBIFo=\ngolang.org/x/net v0.51.0/go.mod h1:aamm+2QF5ogm02fjy5Bb7CQ0WMt1/WVM7FtyaTLlA9Y=\ngolang.org/x/oauth2 v0.35.0 h1:Mv2mzuHuZuY2+bkyWXIHMfhNdJAdwW3FuWeCPYN5GVQ=\ngolang.org/x/oauth2 v0.35.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=\ngolang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=\ngolang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=\ngolang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=\ngolang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=\ngolang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=\ngolang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=\ngolang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=\ngolang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=\ngolang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=\ngolang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=\ngolang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=\ngolang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=\ngolang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=\ngolang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=\ngolang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=\ngolang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=\ngolang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=\ngolang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=\ngolang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=\ngolang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=\ngolang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=\ngolang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.3/go.mod 
h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=\ngolang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=\ngolang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=\ngolang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=\ngolang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=\ngolang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=\ngolang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=\ngolang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=\ngolang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=\ngolang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=\ngolang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=\ngolang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=\ngolang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=\ngolang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=\ngolang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=\ngolang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=\ngolang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=\ngolang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=\ngolang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=\ngonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=\ngoogle.golang.org/api v0.266.0 
h1:hco+oNCf9y7DmLeAtHJi/uBAY7n/7XC9mZPxu1ROiyk=\ngoogle.golang.org/api v0.266.0/go.mod h1:Jzc0+ZfLnyvXma3UtaTl023TdhZu6OMBP9tJ+0EmFD0=\ngoogle.golang.org/genproto v0.0.0-20260128011058-8636f8732409 h1:VQZ/yAbAtjkHgH80teYd2em3xtIkkHd7ZhqfH2N9CsM=\ngoogle.golang.org/genproto v0.0.0-20260128011058-8636f8732409/go.mod h1:rxKD3IEILWEu3P44seeNOAwZN4SaoKaQ/2eTg4mM6EM=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20260128011058-8636f8732409 h1:merA0rdPeUV3YIIfHHcH4qBkiQAc1nfCKSI7lB4cV2M=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20260128011058-8636f8732409/go.mod h1:fl8J1IvUjCilwZzQowmw2b7HQB2eAuYBabMXzWurF+I=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20 h1:Jr5R2J6F6qWyzINc+4AM8t5pfUz6beZpHp678GNrMbE=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=\ngoogle.golang.org/grpc v1.79.1 h1:zGhSi45ODB9/p3VAawt9a+O/MULLl9dpizzNNpq7flY=\ngoogle.golang.org/grpc v1.79.1/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=\ngoogle.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 h1:F29+wU6Ee6qgu9TddPgooOdaqsxTMunOoj8KA5yuS5A=\ngoogle.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1/go.mod h1:5KF+wpkbTSbGcR9zteSqZV6fqFOWBl4Yde8En8MryZA=\ngoogle.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=\ngoogle.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=\ngopkg.in/yaml.v1 v1.0.0-20140924161607-9f9df34309c0/go.mod h1:WDnlLJ4WF5VGsH/HVa3CI79GS0ol3YnhVnKP89i0kNg=\ngopkg.in/yaml.v2 
v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\nhowett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM=\nhowett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=\n"
  },
  {
    "path": "internal/filesystems/map.go",
    "content": "package filesystems\n\nimport (\n\t\"io/fs\"\n\t\"strings\"\n\t\"sync\"\n)\n\nconst (\n\tDefaultFileSystemKey = \"default\"\n)\n\nvar DefaultFileSystem = &wrapperFs{key: DefaultFileSystemKey, FS: OsFS{}}\n\n// wrapperFs exists so can easily add to wrapperFs down the line\ntype wrapperFs struct {\n\tkey string\n\tfs.FS\n}\n\n// FileSystemMap stores a map of filesystems\n// the empty key will be overwritten to be the default key\n// it includes a default filesystem, based off the os fs\ntype FileSystemMap struct {\n\tm sync.Map\n}\n\n// note that the first invocation of key cannot be called in a racy context.\nfunc (f *FileSystemMap) key(k string) string {\n\tif k == \"\" {\n\t\tk = DefaultFileSystemKey\n\t}\n\treturn k\n}\n\n// Register will add the filesystem with key to later be retrieved\n// A call with a nil fs will call unregister, ensuring that a call to Default() will never be nil\nfunc (f *FileSystemMap) Register(k string, v fs.FS) {\n\tk = f.key(k)\n\tif v == nil {\n\t\tf.Unregister(k)\n\t\treturn\n\t}\n\tf.m.Store(k, &wrapperFs{key: k, FS: v})\n}\n\n// Unregister will remove the filesystem with key from the filesystem map\n// if the key is the default key, it will set the default to the osFS instead of deleting it\n// modules should call this on cleanup to be safe\nfunc (f *FileSystemMap) Unregister(k string) {\n\tk = f.key(k)\n\tif k == DefaultFileSystemKey {\n\t\tf.m.Store(k, DefaultFileSystem)\n\t} else {\n\t\tf.m.Delete(k)\n\t}\n}\n\n// Get will get a filesystem with a given key\nfunc (f *FileSystemMap) Get(k string) (v fs.FS, ok bool) {\n\tk = f.key(k)\n\tc, ok := f.m.Load(strings.TrimSpace(k))\n\tif !ok {\n\t\tif k == DefaultFileSystemKey {\n\t\t\tf.m.Store(k, DefaultFileSystem)\n\t\t\treturn DefaultFileSystem, true\n\t\t}\n\t\treturn nil, ok\n\t}\n\treturn c.(fs.FS), true\n}\n\n// Default will get the default filesystem in the filesystem map\nfunc (f *FileSystemMap) Default() fs.FS {\n\tval, _ := f.Get(DefaultFileSystemKey)\n\treturn 
val\n}\n"
  },
  {
    "path": "internal/filesystems/os.go",
    "content": "package filesystems\n\nimport (\n\t\"io/fs\"\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// OsFS is a simple fs.FS implementation that uses the local\n// file system. (We do not use os.DirFS because we do our own\n// rooting or path prefixing without being constrained to a single\n// root folder. The standard os.DirFS implementation is problematic\n// since roots can be dynamic in our application.)\n//\n// OsFS also implements fs.StatFS, fs.GlobFS, fs.ReadDirFS, and fs.ReadFileFS.\ntype OsFS struct{}\n\nfunc (OsFS) Open(name string) (fs.File, error)          { return os.Open(name) }\nfunc (OsFS) Stat(name string) (fs.FileInfo, error)      { return os.Stat(name) }\nfunc (OsFS) Glob(pattern string) ([]string, error)      { return filepath.Glob(pattern) }\nfunc (OsFS) ReadDir(name string) ([]fs.DirEntry, error) { return os.ReadDir(name) }\nfunc (OsFS) ReadFile(name string) ([]byte, error)       { return os.ReadFile(name) }\n\nvar (\n\t_ fs.StatFS     = (*OsFS)(nil)\n\t_ fs.GlobFS     = (*OsFS)(nil)\n\t_ fs.ReadDirFS  = (*OsFS)(nil)\n\t_ fs.ReadFileFS = (*OsFS)(nil)\n)\n"
  },
  {
    "path": "internal/logbuffer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage internal\n\nimport (\n\t\"sync\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n)\n\n// LogBufferCore is a zapcore.Core that buffers log entries in memory.\ntype LogBufferCore struct {\n\tmu      sync.Mutex\n\tentries []zapcore.Entry\n\tfields  [][]zapcore.Field\n\tlevel   zapcore.LevelEnabler\n}\n\ntype LogBufferCoreInterface interface {\n\tzapcore.Core\n\tFlushTo(*zap.Logger)\n}\n\nfunc NewLogBufferCore(level zapcore.LevelEnabler) *LogBufferCore {\n\treturn &LogBufferCore{\n\t\tlevel: level,\n\t}\n}\n\nfunc (c *LogBufferCore) Enabled(lvl zapcore.Level) bool {\n\treturn c.level.Enabled(lvl)\n}\n\nfunc (c *LogBufferCore) With(fields []zapcore.Field) zapcore.Core {\n\treturn c\n}\n\nfunc (c *LogBufferCore) Check(entry zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {\n\tif c.Enabled(entry.Level) {\n\t\treturn ce.AddCore(entry, c)\n\t}\n\treturn ce\n}\n\nfunc (c *LogBufferCore) Write(entry zapcore.Entry, fields []zapcore.Field) error {\n\tc.mu.Lock()\n\tdefer c.mu.Unlock()\n\tc.entries = append(c.entries, entry)\n\tc.fields = append(c.fields, fields)\n\treturn nil\n}\n\nfunc (c *LogBufferCore) Sync() error { return nil }\n\n// FlushTo flushes buffered logs to the given zap.Logger.\nfunc (c *LogBufferCore) FlushTo(logger *zap.Logger) {\n\tc.mu.Lock()\n\tdefer c.mu.Unlock()\n\tfor idx, entry := range 
c.entries {\n\t\tlogger.WithOptions().Check(entry.Level, entry.Message).Write(c.fields[idx]...)\n\t}\n\tc.entries = nil\n\tc.fields = nil\n}\n\nvar (\n\t_ zapcore.Core           = (*LogBufferCore)(nil)\n\t_ LogBufferCoreInterface = (*LogBufferCore)(nil)\n)\n"
  },
  {
    "path": "internal/logs.go",
    "content": "package internal\n\nimport \"fmt\"\n\n// MaxSizeSubjectsListForLog returns the keys in the map as a slice of maximum length\n// maxToDisplay. It is useful for logging domains being managed, for example, since a\n// map is typically needed for quick lookup, but a slice is needed for logging, and this\n// can be quite a doozy since there may be a huge amount (hundreds of thousands).\nfunc MaxSizeSubjectsListForLog(subjects map[string]struct{}, maxToDisplay int) []string {\n\tnumberOfNamesToDisplay := min(len(subjects), maxToDisplay)\n\tdomainsToDisplay := make([]string, 0, numberOfNamesToDisplay)\n\tfor domain := range subjects {\n\t\tdomainsToDisplay = append(domainsToDisplay, domain)\n\t\tif len(domainsToDisplay) >= numberOfNamesToDisplay {\n\t\t\tbreak\n\t\t}\n\t}\n\tif len(subjects) > maxToDisplay {\n\t\tdomainsToDisplay = append(domainsToDisplay, fmt.Sprintf(\"(and %d more...)\", len(subjects)-maxToDisplay))\n\t}\n\treturn domainsToDisplay\n}\n"
  },
  {
    "path": "internal/metrics/metrics.go",
    "content": "package metrics\n\nimport (\n\t\"net/http\"\n\t\"strconv\"\n)\n\nfunc SanitizeCode(s int) string {\n\tswitch s {\n\tcase 0, 200:\n\t\treturn \"200\"\n\tdefault:\n\t\treturn strconv.Itoa(s)\n\t}\n}\n\n// Only support the list of \"regular\" HTTP methods, see\n// https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods\nvar methodMap = map[string]string{\n\t\"GET\": http.MethodGet, \"get\": http.MethodGet,\n\t\"HEAD\": http.MethodHead, \"head\": http.MethodHead,\n\t\"PUT\": http.MethodPut, \"put\": http.MethodPut,\n\t\"POST\": http.MethodPost, \"post\": http.MethodPost,\n\t\"DELETE\": http.MethodDelete, \"delete\": http.MethodDelete,\n\t\"CONNECT\": http.MethodConnect, \"connect\": http.MethodConnect,\n\t\"OPTIONS\": http.MethodOptions, \"options\": http.MethodOptions,\n\t\"TRACE\": http.MethodTrace, \"trace\": http.MethodTrace,\n\t\"PATCH\": http.MethodPatch, \"patch\": http.MethodPatch,\n}\n\n// SanitizeMethod sanitizes the method for use as a metric label. This helps\n// prevent high cardinality on the method label. The name is always upper case.\nfunc SanitizeMethod(m string) string {\n\tif m, ok := methodMap[m]; ok {\n\t\treturn m\n\t}\n\n\treturn \"OTHER\"\n}\n"
  },
  {
    "path": "internal/metrics/metrics_test.go",
    "content": "package metrics\n\nimport (\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestSanitizeMethod(t *testing.T) {\n\ttests := []struct {\n\t\tmethod   string\n\t\texpected string\n\t}{\n\t\t{method: \"get\", expected: \"GET\"},\n\t\t{method: \"POST\", expected: \"POST\"},\n\t\t{method: \"OPTIONS\", expected: \"OPTIONS\"},\n\t\t{method: \"connect\", expected: \"CONNECT\"},\n\t\t{method: \"trace\", expected: \"TRACE\"},\n\t\t{method: \"UNKNOWN\", expected: \"OTHER\"},\n\t\t{method: strings.Repeat(\"ohno\", 9999), expected: \"OTHER\"},\n\t}\n\n\tfor _, d := range tests {\n\t\tactual := SanitizeMethod(d.method)\n\t\tif actual != d.expected {\n\t\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", d.expected, actual)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "internal/ranges.go",
    "content": "package internal\n\n// PrivateRangesCIDR returns a list of private CIDR range\n// strings, which can be used as a configuration shortcut.\nfunc PrivateRangesCIDR() []string {\n\treturn []string{\n\t\t\"192.168.0.0/16\",\n\t\t\"172.16.0.0/12\",\n\t\t\"10.0.0.0/8\",\n\t\t\"127.0.0.1/8\",\n\t\t\"fd00::/8\",\n\t\t\"::1\",\n\t}\n}\n"
  },
  {
    "path": "internal/sockets.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage internal\n\nimport (\n\t\"fmt\"\n\t\"io/fs\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// SplitUnixSocketPermissionsBits takes a unix socket address in the\n// unusual \"path|bits\" format (e.g. /run/caddy.sock|0222) and tries\n// to split it into socket path (host) and permissions bits (port).\n// Colons (\":\") can't be used as separator, as socket paths on Windows\n// may include a drive letter (e.g. `unix/c:\\absolute\\path.sock`).\n// Permission bits will default to 0200 if none are specified.\n// Throws an error, if the first carrying bit does not\n// include write perms (e.g. `0422` or `022`).\n// Symbolic permission representation (e.g. 
`u=w,g=w,o=w`)\n// is not supported and will throw an error for now!\nfunc SplitUnixSocketPermissionsBits(addr string) (path string, fileMode fs.FileMode, err error) {\n\taddrSplit := strings.SplitN(addr, \"|\", 2)\n\n\tif len(addrSplit) == 2 {\n\t\t// parse octal permission bit string as uint32\n\t\tfileModeUInt64, err := strconv.ParseUint(addrSplit[1], 8, 32)\n\t\tif err != nil {\n\t\t\treturn \"\", 0, fmt.Errorf(\"could not parse octal permission bits in %s: %v\", addr, err)\n\t\t}\n\t\tfileMode = fs.FileMode(fileModeUInt64)\n\n\t\t// FileMode.String() returns a string like `-rwxr-xr--` for `u=rwx,g=rx,o=r` (`0754`)\n\t\tif string(fileMode.String()[2]) != \"w\" {\n\t\t\treturn \"\", 0, fmt.Errorf(\"owner of the socket requires '-w-' (write, octal: '2') permissions at least; got '%s' in %s\", fileMode.String()[1:4], addr)\n\t\t}\n\n\t\treturn addrSplit[0], fileMode, nil\n\t}\n\n\t// default to 0200 (symbolic: `u=w,g=,o=`)\n\t// if no permission bits are specified\n\treturn addr, 0o200, nil\n}\n"
  },
  {
    "path": "internal/testmocks/dummyverifier.go",
    "content": "package testmocks\n\nimport (\n\t\"crypto/x509\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(new(dummyVerifier))\n}\n\ntype dummyVerifier struct{}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (dummyVerifier) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\treturn nil\n}\n\n// CaddyModule implements caddy.Module.\nfunc (dummyVerifier) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.client_auth.verifier.dummy\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(dummyVerifier)\n\t\t},\n\t}\n}\n\n// VerifyClientCertificate implements ClientCertificateVerifier.\nfunc (dummyVerifier) VerifyClientCertificate(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {\n\treturn nil\n}\n\nvar (\n\t_ caddy.Module                       = dummyVerifier{}\n\t_ caddytls.ClientCertificateVerifier = dummyVerifier{}\n\t_ caddyfile.Unmarshaler              = dummyVerifier{}\n)\n"
  },
  {
    "path": "listen.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build !unix || solaris\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\t\"slices\"\n\t\"strconv\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n)\n\nfunc reuseUnixSocket(_, _ string) (any, error) {\n\treturn nil, nil\n}\n\nfunc listenReusable(ctx context.Context, lnKey string, network, address string, config net.ListenConfig) (any, error) {\n\tvar socketFile *os.File\n\n\tfd := slices.Contains([]string{\"fd\", \"fdgram\"}, network)\n\tif fd {\n\t\tsocketFd, err := strconv.ParseUint(address, 0, strconv.IntSize)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid file descriptor: %v\", err)\n\t\t}\n\n\t\tfunc() {\n\t\t\tsocketFilesMu.Lock()\n\t\t\tdefer socketFilesMu.Unlock()\n\n\t\t\tsocketFdWide := uintptr(socketFd)\n\t\t\tvar ok bool\n\n\t\t\tsocketFile, ok = socketFiles[socketFdWide]\n\n\t\t\tif !ok {\n\t\t\t\tsocketFile = os.NewFile(socketFdWide, lnKey)\n\t\t\t\tif socketFile != nil {\n\t\t\t\t\tsocketFiles[socketFdWide] = socketFile\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\n\t\tif socketFile == nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid socket file descriptor: %d\", socketFd)\n\t\t}\n\t}\n\n\tdatagram := slices.Contains([]string{\"udp\", \"udp4\", \"udp6\", \"unixgram\", \"fdgram\"}, network)\n\tif datagram {\n\t\tsharedPc, _, err := listenerPool.LoadOrNew(lnKey, 
func() (Destructor, error) {\n\t\t\tvar (\n\t\t\t\tpc  net.PacketConn\n\t\t\t\terr error\n\t\t\t)\n\t\t\tif fd {\n\t\t\t\tpc, err = net.FilePacketConn(socketFile)\n\t\t\t} else {\n\t\t\t\tpc, err = config.ListenPacket(ctx, network, address)\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn &sharedPacketConn{PacketConn: pc, key: lnKey}, nil\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn &fakeClosePacketConn{sharedPacketConn: sharedPc.(*sharedPacketConn)}, nil\n\t}\n\n\tsharedLn, _, err := listenerPool.LoadOrNew(lnKey, func() (Destructor, error) {\n\t\tvar (\n\t\t\tln  net.Listener\n\t\t\terr error\n\t\t)\n\t\tif fd {\n\t\t\tln, err = net.FileListener(socketFile)\n\t\t} else {\n\t\t\tln, err = config.Listen(ctx, network, address)\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn &sharedListener{Listener: ln, key: lnKey}, nil\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &fakeCloseListener{sharedListener: sharedLn.(*sharedListener), keepAliveConfig: config.KeepAliveConfig}, nil\n}\n\n// fakeCloseListener is a private wrapper over a listener that\n// is shared. The state of fakeCloseListener is not shared.\n// This allows one user of a socket to \"close\" the listener\n// while in reality the socket stays open for other users of\n// the listener. In this way, servers become hot-swappable\n// while the listener remains running. Listeners should be\n// re-wrapped in a new fakeCloseListener each time the listener\n// is reused. 
This type is atomic and values must not be copied.\ntype fakeCloseListener struct {\n\tclosed          int32 // accessed atomically; belongs to this struct only\n\t*sharedListener       // embedded, so we also become a net.Listener\n\tkeepAliveConfig net.KeepAliveConfig\n}\n\ntype canSetKeepAliveConfig interface {\n\tSetKeepAliveConfig(config net.KeepAliveConfig) error\n}\n\nfunc (fcl *fakeCloseListener) Accept() (net.Conn, error) {\n\t// if the listener is already \"closed\", return error\n\tif atomic.LoadInt32(&fcl.closed) == 1 {\n\t\treturn nil, fakeClosedErr(fcl)\n\t}\n\n\t// call underlying accept\n\tconn, err := fcl.sharedListener.Accept()\n\tif err == nil {\n\t\t// if 0, do nothing, Go's default is already set\n\t\t// and if the connection allows setting KeepAlive, set it\n\t\tif tconn, ok := conn.(canSetKeepAliveConfig); ok && fcl.keepAliveConfig.Enable {\n\t\t\terr = tconn.SetKeepAliveConfig(fcl.keepAliveConfig)\n\t\t\tif err != nil {\n\t\t\t\tLog().With(zap.String(\"server\", fcl.sharedListener.key)).Warn(\"unable to set keepalive for new connection:\", zap.Error(err))\n\t\t\t}\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\t// since Accept() returned an error, it may be because our reference to\n\t// the listener (this fakeCloseListener) may have been closed, i.e. 
the\n\t// server is shutting down; in that case, we need to clear the deadline\n\t// that we set when Close() was called, and return a non-temporary and\n\t// non-timeout error value to the caller, masking the \"true\" error, so\n\t// that server loops / goroutines won't retry, linger, and leak\n\tif atomic.LoadInt32(&fcl.closed) == 1 {\n\t\t// we dereference the sharedListener explicitly even though it's embedded\n\t\t// so that it's clear in the code that side-effects are shared with other\n\t\t// users of this listener, not just our own reference to it; we also don't\n\t\t// do anything with the error because all we could do is log it, but we\n\t\t// explicitly assign it to nothing so we don't forget it's there if needed\n\t\t_ = fcl.sharedListener.clearDeadline()\n\n\t\tif netErr, ok := err.(net.Error); ok && netErr.Timeout() {\n\t\t\treturn nil, fakeClosedErr(fcl)\n\t\t}\n\t}\n\n\treturn nil, err\n}\n\n// Close stops accepting new connections without closing the\n// underlying listener. The underlying listener is only closed\n// if the caller is the last known user of the socket.\nfunc (fcl *fakeCloseListener) Close() error {\n\tif atomic.CompareAndSwapInt32(&fcl.closed, 0, 1) {\n\t\t// There are two ways I know of to get an Accept()\n\t\t// function to return to the server loop that called\n\t\t// it: close the listener, or set a deadline in the\n\t\t// past. Obviously, we can't close the socket yet\n\t\t// since others may be using it (hence this whole\n\t\t// file). But we can set the deadline in the past,\n\t\t// and this is kind of cheating, but it works, and\n\t\t// it apparently even works on Windows.\n\t\t_ = fcl.sharedListener.setDeadline()\n\t\t_, _ = listenerPool.Delete(fcl.sharedListener.key)\n\t}\n\treturn nil\n}\n\n// sharedListener is a wrapper over an underlying listener. 
The listener\n// and the other fields on the struct are shared state that is synchronized,\n// so sharedListener structs must never be copied (always use a pointer).\ntype sharedListener struct {\n\tnet.Listener\n\tkey        string // uniquely identifies this listener\n\tdeadline   bool   // whether a deadline is currently set\n\tdeadlineMu sync.Mutex\n}\n\nfunc (sl *sharedListener) clearDeadline() error {\n\tvar err error\n\tsl.deadlineMu.Lock()\n\tif sl.deadline {\n\t\tswitch ln := sl.Listener.(type) {\n\t\tcase *net.TCPListener:\n\t\t\terr = ln.SetDeadline(time.Time{})\n\t\t}\n\t\tsl.deadline = false\n\t}\n\tsl.deadlineMu.Unlock()\n\treturn err\n}\n\nfunc (sl *sharedListener) setDeadline() error {\n\ttimeInPast := time.Now().Add(-1 * time.Minute)\n\tvar err error\n\tsl.deadlineMu.Lock()\n\tif !sl.deadline {\n\t\tswitch ln := sl.Listener.(type) {\n\t\tcase *net.TCPListener:\n\t\t\terr = ln.SetDeadline(timeInPast)\n\t\t}\n\t\tsl.deadline = true\n\t}\n\tsl.deadlineMu.Unlock()\n\treturn err\n}\n\n// Destruct is called by the UsagePool when the listener is\n// finally not being used anymore. 
It closes the socket.\nfunc (sl *sharedListener) Destruct() error {\n\treturn sl.Listener.Close()\n}\n\n// fakeClosePacketConn is like fakeCloseListener, but for PacketConns,\n// or more specifically, *net.UDPConn\ntype fakeClosePacketConn struct {\n\tclosed            int32 // accessed atomically; belongs to this struct only\n\t*sharedPacketConn       // embedded, so we also become a net.PacketConn; its key is used in Close\n}\n\nfunc (fcpc *fakeClosePacketConn) ReadFrom(p []byte) (n int, addr net.Addr, err error) {\n\t// if the listener is already \"closed\", return error\n\tif atomic.LoadInt32(&fcpc.closed) == 1 {\n\t\treturn 0, nil, &net.OpError{\n\t\t\tOp:   \"readfrom\",\n\t\t\tNet:  fcpc.LocalAddr().Network(),\n\t\t\tAddr: fcpc.LocalAddr(),\n\t\t\tErr:  errFakeClosed,\n\t\t}\n\t}\n\n\t// call underlying readfrom\n\tn, addr, err = fcpc.sharedPacketConn.ReadFrom(p)\n\tif err != nil {\n\t\t// this server was stopped, so clear the deadline and let\n\t\t// any new server continue reading; but we will exit\n\t\tif atomic.LoadInt32(&fcpc.closed) == 1 {\n\t\t\tif netErr, ok := err.(net.Error); ok && netErr.Timeout() {\n\t\t\t\tif err = fcpc.SetReadDeadline(time.Time{}); err != nil {\n\t\t\t\t\treturn n, addr, err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn n, addr, err\n\t}\n\n\treturn n, addr, err\n}\n\n// Close won't close the underlying socket unless there is no more reference, then listenerPool will close it.\nfunc (fcpc *fakeClosePacketConn) Close() error {\n\tif atomic.CompareAndSwapInt32(&fcpc.closed, 0, 1) {\n\t\t_ = fcpc.SetReadDeadline(time.Now()) // unblock ReadFrom() calls to kick old servers out of their loops\n\t\t_, _ = listenerPool.Delete(fcpc.sharedPacketConn.key)\n\t}\n\treturn nil\n}\n\nfunc (fcpc *fakeClosePacketConn) Unwrap() net.PacketConn {\n\treturn fcpc.sharedPacketConn.PacketConn\n}\n\n// sharedPacketConn is like sharedListener, but for net.PacketConns.\ntype sharedPacketConn struct {\n\tnet.PacketConn\n\tkey string\n}\n\n// Destruct closes the 
underlying socket.\nfunc (spc *sharedPacketConn) Destruct() error {\n\treturn spc.PacketConn.Close()\n}\n\n// Unwrap returns the underlying socket\nfunc (spc *sharedPacketConn) Unwrap() net.PacketConn {\n\treturn spc.PacketConn\n}\n\n// Interface guards (see https://github.com/caddyserver/caddy/issues/3998)\nvar (\n\t_ (interface {\n\t\tUnwrap() net.PacketConn\n\t}) = (*fakeClosePacketConn)(nil)\n)\n\n// socketFiles is a fd -> *os.File map used to make a FileListener/FilePacketConn from a socket file descriptor.\nvar socketFiles = map[uintptr]*os.File{}\n\n// socketFilesMu synchronizes socketFiles insertions\nvar socketFilesMu sync.Mutex\n"
  },
  {
    "path": "listen_unix.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Even though the filename ends in _unix.go, we still have to specify the\n// build constraint here, because the filename convention only works for\n// literal GOOS values, and \"unix\" is a shortcut unique to build tags.\n//go:build unix && !solaris\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"net\"\n\t\"os\"\n\t\"slices\"\n\t\"strconv\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"syscall\"\n\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/sys/unix\"\n)\n\n// reuseUnixSocket copies and reuses the unix domain socket (UDS) if we already\n// have it open; if not, unlink it so we can have it.\n// No-op if not a unix network.\nfunc reuseUnixSocket(network, addr string) (any, error) {\n\tsocketKey := listenerKey(network, addr)\n\n\tsocket, exists := unixSockets[socketKey]\n\tif exists {\n\t\t// make copy of file descriptor\n\t\tsocketFile, err := socket.File() // does dup() deep down\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\t// use copied fd to make new Listener or PacketConn, then replace\n\t\t// it in the map so that future copies always come from the most\n\t\t// recent fd (as the previous ones will be closed, and we'd get\n\t\t// \"use of closed network connection\" errors) -- note that we\n\t\t// preserve the *pointer* to the counter (not just the value) so\n\t\t// that all 
socket wrappers will refer to the same value\n\t\tswitch unixSocket := socket.(type) {\n\t\tcase *unixListener:\n\t\t\tln, err := net.FileListener(socketFile)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tatomic.AddInt32(unixSocket.count, 1)\n\t\t\tunixSockets[socketKey] = &unixListener{ln.(*net.UnixListener), socketKey, unixSocket.count}\n\n\t\tcase *unixConn:\n\t\t\tpc, err := net.FilePacketConn(socketFile)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tatomic.AddInt32(unixSocket.count, 1)\n\t\t\tunixSockets[socketKey] = &unixConn{pc.(*net.UnixConn), socketKey, unixSocket.count}\n\t\t}\n\n\t\treturn unixSockets[socketKey], nil\n\t}\n\n\t// from what I can tell after some quick research, it's quite common for programs to\n\t// leave their socket file behind after they close, so the typical pattern is to\n\t// unlink it before you bind to it -- this is often crucial if the last program using\n\t// it was killed forcefully without a chance to clean up the socket, but there is a\n\t// race, as the comment in net.UnixListener.close() explains... 
oh well, I guess?\n\tif err := syscall.Unlink(addr); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\treturn nil, err\n\t}\n\n\treturn nil, nil\n}\n\n// listenReusable creates a new listener for the given network and address, and adds it to listenerPool.\nfunc listenReusable(ctx context.Context, lnKey string, network, address string, config net.ListenConfig) (any, error) {\n\t// even though SO_REUSEPORT lets us bind the socket multiple times,\n\t// we still put it in the listenerPool so we can count how many\n\t// configs are using this socket; necessary to ensure we can know\n\t// whether to enforce shutdown delays, for example (see #5393).\n\tvar (\n\t\tln         io.Closer\n\t\terr        error\n\t\tsocketFile *os.File\n\t)\n\n\tfd := slices.Contains([]string{\"fd\", \"fdgram\"}, network)\n\tif fd {\n\t\tsocketFd, err := strconv.ParseUint(address, 0, strconv.IntSize)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid file descriptor: %v\", err)\n\t\t}\n\n\t\tfunc() {\n\t\t\tsocketFilesMu.Lock()\n\t\t\tdefer socketFilesMu.Unlock()\n\n\t\t\tsocketFdWide := uintptr(socketFd)\n\t\t\tvar ok bool\n\n\t\t\tsocketFile, ok = socketFiles[socketFdWide]\n\n\t\t\tif !ok {\n\t\t\t\tsocketFile = os.NewFile(socketFdWide, lnKey)\n\t\t\t\tif socketFile != nil {\n\t\t\t\t\tsocketFiles[socketFdWide] = socketFile\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\n\t\tif socketFile == nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid socket file descriptor: %d\", socketFd)\n\t\t}\n\t} else {\n\t\t// wrap any Control function set by the user so we can also add our reusePort control without clobbering theirs\n\t\toldControl := config.Control\n\t\tconfig.Control = func(network, address string, c syscall.RawConn) error {\n\t\t\tif oldControl != nil {\n\t\t\t\tif err := oldControl(network, address, c); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn reusePort(network, address, c)\n\t\t}\n\t}\n\n\tdatagram := slices.Contains([]string{\"udp\", \"udp4\", \"udp6\", \"unixgram\", 
\"fdgram\"}, network)\n\tif datagram {\n\t\tif fd {\n\t\t\tln, err = net.FilePacketConn(socketFile)\n\t\t} else {\n\t\t\tln, err = config.ListenPacket(ctx, network, address)\n\t\t}\n\t} else {\n\t\tif fd {\n\t\t\tln, err = net.FileListener(socketFile)\n\t\t} else {\n\t\t\tln, err = config.Listen(ctx, network, address)\n\t\t}\n\t}\n\n\tif err == nil {\n\t\tlistenerPool.LoadOrStore(lnKey, nil)\n\t}\n\n\tif datagram {\n\t\tif !fd {\n\t\t\t// TODO: Not 100% sure this is necessary, but we do this for net.UnixListener, so...\n\t\t\tif unix, ok := ln.(*net.UnixConn); ok {\n\t\t\t\tone := int32(1)\n\t\t\t\tln = &unixConn{unix, lnKey, &one}\n\t\t\t\tunixSockets[lnKey] = ln.(*unixConn)\n\t\t\t}\n\t\t}\n\t\t// lightly wrap the connection so that when it is closed,\n\t\t// we can decrement the usage pool counter\n\t\tif specificLn, ok := ln.(net.PacketConn); ok {\n\t\t\tln = deletePacketConn{specificLn, lnKey}\n\t\t}\n\t} else {\n\t\tif !fd {\n\t\t\t// if new listener is a unix socket, make sure we can reuse it later\n\t\t\t// (we do our own \"unlink on close\" -- not required, but more tidy)\n\t\t\tif unix, ok := ln.(*net.UnixListener); ok {\n\t\t\t\tunix.SetUnlinkOnClose(false)\n\t\t\t\tone := int32(1)\n\t\t\t\tln = &unixListener{unix, lnKey, &one}\n\t\t\t\tunixSockets[lnKey] = ln.(*unixListener)\n\t\t\t}\n\t\t}\n\t\t// lightly wrap the listener so that when it is closed,\n\t\t// we can decrement the usage pool counter\n\t\tif specificLn, ok := ln.(net.Listener); ok {\n\t\t\tln = deleteListener{specificLn, lnKey}\n\t\t}\n\t}\n\n\t// other types, I guess we just return them directly\n\treturn ln, err\n}\n\n// reusePort sets SO_REUSEPORT. 
Ineffective for unix sockets.\nfunc reusePort(network, address string, conn syscall.RawConn) error {\n\tif IsUnixNetwork(network) {\n\t\treturn nil\n\t}\n\treturn conn.Control(func(descriptor uintptr) {\n\t\tif err := unix.SetsockoptInt(int(descriptor), unix.SOL_SOCKET, unixSOREUSEPORT, 1); err != nil {\n\t\t\tLog().Error(\"setting SO_REUSEPORT\",\n\t\t\t\tzap.String(\"network\", network),\n\t\t\t\tzap.String(\"address\", address),\n\t\t\t\tzap.Uintptr(\"descriptor\", descriptor),\n\t\t\t\tzap.Error(err))\n\t\t}\n\t})\n}\n\ntype unixListener struct {\n\t*net.UnixListener\n\tmapKey string\n\tcount  *int32 // accessed atomically\n}\n\nfunc (uln *unixListener) Close() error {\n\tnewCount := atomic.AddInt32(uln.count, -1)\n\tif newCount == 0 {\n\t\tfile, err := uln.File()\n\t\tvar name string\n\t\tif err == nil {\n\t\t\tname = file.Name()\n\t\t}\n\t\tdefer func() {\n\t\t\tunixSocketsMu.Lock()\n\t\t\tdelete(unixSockets, uln.mapKey)\n\t\t\tunixSocketsMu.Unlock()\n\t\t\tif err == nil {\n\t\t\t\t_ = syscall.Unlink(name)\n\t\t\t}\n\t\t}()\n\t}\n\treturn uln.UnixListener.Close()\n}\n\ntype unixConn struct {\n\t*net.UnixConn\n\tmapKey string\n\tcount  *int32 // accessed atomically\n}\n\nfunc (uc *unixConn) Close() error {\n\tnewCount := atomic.AddInt32(uc.count, -1)\n\tif newCount == 0 {\n\t\tfile, err := uc.File()\n\t\tvar name string\n\t\tif err == nil {\n\t\t\tname = file.Name()\n\t\t}\n\t\tdefer func() {\n\t\t\tunixSocketsMu.Lock()\n\t\t\tdelete(unixSockets, uc.mapKey)\n\t\t\tunixSocketsMu.Unlock()\n\t\t\tif err == nil {\n\t\t\t\t_ = syscall.Unlink(name)\n\t\t\t}\n\t\t}()\n\t}\n\treturn uc.UnixConn.Close()\n}\n\nfunc (uc *unixConn) Unwrap() net.PacketConn {\n\treturn uc.UnixConn\n}\n\n// unixSockets keeps track of the currently-active unix sockets\n// so we can transfer their FDs gracefully during reloads.\nvar unixSockets = make(map[string]interface {\n\tFile() (*os.File, error)\n})\n\n// socketFiles is a fd -> *os.File map used to make a FileListener/FilePacketConn from 
a socket file descriptor.\nvar socketFiles = map[uintptr]*os.File{}\n\n// socketFilesMu synchronizes socketFiles insertions\nvar socketFilesMu sync.Mutex\n\n// deleteListener is a type that simply deletes itself\n// from the listenerPool when it closes. It is used\n// solely for the purpose of reference counting (i.e.\n// counting how many configs are using a given socket).\ntype deleteListener struct {\n\tnet.Listener\n\tlnKey string\n}\n\nfunc (dl deleteListener) Close() error {\n\t_, _ = listenerPool.Delete(dl.lnKey)\n\treturn dl.Listener.Close()\n}\n\n// deletePacketConn is like deleteListener, but\n// for net.PacketConns.\ntype deletePacketConn struct {\n\tnet.PacketConn\n\tlnKey string\n}\n\nfunc (dl deletePacketConn) Close() error {\n\t_, _ = listenerPool.Delete(dl.lnKey)\n\treturn dl.PacketConn.Close()\n}\n\nfunc (dl deletePacketConn) Unwrap() net.PacketConn {\n\treturn dl.PacketConn\n}\n"
  },
  {
    "path": "listen_unix_setopt.go",
    "content": "//go:build unix && !freebsd && !solaris\n\npackage caddy\n\nimport \"golang.org/x/sys/unix\"\n\nconst unixSOREUSEPORT = unix.SO_REUSEPORT\n"
  },
  {
    "path": "listen_unix_setopt_freebsd.go",
    "content": "//go:build freebsd\n\npackage caddy\n\nimport \"golang.org/x/sys/unix\"\n\nconst unixSOREUSEPORT = unix.SO_REUSEPORT_LB\n"
  },
  {
    "path": "listeners.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"net\"\n\t\"net/netip\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\n\t\"github.com/quic-go/quic-go\"\n\t\"github.com/quic-go/quic-go/http3\"\n\th3qlog \"github.com/quic-go/quic-go/http3/qlog\"\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/time/rate\"\n\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\n// NetworkAddress represents one or more network addresses.\n// It contains the individual components for a parsed network\n// address of the form accepted by ParseNetworkAddress().\ntype NetworkAddress struct {\n\t// Should be a network value accepted by Go's net package or\n\t// by a plugin providing a listener for that network type.\n\tNetwork string\n\n\t// The \"main\" part of the network address is the host, which\n\t// often takes the form of a hostname, DNS name, IP address,\n\t// or socket path.\n\tHost string\n\n\t// For addresses that contain a port, ranges are given by\n\t// [StartPort, EndPort]; i.e. for a single port, StartPort\n\t// and EndPort are the same. For no port, they are 0.\n\tStartPort uint\n\tEndPort   uint\n}\n\n// ListenAll calls Listen for all addresses represented by this struct, i.e. 
all ports in the range.\n// (If the address doesn't use ports or has 1 port only, then only 1 listener will be created.)\n// It returns an error if any listener failed to bind, and closes any listeners opened up to that point.\nfunc (na NetworkAddress) ListenAll(ctx context.Context, config net.ListenConfig) ([]any, error) {\n\tvar listeners []any\n\tvar err error\n\n\t// if one of the addresses has a failure, we need to close\n\t// any that did open a socket to avoid leaking resources\n\tdefer func() {\n\t\tif err == nil {\n\t\t\treturn\n\t\t}\n\t\tfor _, ln := range listeners {\n\t\t\tif cl, ok := ln.(io.Closer); ok {\n\t\t\t\tcl.Close()\n\t\t\t}\n\t\t}\n\t}()\n\n\t// an address can contain a port range, which represents multiple addresses;\n\t// some addresses don't use ports at all and have a port range size of 1;\n\t// whatever the case, iterate each address represented and bind a socket\n\tfor portOffset := uint(0); portOffset < na.PortRangeSize(); portOffset++ {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn nil, ctx.Err()\n\t\tdefault:\n\t\t}\n\n\t\t// create (or reuse) the listener ourselves\n\t\tvar ln any\n\t\tln, err = na.Listen(ctx, portOffset, config)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tlisteners = append(listeners, ln)\n\t}\n\n\treturn listeners, nil\n}\n\n// Listen is similar to net.Listen, with a few differences:\n//\n// Listen announces on the network address using the port calculated by adding\n// portOffset to the start port. (For network types that do not use ports, the\n// portOffset is ignored.)\n//\n// First Listen checks if a plugin can provide a listener from this address. Otherwise,\n// the provided ListenConfig is used to create the listener. Its Control function,\n// if set, may be wrapped by an internally-used Control function. The provided\n// context may be used to cancel long operations early. 
The context is not used\n// to close the listener after it has been created.\n//\n// Caddy's listeners can overlap each other: multiple listeners may be created on\n// the same socket at the same time. This is useful because during config changes,\n// the new config is started while the old config is still running. How this is\n// accomplished varies by platform and network type. For example, on Unix, SO_REUSEPORT\n// is set except on Unix sockets, for which the file descriptor is duplicated and\n// reused; on Windows, the close logic is virtualized using timeouts. Like normal\n// listeners, be sure to Close() them when you are done.\n//\n// This method returns any type, as the implementations of listeners for various\n// network types are not interchangeable. The type of listener returned is switched\n// on the network type. Stream-based networks (\"tcp\", \"unix\", \"unixpacket\", etc.)\n// return a net.Listener; datagram-based networks (\"udp\", \"unixgram\", etc.) return\n// a net.PacketConn; and so forth. 
The actual concrete types are not guaranteed to\n// be standard, exported types (wrapping is necessary to provide graceful reloads).\n//\n// Unix sockets will be unlinked before being created, to ensure we can bind to\n// it even if the previous program using it exited uncleanly; it will also be\n// unlinked upon a graceful exit (or when a new config does not use that socket).\n// Listen synchronizes binds to unix domain sockets to avoid race conditions\n// while an existing socket is unlinked.\nfunc (na NetworkAddress) Listen(ctx context.Context, portOffset uint, config net.ListenConfig) (any, error) {\n\tif na.IsUnixNetwork() {\n\t\tunixSocketsMu.Lock()\n\t\tdefer unixSocketsMu.Unlock()\n\t}\n\n\t// check to see if plugin provides listener\n\tif ln, err := getListenerFromPlugin(ctx, na.Network, na.Host, na.port(), portOffset, config); ln != nil || err != nil {\n\t\treturn ln, err\n\t}\n\n\t// create (or reuse) the listener ourselves\n\treturn na.listen(ctx, portOffset, config)\n}\n\nfunc (na NetworkAddress) listen(ctx context.Context, portOffset uint, config net.ListenConfig) (any, error) {\n\tvar (\n\t\tln           any\n\t\terr          error\n\t\taddress      string\n\t\tunixFileMode fs.FileMode\n\t)\n\n\t// split unix socket addr early so lnKey\n\t// is independent of permissions bits\n\tif na.IsUnixNetwork() {\n\t\taddress, unixFileMode, err = internal.SplitUnixSocketPermissionsBits(na.Host)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else if na.IsFdNetwork() {\n\t\taddress = na.Host\n\t} else {\n\t\taddress = na.JoinHostPort(portOffset)\n\t}\n\n\tif strings.HasPrefix(na.Network, \"ip\") {\n\t\tln, err = config.ListenPacket(ctx, na.Network, address)\n\t} else {\n\t\tif na.IsUnixNetwork() {\n\t\t\t// if this is a unix socket, see if we already have it open\n\t\t\tln, err = reuseUnixSocket(na.Network, address)\n\t\t}\n\n\t\tif ln == nil && err == nil {\n\t\t\t// otherwise, create a new listener\n\t\t\tlnKey := listenerKey(na.Network, 
address)\n\t\t\tln, err = listenReusable(ctx, lnKey, na.Network, address, config)\n\t\t}\n\t}\n\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ln == nil {\n\t\treturn nil, fmt.Errorf(\"unsupported network type: %s\", na.Network)\n\t}\n\n\tif IsUnixNetwork(na.Network) {\n\t\tisAbstractUnixSocket := strings.HasPrefix(address, \"@\")\n\t\tif !isAbstractUnixSocket {\n\t\t\terr = os.Chmod(address, unixFileMode)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"unable to set permissions (%s) on %s: %v\", unixFileMode, address, err)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ln, nil\n}\n\n// IsUnixNetwork returns true if na.Network is\n// unix, unixgram, or unixpacket.\nfunc (na NetworkAddress) IsUnixNetwork() bool {\n\treturn IsUnixNetwork(na.Network)\n}\n\n// IsFdNetwork returns true if na.Network is\n// fd or fdgram.\nfunc (na NetworkAddress) IsFdNetwork() bool {\n\treturn IsFdNetwork(na.Network)\n}\n\n// JoinHostPort is like net.JoinHostPort, but where the port\n// is StartPort + offset.\nfunc (na NetworkAddress) JoinHostPort(offset uint) string {\n\tif na.IsUnixNetwork() || na.IsFdNetwork() {\n\t\treturn na.Host\n\t}\n\treturn net.JoinHostPort(na.Host, strconv.FormatUint(uint64(na.StartPort+offset), 10))\n}\n\n// Expand returns one NetworkAddress for each port in the port range.\nfunc (na NetworkAddress) Expand() []NetworkAddress {\n\tsize := na.PortRangeSize()\n\taddrs := make([]NetworkAddress, size)\n\tfor portOffset := range size {\n\t\taddrs[portOffset] = na.At(portOffset)\n\t}\n\treturn addrs\n}\n\n// At returns a NetworkAddress with a port range of just 1\n// at the given port offset; i.e. a NetworkAddress that\n// represents precisely 1 address only.\nfunc (na NetworkAddress) At(portOffset uint) NetworkAddress {\n\tna2 := na\n\tna2.StartPort, na2.EndPort = na.StartPort+portOffset, na.StartPort+portOffset\n\treturn na2\n}\n\n// PortRangeSize returns how many ports are in\n// pa's port range. 
Port ranges are inclusive,\n// so the size is the difference of start and\n// end ports plus one.\nfunc (na NetworkAddress) PortRangeSize() uint {\n\tif na.EndPort < na.StartPort {\n\t\treturn 0\n\t}\n\treturn (na.EndPort - na.StartPort) + 1\n}\n\nfunc (na NetworkAddress) isLoopback() bool {\n\tif na.IsUnixNetwork() || na.IsFdNetwork() {\n\t\treturn true\n\t}\n\tif na.Host == \"localhost\" {\n\t\treturn true\n\t}\n\tif ip, err := netip.ParseAddr(na.Host); err == nil {\n\t\treturn ip.IsLoopback()\n\t}\n\treturn false\n}\n\nfunc (na NetworkAddress) isWildcardInterface() bool {\n\tif na.Host == \"\" {\n\t\treturn true\n\t}\n\tif ip, err := netip.ParseAddr(na.Host); err == nil {\n\t\treturn ip.IsUnspecified()\n\t}\n\treturn false\n}\n\nfunc (na NetworkAddress) port() string {\n\tif na.StartPort == na.EndPort {\n\t\treturn strconv.FormatUint(uint64(na.StartPort), 10)\n\t}\n\treturn fmt.Sprintf(\"%d-%d\", na.StartPort, na.EndPort)\n}\n\n// String reconstructs the address string for human display.\n// The output can be parsed by ParseNetworkAddress(). If the\n// address is a unix socket, any non-zero port will be dropped.\nfunc (na NetworkAddress) String() string {\n\tif na.Network == \"tcp\" && (na.Host != \"\" || na.port() != \"\") {\n\t\tna.Network = \"\" // omit default network value for brevity\n\t}\n\treturn JoinNetworkAddress(na.Network, na.Host, na.port())\n}\n\n// IsUnixNetwork returns true if the netw is a unix network.\nfunc IsUnixNetwork(netw string) bool {\n\treturn strings.HasPrefix(netw, \"unix\")\n}\n\n// IsFdNetwork returns true if the netw is a fd network.\nfunc IsFdNetwork(netw string) bool {\n\treturn strings.HasPrefix(netw, \"fd\")\n}\n\n// ParseNetworkAddress parses addr into its individual\n// components. The input string is expected to be of\n// the form \"network/host:port-range\" where any part is\n// optional. 
The default network, if unspecified, is tcp.\n// Port ranges are inclusive.\n//\n// Network addresses are distinct from URLs and do not\n// use URL syntax.\nfunc ParseNetworkAddress(addr string) (NetworkAddress, error) {\n\treturn ParseNetworkAddressWithDefaults(addr, \"tcp\", 0)\n}\n\n// ParseNetworkAddressWithDefaults is like ParseNetworkAddress but allows\n// the default network and port to be specified.\nfunc ParseNetworkAddressWithDefaults(addr, defaultNetwork string, defaultPort uint) (NetworkAddress, error) {\n\tvar host, port string\n\tnetwork, host, port, err := SplitNetworkAddress(addr)\n\tif err != nil {\n\t\treturn NetworkAddress{}, err\n\t}\n\tif network == \"\" {\n\t\tnetwork = defaultNetwork\n\t}\n\tif IsUnixNetwork(network) {\n\t\t_, _, err := internal.SplitUnixSocketPermissionsBits(host)\n\t\treturn NetworkAddress{\n\t\t\tNetwork: network,\n\t\t\tHost:    host,\n\t\t}, err\n\t}\n\tif IsFdNetwork(network) {\n\t\treturn NetworkAddress{\n\t\t\tNetwork: network,\n\t\t\tHost:    host,\n\t\t}, nil\n\t}\n\tvar start, end uint64\n\tif port == \"\" {\n\t\tstart = uint64(defaultPort)\n\t\tend = uint64(defaultPort)\n\t} else {\n\t\tbefore, after, found := strings.Cut(port, \"-\")\n\t\tif !found {\n\t\t\tafter = before\n\t\t}\n\t\tstart, err = strconv.ParseUint(before, 10, 16)\n\t\tif err != nil {\n\t\t\treturn NetworkAddress{}, fmt.Errorf(\"invalid start port: %v\", err)\n\t\t}\n\t\tend, err = strconv.ParseUint(after, 10, 16)\n\t\tif err != nil {\n\t\t\treturn NetworkAddress{}, fmt.Errorf(\"invalid end port: %v\", err)\n\t\t}\n\t\tif end < start {\n\t\t\treturn NetworkAddress{}, fmt.Errorf(\"end port must not be less than start port\")\n\t\t}\n\t\tif (end - start) > maxPortSpan {\n\t\t\treturn NetworkAddress{}, fmt.Errorf(\"port range exceeds %d ports\", maxPortSpan)\n\t\t}\n\t}\n\treturn NetworkAddress{\n\t\tNetwork:   network,\n\t\tHost:      host,\n\t\tStartPort: uint(start),\n\t\tEndPort:   uint(end),\n\t}, nil\n}\n\n// SplitNetworkAddress splits a into 
its network, host, and port components.\n// Note that port may be a port range (:X-Y), or omitted for unix sockets.\nfunc SplitNetworkAddress(a string) (network, host, port string, err error) {\n\tbeforeSlash, afterSlash, slashFound := strings.Cut(a, \"/\")\n\tif slashFound {\n\t\tnetwork = strings.ToLower(strings.TrimSpace(beforeSlash))\n\t\ta = afterSlash\n\t\tif IsUnixNetwork(network) || IsFdNetwork(network) {\n\t\t\thost = a\n\t\t\treturn network, host, port, err\n\t\t}\n\t}\n\n\thost, port, err = net.SplitHostPort(a)\n\tfirstErr := err\n\n\tif err != nil {\n\t\t// in general, if there was an error, it was likely \"missing port\",\n\t\t// so try removing square brackets around an IPv6 host, adding a bogus\n\t\t// port to take advantage of standard library's robust parser, then\n\t\t// strip the artificial port.\n\t\thost, _, err = net.SplitHostPort(net.JoinHostPort(strings.Trim(a, \"[]\"), \"0\"))\n\t\tport = \"\"\n\t}\n\n\tif err != nil {\n\t\terr = errors.Join(firstErr, err)\n\t}\n\n\treturn network, host, port, err\n}\n\n// JoinNetworkAddress combines network, host, and port into a single\n// address string of the form accepted by ParseNetworkAddress(). For\n// unix sockets, the network should be \"unix\" (or \"unixgram\" or\n// \"unixpacket\") and the path to the socket should be given as the\n// host parameter.\nfunc JoinNetworkAddress(network, host, port string) string {\n\tvar a string\n\tif network != \"\" {\n\t\ta = network + \"/\"\n\t}\n\tif (host != \"\" && port == \"\") || IsUnixNetwork(network) || IsFdNetwork(network) {\n\t\ta += host\n\t} else if port != \"\" {\n\t\ta += net.JoinHostPort(host, port)\n\t}\n\treturn a\n}\n\n// ListenQUIC returns a http3.QUICEarlyListener suitable for use in a Caddy module.\n//\n// The network will be transformed into a QUIC-compatible type if the same address can be used with\n// different networks. 
Currently this just means that for tcp, udp will be used with the same\n// address instead.\n//\n// NOTE: This API is EXPERIMENTAL and may be changed or removed.\n// NOTE: user should close the returned listener twice, once to stop accepting new connections, the second time to free up the packet conn.\nfunc (na NetworkAddress) ListenQUIC(ctx context.Context, portOffset uint, config net.ListenConfig, tlsConf *tls.Config, pcWrappers []PacketConnWrapper, allow0rttconf *bool) (http3.QUICListener, error) {\n\tlnKey := listenerKey(\"quic\"+na.Network, na.JoinHostPort(portOffset))\n\n\tsharedEarlyListener, _, err := listenerPool.LoadOrNew(lnKey, func() (Destructor, error) {\n\t\tlnAny, err := na.Listen(ctx, portOffset, config)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tln := lnAny.(net.PacketConn)\n\n\t\th3ln := ln\n\t\tif len(pcWrappers) == 0 {\n\t\t\tfor {\n\t\t\t\t// retrieve the underlying socket, so quic-go can optimize.\n\t\t\t\tif unwrapper, ok := h3ln.(interface{ Unwrap() net.PacketConn }); ok {\n\t\t\t\t\th3ln = unwrapper.Unwrap()\n\t\t\t\t} else {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t// wrap packet conn before QUIC\n\t\t\tfor _, pcWrapper := range pcWrappers {\n\t\t\t\th3ln = pcWrapper.WrapPacketConn(h3ln)\n\t\t\t}\n\t\t}\n\n\t\tsqs := newSharedQUICState(tlsConf)\n\t\t// http3.ConfigureTLSConfig only uses this field and tls App sets this field as well\n\t\t//nolint:gosec\n\t\tquicTlsConfig := &tls.Config{GetConfigForClient: sqs.getConfigForClient}\n\t\t// Require clients to verify their source address when we're handling more than 1000 handshakes per second.\n\t\t// TODO: make tunable?\n\t\tlimiter := rate.NewLimiter(1000, 1000)\n\t\ttr := &quic.Transport{\n\t\t\tConn:                h3ln,\n\t\t\tVerifySourceAddress: func(addr net.Addr) bool { return !limiter.Allow() },\n\t\t}\n\t\tallow0rtt := true\n\t\tif allow0rttconf != nil {\n\t\t\tallow0rtt = *allow0rttconf\n\t\t}\n\t\tearlyLn, err := 
tr.ListenEarly(\n\t\t\thttp3.ConfigureTLSConfig(quicTlsConfig),\n\t\t\t&quic.Config{\n\t\t\t\tAllow0RTT: allow0rtt,\n\t\t\t\tTracer:    h3qlog.DefaultConnectionTracer,\n\t\t\t},\n\t\t)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// TODO: figure out when to close the listener and the transport\n\t\t// using the original net.PacketConn to close them properly\n\t\treturn &sharedQuicListener{EarlyListener: earlyLn, packetConn: ln, sqs: sqs, key: lnKey}, nil\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tsql := sharedEarlyListener.(*sharedQuicListener)\n\t// add current tls.Config to sqs, so GetConfigForClient will always return the latest tls.Config in case of context cancellation\n\tctx, cancel := sql.sqs.addState(tlsConf)\n\n\treturn &fakeCloseQuicListener{\n\t\tsharedQuicListener: sql,\n\t\tcontext:            ctx,\n\t\tcontextCancel:      cancel,\n\t}, nil\n}\n\n// ListenerUsage returns the current usage count of the given listener address.\nfunc ListenerUsage(network, addr string) int {\n\tcount, _ := listenerPool.References(listenerKey(network, addr))\n\treturn count\n}\n\n// contextAndCancelFunc groups context and its cancelFunc\ntype contextAndCancelFunc struct {\n\tcontext.Context\n\tcontext.CancelCauseFunc\n}\n\n// sharedQUICState manages GetConfigForClient\n// see issue: https://github.com/caddyserver/caddy/pull/4849\ntype sharedQUICState struct {\n\trmu           sync.RWMutex\n\ttlsConfs      map[*tls.Config]contextAndCancelFunc\n\tactiveTlsConf *tls.Config\n}\n\n// newSharedQUICState creates a new sharedQUICState\nfunc newSharedQUICState(tlsConfig *tls.Config) *sharedQUICState {\n\tsqtc := &sharedQUICState{\n\t\ttlsConfs:      make(map[*tls.Config]contextAndCancelFunc),\n\t\tactiveTlsConf: tlsConfig,\n\t}\n\tsqtc.addState(tlsConfig)\n\treturn sqtc\n}\n\n// getConfigForClient is used as tls.Config's GetConfigForClient field\nfunc (sqs *sharedQUICState) getConfigForClient(ch *tls.ClientHelloInfo) (*tls.Config, error) 
{\n\tsqs.rmu.RLock()\n\tdefer sqs.rmu.RUnlock()\n\treturn sqs.activeTlsConf.GetConfigForClient(ch)\n}\n\n// addState adds tls.Config and activeRequests to the map if not present and returns the corresponding context and its cancelFunc\n// so that when cancelled, the active tls.Config will change\nfunc (sqs *sharedQUICState) addState(tlsConfig *tls.Config) (context.Context, context.CancelCauseFunc) {\n\tsqs.rmu.Lock()\n\tdefer sqs.rmu.Unlock()\n\n\tif cacc, ok := sqs.tlsConfs[tlsConfig]; ok {\n\t\treturn cacc.Context, cacc.CancelCauseFunc\n\t}\n\n\tctx, cancel := context.WithCancelCause(context.Background())\n\twrappedCancel := func(cause error) {\n\t\tcancel(cause)\n\n\t\tsqs.rmu.Lock()\n\t\tdefer sqs.rmu.Unlock()\n\n\t\tdelete(sqs.tlsConfs, tlsConfig)\n\t\tif sqs.activeTlsConf == tlsConfig {\n\t\t\t// select another tls.Config, if there is none,\n\t\t\t// related sharedQuicListener will be destroyed anyway\n\t\t\tfor tc := range sqs.tlsConfs {\n\t\t\t\tsqs.activeTlsConf = tc\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\tsqs.tlsConfs[tlsConfig] = contextAndCancelFunc{ctx, wrappedCancel}\n\t// there should be at most 2 tls.Configs\n\tif len(sqs.tlsConfs) > 2 {\n\t\tLog().Warn(\"quic listener tls configs are more than 2\", zap.Int(\"number of configs\", len(sqs.tlsConfs)))\n\t}\n\treturn ctx, wrappedCancel\n}\n\n// sharedQuicListener is like sharedListener, but for quic.EarlyListeners.\ntype sharedQuicListener struct {\n\t*quic.EarlyListener\n\tpacketConn net.PacketConn // we have to hold these because quic-go won't close listeners it didn't create\n\tsqs        *sharedQUICState\n\tkey        string\n}\n\n// Destruct closes the underlying QUIC listener and its associated net.PacketConn.\nfunc (sql *sharedQuicListener) Destruct() error {\n\t// close EarlyListener first to stop any operations being done to the net.PacketConn\n\t_ = sql.EarlyListener.Close()\n\t// then close the net.PacketConn\n\treturn sql.packetConn.Close()\n}\n\n// fakeClosedErr returns an error value that 
is neither temporary\n// nor a timeout, suitable for making the caller think the\n// listener is actually closed\nfunc fakeClosedErr(l interface{ Addr() net.Addr }) error {\n\treturn &net.OpError{\n\t\tOp:   \"accept\",\n\t\tNet:  l.Addr().Network(),\n\t\tAddr: l.Addr(),\n\t\tErr:  errFakeClosed,\n\t}\n}\n\n// errFakeClosed is the underlying error value returned by\n// fakeCloseListener.Accept() after Close() has been called,\n// indicating that it is pretending to be closed so that the\n// server using it can terminate, while the underlying\n// socket is actually left open.\nvar errFakeClosed = fmt.Errorf(\"QUIC listener 'closed' 😉\")\n\ntype fakeCloseQuicListener struct {\n\tclosed              int32 // accessed atomically; belongs to this struct only\n\t*sharedQuicListener       // embedded, so we also become a quic.EarlyListener\n\tcontext             context.Context\n\tcontextCancel       context.CancelCauseFunc\n}\n\n// Currently, Accept ignores the passed context; a situation where someone\n// would need a hot-swappable, QUIC-only server (not HTTP/3, since it uses\n// context.Background here) on which Accept is called with non-empty contexts\n// (note that the standard net listeners' Accept doesn't take a context argument)\n// seems far too rare to justify sacrificing efficiency here.\nfunc (fcql *fakeCloseQuicListener) Accept(_ context.Context) (*quic.Conn, error) {\n\tconn, err := fcql.sharedQuicListener.Accept(fcql.context)\n\tif err == nil {\n\t\treturn conn, nil\n\t}\n\n\t// if the listener is \"closed\", return a fake closed error instead\n\tif atomic.LoadInt32(&fcql.closed) == 1 && errors.Is(err, context.Canceled) {\n\t\treturn nil, fakeClosedErr(fcql)\n\t}\n\treturn nil, err\n}\n\nfunc (fcql *fakeCloseQuicListener) Close() error {\n\tif atomic.CompareAndSwapInt32(&fcql.closed, 0, 1) {\n\t\tfcql.contextCancel(errFakeClosed)\n\t} else if atomic.CompareAndSwapInt32(&fcql.closed, 1, 2) {\n\t\t_, _ = 
listenerPool.Delete(fcql.sharedQuicListener.key)\n\t}\n\treturn nil\n}\n\n// RegisterNetwork registers a network type with Caddy so that if a listener is\n// created for that network type, getListener will be invoked to get the listener.\n// This should be called during init() and will panic if the network type is standard\n// or reserved, or if it is already registered. EXPERIMENTAL and subject to change.\nfunc RegisterNetwork(network string, getListener ListenerFunc) {\n\tnetwork = strings.TrimSpace(strings.ToLower(network))\n\n\tif network == \"tcp\" || network == \"tcp4\" || network == \"tcp6\" ||\n\t\tnetwork == \"udp\" || network == \"udp4\" || network == \"udp6\" ||\n\t\tnetwork == \"unix\" || network == \"unixpacket\" || network == \"unixgram\" ||\n\t\tstrings.HasPrefix(network, \"ip:\") || strings.HasPrefix(network, \"ip4:\") || strings.HasPrefix(network, \"ip6:\") ||\n\t\tnetwork == \"fd\" || network == \"fdgram\" {\n\t\tpanic(\"network type \" + network + \" is reserved\")\n\t}\n\n\t// network is already trimmed and lowercased above\n\tif _, ok := networkTypes[network]; ok {\n\t\tpanic(\"network type \" + network + \" is already registered\")\n\t}\n\n\tnetworkTypes[network] = getListener\n}\n\nvar unixSocketsMu sync.Mutex\n\n// getListenerFromPlugin returns a listener on the given network and address\n// if a plugin has registered the network name. 
It may return (nil, nil) if\n// no plugin can provide a listener.\nfunc getListenerFromPlugin(ctx context.Context, network, host, port string, portOffset uint, config net.ListenConfig) (any, error) {\n\t// get listener from plugin if network type is registered\n\tif getListener, ok := networkTypes[network]; ok {\n\t\tLog().Debug(\"getting listener from plugin\", zap.String(\"network\", network))\n\t\treturn getListener(ctx, network, host, port, portOffset, config)\n\t}\n\n\treturn nil, nil\n}\n\nfunc listenerKey(network, addr string) string {\n\treturn network + \"/\" + addr\n}\n\n// ListenerFunc is a function that can return a listener given a network and address.\n// The listeners must be capable of overlapping: with Caddy, new configs are loaded\n// before old ones are unloaded, so listeners may overlap briefly if the configs\n// both need the same listener. EXPERIMENTAL and subject to change.\ntype ListenerFunc func(ctx context.Context, network, host, portRange string, portOffset uint, cfg net.ListenConfig) (any, error)\n\nvar networkTypes = map[string]ListenerFunc{}\n\n// ListenerWrapper is a type that wraps a listener\n// so it can modify the input listener's methods.\n// Modules that implement this interface are found\n// in the caddy.listeners namespace. Usually, to\n// wrap a listener, you will define your own struct\n// type that embeds the input listener, then\n// implement your own methods that you want to wrap,\n// calling the underlying listener's methods where\n// appropriate.\ntype ListenerWrapper interface {\n\tWrapListener(net.Listener) net.Listener\n}\n\n// PacketConnWrapper is a type that wraps a packet conn\n// so it can modify the input packet conn methods.\n// Modules that implement this interface are found\n// in the caddy.packetconns namespace. 
Usually, to\n// wrap a packet conn, you will define your own struct\n// type that embeds the input packet conn, then\n// implement your own methods that you want to wrap,\n// calling the underlying packet conn's methods where\n// appropriate.\ntype PacketConnWrapper interface {\n\tWrapPacketConn(net.PacketConn) net.PacketConn\n}\n\n// listenerPool stores and allows reuse of active listeners.\nvar listenerPool = NewUsagePool()\n\nconst maxPortSpan = 65535\n"
  },
  {
    "path": "listeners_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage caddy\n\nfunc FuzzParseNetworkAddress(data []byte) int {\n\t_, err := ParseNetworkAddress(string(data))\n\tif err != nil {\n\t\treturn 0\n\t}\n\treturn 1\n}\n"
  },
  {
    "path": "listeners_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\nfunc TestSplitNetworkAddress(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput         string\n\t\texpectNetwork string\n\t\texpectHost    string\n\t\texpectPort    string\n\t\texpectErr     bool\n\t}{\n\t\t{\n\t\t\tinput:      \"\",\n\t\t\texpectHost: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:      \"foo\",\n\t\t\texpectHost: \"foo\",\n\t\t},\n\t\t{\n\t\t\tinput: \":\", // empty host & empty port\n\t\t},\n\t\t{\n\t\t\tinput:      \"::\",\n\t\t\texpectHost: \"::\",\n\t\t},\n\t\t{\n\t\t\tinput:      \"[::]\",\n\t\t\texpectHost: \"::\",\n\t\t},\n\t\t{\n\t\t\tinput:      \":1234\",\n\t\t\texpectPort: \"1234\",\n\t\t},\n\t\t{\n\t\t\tinput:      \"foo:1234\",\n\t\t\texpectHost: \"foo\",\n\t\t\texpectPort: \"1234\",\n\t\t},\n\t\t{\n\t\t\tinput:      \"foo:1234-5678\",\n\t\t\texpectHost: \"foo\",\n\t\t\texpectPort: \"1234-5678\",\n\t\t},\n\t\t{\n\t\t\tinput:         \"udp/foo:1234\",\n\t\t\texpectNetwork: \"udp\",\n\t\t\texpectHost:    \"foo\",\n\t\t\texpectPort:    \"1234\",\n\t\t},\n\t\t{\n\t\t\tinput:         \"tcp6/foo:1234-5678\",\n\t\t\texpectNetwork: \"tcp6\",\n\t\t\texpectHost:    \"foo\",\n\t\t\texpectPort:    \"1234-5678\",\n\t\t},\n\t\t{\n\t\t\tinput:         \"udp/\",\n\t\t\texpectNetwork: 
\"udp\",\n\t\t\texpectHost:    \"\",\n\t\t},\n\t\t{\n\t\t\tinput:         \"unix//foo/bar\",\n\t\t\texpectNetwork: \"unix\",\n\t\t\texpectHost:    \"/foo/bar\",\n\t\t},\n\t\t{\n\t\t\tinput:         \"unixgram//foo/bar\",\n\t\t\texpectNetwork: \"unixgram\",\n\t\t\texpectHost:    \"/foo/bar\",\n\t\t},\n\t\t{\n\t\t\tinput:         \"unixpacket//foo/bar\",\n\t\t\texpectNetwork: \"unixpacket\",\n\t\t\texpectHost:    \"/foo/bar\",\n\t\t},\n\t} {\n\t\tactualNetwork, actualHost, actualPort, err := SplitNetworkAddress(tc.input)\n\t\tif tc.expectErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected error but got %v\", i, err)\n\t\t}\n\t\tif !tc.expectErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error but got %v\", i, err)\n\t\t}\n\t\tif actualNetwork != tc.expectNetwork {\n\t\t\tt.Errorf(\"Test %d: Expected network '%s' but got '%s'\", i, tc.expectNetwork, actualNetwork)\n\t\t}\n\t\tif actualHost != tc.expectHost {\n\t\t\tt.Errorf(\"Test %d: Expected host '%s' but got '%s'\", i, tc.expectHost, actualHost)\n\t\t}\n\t\tif actualPort != tc.expectPort {\n\t\t\tt.Errorf(\"Test %d: Expected port '%s' but got '%s'\", i, tc.expectPort, actualPort)\n\t\t}\n\t}\n}\n\nfunc TestJoinNetworkAddress(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tnetwork, host, port string\n\t\texpect              string\n\t}{\n\t\t{\n\t\t\tnetwork: \"\", host: \"\", port: \"\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"tcp\", host: \"\", port: \"\",\n\t\t\texpect: \"tcp/\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"\", host: \"foo\", port: \"\",\n\t\t\texpect: \"foo\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"\", host: \"\", port: \"1234\",\n\t\t\texpect: \":1234\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"\", host: \"\", port: \"1234-5678\",\n\t\t\texpect: \":1234-5678\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"\", host: \"foo\", port: \"1234\",\n\t\t\texpect: \"foo:1234\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"udp\", host: \"foo\", port: \"1234\",\n\t\t\texpect: 
\"udp/foo:1234\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"udp\", host: \"\", port: \"1234\",\n\t\t\texpect: \"udp/:1234\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"unix\", host: \"/foo/bar\", port: \"\",\n\t\t\texpect: \"unix//foo/bar\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"unix\", host: \"/foo/bar\", port: \"0\",\n\t\t\texpect: \"unix//foo/bar\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"unix\", host: \"/foo/bar\", port: \"1234\",\n\t\t\texpect: \"unix//foo/bar\",\n\t\t},\n\t\t{\n\t\t\tnetwork: \"\", host: \"::1\", port: \"1234\",\n\t\t\texpect: \"[::1]:1234\",\n\t\t},\n\t} {\n\t\tactual := JoinNetworkAddress(tc.network, tc.host, tc.port)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected '%s' but got '%s'\", i, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestParseNetworkAddress(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput          string\n\t\tdefaultNetwork string\n\t\tdefaultPort    uint\n\t\texpectAddr     NetworkAddress\n\t\texpectErr      bool\n\t}{\n\t\t{\n\t\t\tinput:      \"\",\n\t\t\texpectAddr: NetworkAddress{},\n\t\t},\n\t\t{\n\t\t\tinput:          \":\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork: \"udp\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"[::]\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\tdefaultPort:    53,\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"udp\",\n\t\t\t\tHost:      \"::\",\n\t\t\t\tStartPort: 53,\n\t\t\t\tEndPort:   53,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \":1234\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"udp\",\n\t\t\t\tHost:      \"\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"udp/:1234\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"udp\",\n\t\t\t\tHost:      \"\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"tcp6/:1234\",\n\t\t\tdefaultNetwork: 
\"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp6\",\n\t\t\t\tHost:      \"\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"tcp4/localhost:1234\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp4\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix//foo/bar\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork: \"unix\",\n\t\t\t\tHost:    \"/foo/bar\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:1234-1234\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:2-1\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectErr:      true,\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:0\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 0,\n\t\t\t\tEndPort:   0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:1-999999999999\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectErr:      true,\n\t\t},\n\t} {\n\t\tactualAddr, err := ParseNetworkAddressWithDefaults(tc.input, tc.defaultNetwork, tc.defaultPort)\n\t\tif tc.expectErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected error but got: %v\", i, err)\n\t\t}\n\t\tif !tc.expectErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error but got: %v\", i, err)\n\t\t}\n\n\t\tif actualAddr.Network != tc.expectAddr.Network {\n\t\t\tt.Errorf(\"Test %d: Expected network '%v' but got '%v'\", i, tc.expectAddr, actualAddr)\n\t\t}\n\t\tif !reflect.DeepEqual(tc.expectAddr, actualAddr) {\n\t\t\tt.Errorf(\"Test %d: Expected addresses %v but got %v\", i, tc.expectAddr, 
actualAddr)\n\t\t}\n\t}\n}\n\nfunc TestParseNetworkAddressWithDefaults(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput          string\n\t\tdefaultNetwork string\n\t\tdefaultPort    uint\n\t\texpectAddr     NetworkAddress\n\t\texpectErr      bool\n\t}{\n\t\t{\n\t\t\tinput:      \"\",\n\t\t\texpectAddr: NetworkAddress{},\n\t\t},\n\t\t{\n\t\t\tinput:          \":\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork: \"udp\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"[::]\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\tdefaultPort:    53,\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"udp\",\n\t\t\t\tHost:      \"::\",\n\t\t\t\tStartPort: 53,\n\t\t\t\tEndPort:   53,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \":1234\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"udp\",\n\t\t\t\tHost:      \"\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"udp/:1234\",\n\t\t\tdefaultNetwork: \"udp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"udp\",\n\t\t\t\tHost:      \"\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"tcp6/:1234\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp6\",\n\t\t\t\tHost:      \"\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"tcp4/localhost:1234\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp4\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix//foo/bar\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork: \"unix\",\n\t\t\t\tHost:    \"/foo/bar\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:1234-1234\",\n\t\t\tdefaultNetwork: 
\"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:2-1\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectErr:      true,\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:0\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectAddr: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 0,\n\t\t\t\tEndPort:   0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:1-999999999999\",\n\t\t\tdefaultNetwork: \"tcp\",\n\t\t\texpectErr:      true,\n\t\t},\n\t} {\n\t\tactualAddr, err := ParseNetworkAddressWithDefaults(tc.input, tc.defaultNetwork, tc.defaultPort)\n\t\tif tc.expectErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected error but got: %v\", i, err)\n\t\t}\n\t\tif !tc.expectErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error but got: %v\", i, err)\n\t\t}\n\n\t\tif actualAddr.Network != tc.expectAddr.Network {\n\t\t\tt.Errorf(\"Test %d: Expected network '%v' but got '%v'\", i, tc.expectAddr, actualAddr)\n\t\t}\n\t\tif !reflect.DeepEqual(tc.expectAddr, actualAddr) {\n\t\t\tt.Errorf(\"Test %d: Expected addresses %v but got %v\", i, tc.expectAddr, actualAddr)\n\t\t}\n\t}\n}\n\nfunc TestJoinHostPort(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tpa     NetworkAddress\n\t\toffset uint\n\t\texpect string\n\t}{\n\t\t{\n\t\t\tpa: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1234,\n\t\t\t},\n\t\t\texpect: \"localhost:1234\",\n\t\t},\n\t\t{\n\t\t\tpa: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 1234,\n\t\t\t\tEndPort:   1235,\n\t\t\t},\n\t\t\texpect: \"localhost:1234\",\n\t\t},\n\t\t{\n\t\t\tpa: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 
1234,\n\t\t\t\tEndPort:   1235,\n\t\t\t},\n\t\t\toffset: 1,\n\t\t\texpect: \"localhost:1235\",\n\t\t},\n\t\t{\n\t\t\tpa: NetworkAddress{\n\t\t\t\tNetwork: \"unix\",\n\t\t\t\tHost:    \"/run/php/php7.3-fpm.sock\",\n\t\t\t},\n\t\t\texpect: \"/run/php/php7.3-fpm.sock\",\n\t\t},\n\t} {\n\t\tactual := tc.pa.JoinHostPort(tc.offset)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected '%s' but got '%s'\", i, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestExpand(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput  NetworkAddress\n\t\texpect []NetworkAddress\n\t}{\n\t\t{\n\t\t\tinput: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 2000,\n\t\t\t\tEndPort:   2000,\n\t\t\t},\n\t\t\texpect: []NetworkAddress{\n\t\t\t\t{\n\t\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\t\tHost:      \"localhost\",\n\t\t\t\t\tStartPort: 2000,\n\t\t\t\t\tEndPort:   2000,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 2000,\n\t\t\t\tEndPort:   2002,\n\t\t\t},\n\t\t\texpect: []NetworkAddress{\n\t\t\t\t{\n\t\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\t\tHost:      \"localhost\",\n\t\t\t\t\tStartPort: 2000,\n\t\t\t\t\tEndPort:   2000,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\t\tHost:      \"localhost\",\n\t\t\t\t\tStartPort: 2001,\n\t\t\t\t\tEndPort:   2001,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\t\tHost:      \"localhost\",\n\t\t\t\t\tStartPort: 2002,\n\t\t\t\t\tEndPort:   2002,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: NetworkAddress{\n\t\t\t\tNetwork:   \"tcp\",\n\t\t\t\tHost:      \"localhost\",\n\t\t\t\tStartPort: 2000,\n\t\t\t\tEndPort:   1999,\n\t\t\t},\n\t\t\texpect: []NetworkAddress{},\n\t\t},\n\t\t{\n\t\t\tinput: NetworkAddress{\n\t\t\t\tNetwork:   \"unix\",\n\t\t\t\tHost:      \"/foo/bar\",\n\t\t\t\tStartPort: 0,\n\t\t\t\tEndPort:   0,\n\t\t\t},\n\t\t\texpect: 
[]NetworkAddress{\n\t\t\t\t{\n\t\t\t\t\tNetwork:   \"unix\",\n\t\t\t\t\tHost:      \"/foo/bar\",\n\t\t\t\t\tStartPort: 0,\n\t\t\t\t\tEndPort:   0,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t} {\n\t\tactual := tc.input.Expand()\n\t\tif !reflect.DeepEqual(actual, tc.expect) {\n\t\t\tt.Errorf(\"Test %d: Expected %+v but got %+v\", i, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestSplitUnixSocketPermissionsBits(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput          string\n\t\texpectNetwork  string\n\t\texpectPath     string\n\t\texpectFileMode string\n\t\texpectErr      bool\n\t}{\n\t\t{\n\t\t\tinput:          \"./foo.socket\",\n\t\t\texpectPath:     \"./foo.socket\",\n\t\t\texpectFileMode: \"--w-------\",\n\t\t},\n\t\t{\n\t\t\tinput:          `.\\relative\\path.socket`,\n\t\t\texpectPath:     `.\\relative\\path.socket`,\n\t\t\texpectFileMode: \"--w-------\",\n\t\t},\n\t\t{\n\t\t\t// literal colon in resulting address\n\t\t\t// and defaulting to 0200 bits\n\t\t\tinput:          \"./foo.socket:0666\",\n\t\t\texpectPath:     \"./foo.socket:0666\",\n\t\t\texpectFileMode: \"--w-------\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"./foo.socket|0220\",\n\t\t\texpectPath:     \"./foo.socket\",\n\t\t\texpectFileMode: \"--w--w----\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"/var/run/foo|222\",\n\t\t\texpectPath:     \"/var/run/foo\",\n\t\t\texpectFileMode: \"--w--w--w-\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"./foo.socket|0660\",\n\t\t\texpectPath:     \"./foo.socket\",\n\t\t\texpectFileMode: \"-rw-rw----\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"./foo.socket|0666\",\n\t\t\texpectPath:     \"./foo.socket\",\n\t\t\texpectFileMode: \"-rw-rw-rw-\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"/var/run/foo|666\",\n\t\t\texpectPath:     \"/var/run/foo\",\n\t\t\texpectFileMode: \"-rw-rw-rw-\",\n\t\t},\n\t\t{\n\t\t\tinput:          `c:\\absolute\\path.socket|220`,\n\t\t\texpectPath:     `c:\\absolute\\path.socket`,\n\t\t\texpectFileMode: \"--w--w----\",\n\t\t},\n\t\t{\n\t\t\t// symbolic 
permission representation is not supported for now\n\t\t\tinput:     \"./foo.socket|u=rw,g=rw,o=rw\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\t// octal (base-8) permission representation has to be between\n\t\t\t// `0` for no read, no write, no exec (`---`) and\n\t\t\t// `7` for read (4), write (2), exec (1) (`rwx` => `4+2+1 = 7`)\n\t\t\tinput:     \"./foo.socket|888\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\t// too many '|' separators in address\n\t\t\tinput:     \"./foo.socket|123456|0660\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\t// owner is missing write perms\n\t\t\tinput:     \"./foo.socket|0522\",\n\t\t\texpectErr: true,\n\t\t},\n\t} {\n\t\tactualPath, actualFileMode, err := internal.SplitUnixSocketPermissionsBits(tc.input)\n\t\tif tc.expectErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected error but got: %v\", i, err)\n\t\t}\n\t\tif !tc.expectErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error but got: %v\", i, err)\n\t\t}\n\t\tif actualPath != tc.expectPath {\n\t\t\tt.Errorf(\"Test %d: Expected path '%s' but got '%s'\", i, tc.expectPath, actualPath)\n\t\t}\n\t\t// fileMode.Perm().String() renders 0 as \"----------\"\n\t\tif !tc.expectErr && actualFileMode.Perm().String() != tc.expectFileMode {\n\t\t\tt.Errorf(\"Test %d: Expected perms '%s' but got '%s'\", i, tc.expectFileMode, actualFileMode.Perm().String())\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "logging.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"os\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/term\"\n\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\nfunc init() {\n\tRegisterModule(StdoutWriter{})\n\tRegisterModule(StderrWriter{})\n\tRegisterModule(DiscardWriter{})\n}\n\n// Logging facilitates logging within Caddy. The default log is\n// called \"default\" and you can customize it. You can also define\n// additional logs.\n//\n// By default, all logs at INFO level and higher are written to\n// standard error (\"stderr\" writer) in a human-readable format\n// (\"console\" encoder if stdout is an interactive terminal, \"json\"\n// encoder otherwise).\n//\n// All defined logs accept all log entries by default, but you\n// can filter by level and module/logger names. A logger's name\n// is the same as the module's name, but a module may append to\n// logger names for more specificity. For example, you can\n// filter logs emitted only by HTTP handlers using the name\n// \"http.handlers\", because all HTTP handler module names have\n// that prefix.\n//\n// Caddy logs (except the sink) are zero-allocation, so they are\n// very high-performing in terms of memory and CPU time. 
Enabling\n// sampling can further increase throughput on extremely high-load\n// servers.\ntype Logging struct {\n\t// Sink is the destination for all unstructured logs emitted\n\t// from Go's standard library logger. These logs are common\n\t// in dependencies that are not designed specifically for use\n\t// in Caddy. Because it is global and unstructured, the sink\n\t// lacks most advanced features and customizations.\n\tSink *SinkLog `json:\"sink,omitempty\"`\n\n\t// Logs are your logs, keyed by an arbitrary name of your\n\t// choosing. The default log can be customized by defining\n\t// a log called \"default\". You can further define other logs\n\t// and filter what kinds of entries they accept.\n\tLogs map[string]*CustomLog `json:\"logs,omitempty\"`\n\n\t// a list of all keys for open writers; all writers\n\t// that are opened to provision this logging config\n\t// must have their keys added to this list so they\n\t// can be closed when cleaning up\n\twriterKeys []string\n}\n\n// openLogs sets up the config and opens all the configured writers.\n// It closes its logs when ctx is canceled, so it should clean up\n// after itself.\nfunc (logging *Logging) openLogs(ctx Context) error {\n\t// make sure to deallocate resources when context is done\n\tctx.OnCancel(func() {\n\t\terr := logging.closeLogs()\n\t\tif err != nil {\n\t\t\tLog().Error(\"closing logs\", zap.Error(err))\n\t\t}\n\t})\n\n\t// set up the \"sink\" log first (std lib's default global logger)\n\tif logging.Sink != nil {\n\t\terr := logging.Sink.provision(ctx, logging)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"setting up sink log: %v\", err)\n\t\t}\n\t}\n\n\t// as a special case, set up the default structured Caddy log next\n\tif err := logging.setupNewDefault(ctx); err != nil {\n\t\treturn err\n\t}\n\n\t// then set up any other custom logs\n\tfor name, l := range logging.Logs {\n\t\t// the default log is already set up\n\t\tif name == DefaultLoggerName {\n\t\t\tcontinue\n\t\t}\n\n\t\terr := 
l.provision(ctx, logging)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"setting up custom log '%s': %v\", name, err)\n\t\t}\n\n\t\t// Any other logs that use the discard writer can be deleted\n\t\t// entirely. This avoids encoding and processing of each\n\t\t// log entry that would just be thrown away anyway. Notably,\n\t\t// we do not reach this point for the default log, which MUST\n\t\t// exist, otherwise core log emissions would panic because\n\t\t// they use the Log() function directly which expects a non-nil\n\t\t// logger. Even if we keep logs with a discard writer, they\n\t\t// have a nop core, and keeping them at all seems unnecessary.\n\t\tif _, ok := l.writerOpener.(*DiscardWriter); ok {\n\t\t\tdelete(logging.Logs, name)\n\t\t\tcontinue\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (logging *Logging) setupNewDefault(ctx Context) error {\n\tif logging.Logs == nil {\n\t\tlogging.Logs = make(map[string]*CustomLog)\n\t}\n\n\t// extract the user-defined default log, if any\n\tnewDefault := new(defaultCustomLog)\n\tif userDefault, ok := logging.Logs[DefaultLoggerName]; ok {\n\t\tnewDefault.CustomLog = userDefault\n\t} else {\n\t\t// if none, make one with our own default settings\n\t\tvar err error\n\t\tnewDefault, err = newDefaultProductionLog()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"setting up default Caddy log: %v\", err)\n\t\t}\n\t\tlogging.Logs[DefaultLoggerName] = newDefault.CustomLog\n\t}\n\n\t// options for the default logger\n\toptions, err := newDefault.CustomLog.buildOptions()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"setting up default log: %v\", err)\n\t}\n\n\t// set up this new log\n\terr = newDefault.CustomLog.provision(ctx, logging)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"setting up default log: %v\", err)\n\t}\n\n\tfilteringCore := &filteringCore{newDefault.CustomLog.core, newDefault.CustomLog}\n\tnewDefault.logger = zap.New(filteringCore, options...)\n\n\t// redirect the default caddy logs\n\tdefaultLoggerMu.Lock()\n\toldDefault := 
defaultLogger\n\tdefaultLogger = newDefault\n\tdefaultLoggerMu.Unlock()\n\n\t// if the new writer is different, indicate it in the logs for convenience\n\tvar newDefaultLogWriterKey, currentDefaultLogWriterKey string\n\tvar newDefaultLogWriterStr, currentDefaultLogWriterStr string\n\tif newDefault.writerOpener != nil {\n\t\tnewDefaultLogWriterKey = newDefault.writerOpener.WriterKey()\n\t\tnewDefaultLogWriterStr = newDefault.writerOpener.String()\n\t}\n\tif oldDefault.writerOpener != nil {\n\t\tcurrentDefaultLogWriterKey = oldDefault.writerOpener.WriterKey()\n\t\tcurrentDefaultLogWriterStr = oldDefault.writerOpener.String()\n\t}\n\tif newDefaultLogWriterKey != currentDefaultLogWriterKey {\n\t\toldDefault.logger.Info(\"redirected default logger\",\n\t\t\tzap.String(\"from\", currentDefaultLogWriterStr),\n\t\t\tzap.String(\"to\", newDefaultLogWriterStr),\n\t\t)\n\t}\n\n\t// if we had a buffered core, flush its contents ASAP\n\t// before we try to log anything else, so the order of\n\t// logs is preserved\n\tif oldBufferCore, ok := oldDefault.logger.Core().(*internal.LogBufferCore); ok {\n\t\toldBufferCore.FlushTo(newDefault.logger)\n\t}\n\n\treturn nil\n}\n\n// closeLogs cleans up resources allocated during openLogs.\n// A successful call to openLogs calls this automatically\n// when the context is canceled.\nfunc (logging *Logging) closeLogs() error {\n\tfor _, key := range logging.writerKeys {\n\t\t_, err := writers.Delete(key)\n\t\tif err != nil {\n\t\t\tlog.Printf(\"[ERROR] Closing log writer %v: %v\", key, err)\n\t\t}\n\t}\n\treturn nil\n}\n\n// Logger returns a logger that is ready for the module to use.\nfunc (logging *Logging) Logger(mod Module) *zap.Logger {\n\tmodID := string(mod.CaddyModule().ID)\n\tvar cores []zapcore.Core\n\tvar options []zap.Option\n\n\tif logging != nil {\n\t\tfor _, l := range logging.Logs {\n\t\t\tif l.matchesModule(modID) {\n\t\t\t\tif len(l.Include) == 0 && len(l.Exclude) == 0 {\n\t\t\t\t\tcores = append(cores, 
l.core)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tif len(options) == 0 {\n\t\t\t\t\tnewOptions, err := l.buildOptions()\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tLog().Error(\"building options for logger\", zap.String(\"module\", modID), zap.Error(err))\n\t\t\t\t\t}\n\t\t\t\t\toptions = newOptions\n\t\t\t\t}\n\t\t\t\tcores = append(cores, &filteringCore{Core: l.core, cl: l})\n\t\t\t}\n\t\t}\n\t}\n\n\tmultiCore := zapcore.NewTee(cores...)\n\n\treturn zap.New(multiCore, options...).Named(modID)\n}\n\n// openWriter opens a writer using opener, and returns true if\n// the writer is new, or false if the writer already exists.\nfunc (logging *Logging) openWriter(opener WriterOpener) (io.WriteCloser, bool, error) {\n\tkey := opener.WriterKey()\n\twriter, loaded, err := writers.LoadOrNew(key, func() (Destructor, error) {\n\t\tw, err := opener.OpenWriter()\n\t\treturn writerDestructor{w}, err\n\t})\n\tif err != nil {\n\t\treturn nil, false, err\n\t}\n\tlogging.writerKeys = append(logging.writerKeys, key)\n\treturn writer.(io.WriteCloser), !loaded, nil\n}\n\n// WriterOpener is a module that can open a log writer.\n// It can return a human-readable string representation\n// of itself so that operators can understand where\n// the logs are going.\ntype WriterOpener interface {\n\tfmt.Stringer\n\n\t// WriterKey is a string that uniquely identifies this\n\t// writer configuration. It is not shown to humans.\n\tWriterKey() string\n\n\t// OpenWriter opens a log for writing. 
The writer\n\t// should be safe for concurrent use but need not\n\t// be synchronous.\n\tOpenWriter() (io.WriteCloser, error)\n}\n\n// IsWriterStandardStream returns true if the input is a\n// writer-opener to a standard stream (stdout, stderr).\nfunc IsWriterStandardStream(wo WriterOpener) bool {\n\tswitch wo.(type) {\n\tcase StdoutWriter, StderrWriter,\n\t\t*StdoutWriter, *StderrWriter:\n\t\treturn true\n\t}\n\treturn false\n}\n\ntype writerDestructor struct {\n\tio.WriteCloser\n}\n\nfunc (wdest writerDestructor) Destruct() error {\n\treturn wdest.Close()\n}\n\n// BaseLog contains the common logging parameters for logging.\ntype BaseLog struct {\n\t// The module that writes out log entries for the sink.\n\tWriterRaw json.RawMessage `json:\"writer,omitempty\" caddy:\"namespace=caddy.logging.writers inline_key=output\"`\n\n\t// The encoder is how the log entries are formatted or encoded.\n\tEncoderRaw json.RawMessage `json:\"encoder,omitempty\" caddy:\"namespace=caddy.logging.encoders inline_key=format\"`\n\n\t// Tees entries through a zap.Core module which can extract\n\t// log entry metadata and fields for further processing.\n\tCoreRaw json.RawMessage `json:\"core,omitempty\" caddy:\"namespace=caddy.logging.cores inline_key=module\"`\n\n\t// Level is the minimum level to emit, and is inclusive.\n\t// Possible levels: DEBUG, INFO, WARN, ERROR, PANIC, and FATAL\n\tLevel string `json:\"level,omitempty\"`\n\n\t// Sampling configures log entry sampling. If enabled,\n\t// only some log entries will be emitted. This is useful\n\t// for improving performance on extremely high-pressure\n\t// servers.\n\tSampling *LogSampling `json:\"sampling,omitempty\"`\n\n\t// If true, the log entry will include the caller's\n\t// file name and line number. Default off.\n\tWithCaller bool `json:\"with_caller,omitempty\"`\n\n\t// If non-zero, and `with_caller` is true, this many\n\t// stack frames will be skipped when determining the\n\t// caller. 
Default 0.\n\tWithCallerSkip int `json:\"with_caller_skip,omitempty\"`\n\n\t// If not empty, the log entry will include a stack trace\n\t// for all logs at the given level or higher. See `level`\n\t// for possible values. Default off.\n\tWithStacktrace string `json:\"with_stacktrace,omitempty\"`\n\n\twriterOpener WriterOpener\n\twriter       io.WriteCloser\n\tencoder      zapcore.Encoder\n\tlevelEnabler zapcore.LevelEnabler\n\tcore         zapcore.Core\n}\n\nfunc (cl *BaseLog) provisionCommon(ctx Context, logging *Logging) error {\n\tif cl.WriterRaw != nil {\n\t\tmod, err := ctx.LoadModule(cl, \"WriterRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading log writer module: %v\", err)\n\t\t}\n\t\tcl.writerOpener = mod.(WriterOpener)\n\t}\n\tif cl.writerOpener == nil {\n\t\tcl.writerOpener = StderrWriter{}\n\t}\n\tvar err error\n\tcl.writer, _, err = logging.openWriter(cl.writerOpener)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"opening log writer using %#v: %v\", cl.writerOpener, err)\n\t}\n\n\t// set up the log level\n\tcl.levelEnabler, err = parseLevel(cl.Level)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif cl.EncoderRaw != nil {\n\t\tmod, err := ctx.LoadModule(cl, \"EncoderRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading log encoder module: %v\", err)\n\t\t}\n\t\tcl.encoder = mod.(zapcore.Encoder)\n\n\t\t// if the encoder module needs the writer to determine\n\t\t// the correct default to use for a nested encoder, we\n\t\t// pass it down as a secondary provisioning step\n\t\tif cfd, ok := mod.(ConfiguresFormatterDefault); ok {\n\t\t\tif err := cfd.ConfigureDefaultFormat(cl.writerOpener); err != nil {\n\t\t\t\treturn fmt.Errorf(\"configuring default format for encoder module: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\tif cl.encoder == nil {\n\t\tcl.encoder = newDefaultProductionLogEncoder(cl.writerOpener)\n\t}\n\tcl.buildCore()\n\tif cl.CoreRaw != nil {\n\t\tmod, err := ctx.LoadModule(cl, \"CoreRaw\")\n\t\tif err != nil {\n\t\t\treturn 
fmt.Errorf(\"loading log core module: %v\", err)\n\t\t}\n\t\tcore := mod.(zapcore.Core)\n\t\tcl.core = zapcore.NewTee(cl.core, core)\n\t}\n\treturn nil\n}\n\nfunc (cl *BaseLog) buildCore() {\n\t// logs which only discard their output don't need\n\t// to perform encoding or any other processing steps\n\t// at all, so just shortcut to a nop core instead\n\tif _, ok := cl.writerOpener.(*DiscardWriter); ok {\n\t\tcl.core = zapcore.NewNopCore()\n\t\treturn\n\t}\n\tc := zapcore.NewCore(\n\t\tcl.encoder,\n\t\tzapcore.AddSync(cl.writer),\n\t\tcl.levelEnabler,\n\t)\n\tif cl.Sampling != nil {\n\t\tif cl.Sampling.Interval == 0 {\n\t\t\tcl.Sampling.Interval = 1 * time.Second\n\t\t}\n\t\tif cl.Sampling.First == 0 {\n\t\t\tcl.Sampling.First = 100\n\t\t}\n\t\tif cl.Sampling.Thereafter == 0 {\n\t\t\tcl.Sampling.Thereafter = 100\n\t\t}\n\t\tc = zapcore.NewSamplerWithOptions(c, cl.Sampling.Interval,\n\t\t\tcl.Sampling.First, cl.Sampling.Thereafter)\n\t}\n\tcl.core = c\n}\n\nfunc (cl *BaseLog) buildOptions() ([]zap.Option, error) {\n\tvar options []zap.Option\n\tif cl.WithCaller {\n\t\toptions = append(options, zap.AddCaller())\n\t\tif cl.WithCallerSkip != 0 {\n\t\t\toptions = append(options, zap.AddCallerSkip(cl.WithCallerSkip))\n\t\t}\n\t}\n\tif cl.WithStacktrace != \"\" {\n\t\tlevelEnabler, err := parseLevel(cl.WithStacktrace)\n\t\tif err != nil {\n\t\t\treturn options, fmt.Errorf(\"setting up default Caddy log: %v\", err)\n\t\t}\n\t\toptions = append(options, zap.AddStacktrace(levelEnabler))\n\t}\n\treturn options, nil\n}\n\n// SinkLog configures the default Go standard library\n// global logger in the log package. This is necessary because\n// module dependencies which are not built specifically for\n// Caddy will use the standard logger. 
This is also known as\n// the \"sink\" logger.\ntype SinkLog struct {\n\tBaseLog\n}\n\nfunc (sll *SinkLog) provision(ctx Context, logging *Logging) error {\n\tif err := sll.provisionCommon(ctx, logging); err != nil {\n\t\treturn err\n\t}\n\n\toptions, err := sll.buildOptions()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tlogger := zap.New(sll.core, options...)\n\tctx.cleanupFuncs = append(ctx.cleanupFuncs, zap.RedirectStdLog(logger))\n\treturn nil\n}\n\n// CustomLog represents a custom logger configuration.\n//\n// By default, a log will emit all log entries. Some entries\n// will be skipped if sampling is enabled. Further, the Include\n// and Exclude parameters define which loggers (by name) are\n// allowed or rejected from emitting in this log. If both Include\n// and Exclude are populated, their values must be mutually\n// exclusive, and longer namespaces have priority. If neither\n// are populated, all logs are emitted.\ntype CustomLog struct {\n\tBaseLog\n\n\t// Include defines the names of loggers to emit in this\n\t// log. For example, to include only logs emitted by the\n\t// admin API, you would include \"admin.api\".\n\tInclude []string `json:\"include,omitempty\"`\n\n\t// Exclude defines the names of loggers that should be\n\t// skipped by this log. For example, to exclude only\n\t// HTTP access logs, you would exclude \"http.log.access\".\n\tExclude []string `json:\"exclude,omitempty\"`\n}\n\nfunc (cl *CustomLog) provision(ctx Context, logging *Logging) error {\n\tif err := cl.provisionCommon(ctx, logging); err != nil {\n\t\treturn err\n\t}\n\n\t// If both Include and Exclude lists are populated, then each item must\n\t// be a superspace or subspace of an item in the other list, because\n\t// populating both lists means that any given item is either a rule\n\t// or an exception to another rule. But if the item is not a super-\n\t// or sub-space of any item in the other list, it is neither a rule\n\t// nor an exception, and is a contradiction. 
Ensure, too, that the\n\t// sets do not intersect, which is also a contradiction.\n\tif len(cl.Include) > 0 && len(cl.Exclude) > 0 {\n\t\t// prevent intersections\n\t\tfor _, allow := range cl.Include {\n\t\t\tif slices.Contains(cl.Exclude, allow) {\n\t\t\t\treturn fmt.Errorf(\"include and exclude must not intersect, but found %s in both lists\", allow)\n\t\t\t}\n\t\t}\n\n\t\t// ensure namespaces are nested\n\touter:\n\t\tfor _, allow := range cl.Include {\n\t\t\tfor _, deny := range cl.Exclude {\n\t\t\t\tif strings.HasPrefix(allow+\".\", deny+\".\") ||\n\t\t\t\t\tstrings.HasPrefix(deny+\".\", allow+\".\") {\n\t\t\t\t\tcontinue outer\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"when both include and exclude are populated, each element must be a superspace or subspace of one in the other list; check '%s' in include\", allow)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (cl *CustomLog) matchesModule(moduleID string) bool {\n\treturn cl.loggerAllowed(moduleID, true)\n}\n\n// loggerAllowed returns true if name is allowed to emit\n// to cl. isModule should be true if name is the name of\n// a module and you want to see if ANY of that module's\n// logs would be permitted.\nfunc (cl *CustomLog) loggerAllowed(name string, isModule bool) bool {\n\t// accept all loggers by default\n\tif len(cl.Include) == 0 && len(cl.Exclude) == 0 {\n\t\treturn true\n\t}\n\n\t// append a dot so that partial names don't match\n\t// (i.e. 
we don't want \"foo.b\" to match \"foo.bar\"); we\n\t// will also have to append a dot when we do HasPrefix\n\t// below to compensate for when namespaces are equal\n\tif name != \"\" && name != \"*\" && name != \".\" {\n\t\tname += \".\"\n\t}\n\n\tvar longestAccept, longestReject int\n\n\tif len(cl.Include) > 0 {\n\t\tfor _, namespace := range cl.Include {\n\t\t\tvar hasPrefix bool\n\t\t\tif isModule {\n\t\t\t\thasPrefix = strings.HasPrefix(namespace+\".\", name)\n\t\t\t} else {\n\t\t\t\thasPrefix = strings.HasPrefix(name, namespace+\".\")\n\t\t\t}\n\t\t\tif hasPrefix && len(namespace) > longestAccept {\n\t\t\t\tlongestAccept = len(namespace)\n\t\t\t}\n\t\t}\n\t\t// the include list was populated, meaning that\n\t\t// a match in this list is absolutely required\n\t\t// if we are to accept the entry\n\t\tif longestAccept == 0 {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif len(cl.Exclude) > 0 {\n\t\tfor _, namespace := range cl.Exclude {\n\t\t\t// * == all logs emitted by modules\n\t\t\t// . == all logs emitted by core\n\t\t\tif (namespace == \"*\" && name != \".\") ||\n\t\t\t\t(namespace == \".\" && name == \".\") {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif strings.HasPrefix(name, namespace+\".\") &&\n\t\t\t\tlen(namespace) > longestReject {\n\t\t\t\tlongestReject = len(namespace)\n\t\t\t}\n\t\t}\n\t\t// the reject list is populated, so we have to\n\t\t// reject this entry if its match is better\n\t\t// than the best from the accept list\n\t\tif longestReject > longestAccept {\n\t\t\treturn false\n\t\t}\n\t}\n\n\treturn (longestAccept > longestReject) ||\n\t\t(len(cl.Include) == 0 && longestReject == 0)\n}\n\n// filteringCore filters log entries based on logger name,\n// according to the rules of a CustomLog.\ntype filteringCore struct {\n\tzapcore.Core\n\tcl *CustomLog\n}\n\n// With properly wraps With.\nfunc (fc *filteringCore) With(fields []zapcore.Field) zapcore.Core {\n\treturn &filteringCore{\n\t\tCore: fc.Core.With(fields),\n\t\tcl:   fc.cl,\n\t}\n}\n\n// Check 
only allows the log entry if its logger name\n// is allowed from the include/exclude rules of fc.cl.\nfunc (fc *filteringCore) Check(e zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {\n\tif fc.cl.loggerAllowed(e.LoggerName, false) {\n\t\treturn fc.Core.Check(e, ce)\n\t}\n\treturn ce\n}\n\n// LogSampling configures log entry sampling.\ntype LogSampling struct {\n\t// The window over which to conduct sampling.\n\tInterval time.Duration `json:\"interval,omitempty\"`\n\n\t// Log this many entries within a given level and\n\t// message for each interval.\n\tFirst int `json:\"first,omitempty\"`\n\n\t// If more entries with the same level and message\n\t// are seen during the same interval, keep one in\n\t// this many entries until the end of the interval.\n\tThereafter int `json:\"thereafter,omitempty\"`\n}\n\ntype (\n\t// StdoutWriter writes logs to standard out.\n\tStdoutWriter struct{}\n\n\t// StderrWriter writes logs to standard error.\n\tStderrWriter struct{}\n\n\t// DiscardWriter discards all writes.\n\tDiscardWriter struct{}\n)\n\n// CaddyModule returns the Caddy module information.\nfunc (StdoutWriter) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID:  \"caddy.logging.writers.stdout\",\n\t\tNew: func() Module { return new(StdoutWriter) },\n\t}\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (StderrWriter) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID:  \"caddy.logging.writers.stderr\",\n\t\tNew: func() Module { return new(StderrWriter) },\n\t}\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (DiscardWriter) CaddyModule() ModuleInfo {\n\treturn ModuleInfo{\n\t\tID:  \"caddy.logging.writers.discard\",\n\t\tNew: func() Module { return new(DiscardWriter) },\n\t}\n}\n\nfunc (StdoutWriter) String() string  { return \"stdout\" }\nfunc (StderrWriter) String() string  { return \"stderr\" }\nfunc (DiscardWriter) String() string { return \"discard\" }\n\n// WriterKey returns a unique key representing 
stdout.\nfunc (StdoutWriter) WriterKey() string { return \"std:out\" }\n\n// WriterKey returns a unique key representing stderr.\nfunc (StderrWriter) WriterKey() string { return \"std:err\" }\n\n// WriterKey returns a unique key representing discard.\nfunc (DiscardWriter) WriterKey() string { return \"discard\" }\n\n// OpenWriter returns os.Stdout that can't be closed.\nfunc (StdoutWriter) OpenWriter() (io.WriteCloser, error) {\n\treturn notClosable{os.Stdout}, nil\n}\n\n// OpenWriter returns os.Stderr that can't be closed.\nfunc (StderrWriter) OpenWriter() (io.WriteCloser, error) {\n\treturn notClosable{os.Stderr}, nil\n}\n\n// OpenWriter returns io.Discard that can't be closed.\nfunc (DiscardWriter) OpenWriter() (io.WriteCloser, error) {\n\treturn notClosable{io.Discard}, nil\n}\n\n// notClosable is an io.WriteCloser that can't be closed.\ntype notClosable struct{ io.Writer }\n\nfunc (fc notClosable) Close() error { return nil }\n\ntype defaultCustomLog struct {\n\t*CustomLog\n\tlogger *zap.Logger\n}\n\n// newDefaultProductionLog configures a custom log that is\n// intended for use by default if no other log is specified\n// in a config. 
It writes to stderr, uses the console encoder,\n// and enables INFO-level logs and higher.\nfunc newDefaultProductionLog() (*defaultCustomLog, error) {\n\tcl := new(CustomLog)\n\tcl.writerOpener = StderrWriter{}\n\tvar err error\n\tcl.writer, err = cl.writerOpener.OpenWriter()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcl.encoder = newDefaultProductionLogEncoder(cl.writerOpener)\n\tcl.levelEnabler = zapcore.InfoLevel\n\n\tcl.buildCore()\n\n\tlogger := zap.New(cl.core)\n\n\t// capture logs from other libraries which\n\t// may not be using zap logging directly\n\t_ = zap.RedirectStdLog(logger)\n\n\treturn &defaultCustomLog{\n\t\tCustomLog: cl,\n\t\tlogger:    logger,\n\t}, nil\n}\n\nfunc newDefaultProductionLogEncoder(wo WriterOpener) zapcore.Encoder {\n\tencCfg := zap.NewProductionEncoderConfig()\n\tif IsWriterStandardStream(wo) && term.IsTerminal(int(os.Stderr.Fd())) {\n\t\t// if interactive terminal, make output more human-readable by default\n\t\tencCfg.EncodeTime = func(ts time.Time, encoder zapcore.PrimitiveArrayEncoder) {\n\t\t\tencoder.AppendString(ts.UTC().Format(\"2006/01/02 15:04:05.000\"))\n\t\t}\n\t\tif coloringEnabled {\n\t\t\tencCfg.EncodeLevel = zapcore.CapitalColorLevelEncoder\n\t\t}\n\n\t\treturn zapcore.NewConsoleEncoder(encCfg)\n\t}\n\treturn zapcore.NewJSONEncoder(encCfg)\n}\n\nfunc parseLevel(levelInput string) (zapcore.LevelEnabler, error) {\n\trepl := NewReplacer()\n\tlevel, err := repl.ReplaceOrErr(levelInput, true, true)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"invalid log level: %v\", err)\n\t}\n\tlevel = strings.ToLower(level)\n\n\t// set up the log level\n\tswitch level {\n\tcase \"debug\":\n\t\treturn zapcore.DebugLevel, nil\n\tcase \"\", \"info\":\n\t\treturn zapcore.InfoLevel, nil\n\tcase \"warn\":\n\t\treturn zapcore.WarnLevel, nil\n\tcase \"error\":\n\t\treturn zapcore.ErrorLevel, nil\n\tcase \"panic\":\n\t\treturn zapcore.PanicLevel, nil\n\tcase \"fatal\":\n\t\treturn zapcore.FatalLevel, nil\n\tdefault:\n\t\treturn nil, 
fmt.Errorf(\"unrecognized log level: %s\", level)\n\t}\n}\n\n// Log returns the current default logger.\nfunc Log() *zap.Logger {\n\tdefaultLoggerMu.RLock()\n\tdefer defaultLoggerMu.RUnlock()\n\treturn defaultLogger.logger\n}\n\n// BufferedLog sets the default logger to one that buffers\n// logs before a config is loaded.\n// Returns the buffered logger, the original default logger\n// (for flushing on errors), and the buffer core so that the\n// caller can flush the logs after the config is loaded or\n// fails to load.\nfunc BufferedLog() (*zap.Logger, *zap.Logger, *internal.LogBufferCore) {\n\tdefaultLoggerMu.Lock()\n\tdefer defaultLoggerMu.Unlock()\n\torigLogger := defaultLogger.logger\n\tbufferCore := internal.NewLogBufferCore(zap.InfoLevel)\n\tdefaultLogger.logger = zap.New(bufferCore)\n\treturn defaultLogger.logger, origLogger, bufferCore\n}\n\nvar (\n\tcoloringEnabled  = os.Getenv(\"NO_COLOR\") == \"\" && os.Getenv(\"TERM\") != \"xterm-mono\"\n\tdefaultLogger, _ = newDefaultProductionLog()\n\tdefaultLoggerMu  sync.RWMutex\n)\n\nvar writers = NewUsagePool()\n\n// ConfiguresFormatterDefault is an optional interface that\n// encoder modules can implement to configure the default\n// format of their encoder. This is useful for encoders\n// which nest an encoder, that needs to know the writer\n// in order to determine the correct default.\ntype ConfiguresFormatterDefault interface {\n\tConfigureDefaultFormat(WriterOpener) error\n}\n\nconst DefaultLoggerName = \"default\"\n\n// Interface guards\nvar (\n\t_ io.WriteCloser = (*notClosable)(nil)\n\t_ WriterOpener   = (*StdoutWriter)(nil)\n\t_ WriterOpener   = (*StderrWriter)(nil)\n)\n"
  },
  {
    "path": "logging_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport \"testing\"\n\nfunc TestCustomLog_loggerAllowed(t *testing.T) {\n\ttype fields struct {\n\t\tBaseLog BaseLog\n\t\tInclude []string\n\t\tExclude []string\n\t}\n\ttype args struct {\n\t\tname     string\n\t\tisModule bool\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields fields\n\t\targs   args\n\t\twant   bool\n\t}{\n\t\t{\n\t\t\tname: \"include\",\n\t\t\tfields: fields{\n\t\t\t\tInclude: []string{\"foo\"},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tname:     \"foo\",\n\t\t\t\tisModule: true,\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"exclude\",\n\t\t\tfields: fields{\n\t\t\t\tExclude: []string{\"foo\"},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tname:     \"foo\",\n\t\t\t\tisModule: true,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"include and exclude\",\n\t\t\tfields: fields{\n\t\t\t\tInclude: []string{\"foo\"},\n\t\t\t\tExclude: []string{\"foo\"},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tname:     \"foo\",\n\t\t\t\tisModule: true,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"include and exclude (longer namespace)\",\n\t\t\tfields: fields{\n\t\t\t\tInclude: []string{\"foo.bar\"},\n\t\t\t\tExclude: []string{\"foo\"},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tname:     \"foo.bar\",\n\t\t\t\tisModule: true,\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: 
\"excluded module is not printed\",\n\t\t\tfields: fields{\n\t\t\t\tInclude: []string{\"admin.api.load\"},\n\t\t\t\tExclude: []string{\"admin.api\"},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tname:     \"admin.api\",\n\t\t\t\tisModule: false,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tcl := &CustomLog{\n\t\t\t\tBaseLog: tt.fields.BaseLog,\n\t\t\t\tInclude: tt.fields.Include,\n\t\t\t\tExclude: tt.fields.Exclude,\n\t\t\t}\n\t\t\tif got := cl.loggerAllowed(tt.args.name, tt.args.isModule); got != tt.want {\n\t\t\t\tt.Errorf(\"CustomLog.loggerAllowed() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "metrics.go",
    "content": "package caddy\n\nimport (\n\t\"net/http\"\n\n\t\"github.com/prometheus/client_golang/prometheus\"\n\n\t\"github.com/caddyserver/caddy/v2/internal/metrics\"\n)\n\n// define and register the metrics used in this package.\nfunc init() {\n\tconst ns, sub = \"caddy\", \"admin\"\n\tadminMetrics.requestCount = prometheus.NewCounterVec(prometheus.CounterOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"http_requests_total\",\n\t\tHelp:      \"Counter of requests made to the Admin API's HTTP endpoints.\",\n\t}, []string{\"handler\", \"path\", \"code\", \"method\"})\n\tadminMetrics.requestErrors = prometheus.NewCounterVec(prometheus.CounterOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"http_request_errors_total\",\n\t\tHelp:      \"Number of requests resulting in middleware errors.\",\n\t}, []string{\"handler\", \"path\", \"method\"})\n\tglobalMetrics.configSuccess = prometheus.NewGauge(prometheus.GaugeOpts{\n\t\tName: \"caddy_config_last_reload_successful\",\n\t\tHelp: \"Whether the last configuration reload attempt was successful.\",\n\t})\n\tglobalMetrics.configSuccessTime = prometheus.NewGauge(prometheus.GaugeOpts{\n\t\tName: \"caddy_config_last_reload_success_timestamp_seconds\",\n\t\tHelp: \"Timestamp of the last successful configuration reload.\",\n\t})\n}\n\n// adminMetrics is a collection of metrics that can be tracked for the admin API.\nvar adminMetrics = struct {\n\trequestCount  *prometheus.CounterVec\n\trequestErrors *prometheus.CounterVec\n}{}\n\n// globalMetrics is a collection of metrics that can be tracked for Caddy global state\nvar globalMetrics = struct {\n\tconfigSuccess     prometheus.Gauge\n\tconfigSuccessTime prometheus.Gauge\n}{}\n\n// Similar to promhttp.InstrumentHandlerCounter, but upper-cases method names\n// instead of lower-casing them.\n//\n// Unlike promhttp.InstrumentHandlerCounter, this assumes a \"code\" and \"method\"\n// label is present, and will panic otherwise.\nfunc 
instrumentHandlerCounter(counter *prometheus.CounterVec, next http.Handler) http.HandlerFunc {\n\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\td := newDelegator(w)\n\t\tnext.ServeHTTP(d, r)\n\t\tcounter.With(prometheus.Labels{\n\t\t\t\"code\":   metrics.SanitizeCode(d.status),\n\t\t\t\"method\": metrics.SanitizeMethod(r.Method),\n\t\t}).Inc()\n\t})\n}\n\nfunc newDelegator(w http.ResponseWriter) *delegator {\n\treturn &delegator{\n\t\tResponseWriter: w,\n\t}\n}\n\ntype delegator struct {\n\thttp.ResponseWriter\n\tstatus int\n}\n\nfunc (d *delegator) WriteHeader(code int) {\n\td.status = code\n\td.ResponseWriter.WriteHeader(code)\n}\n\n// Unwrap returns the underlying ResponseWriter, necessary for\n// http.ResponseController to work correctly.\nfunc (d *delegator) Unwrap() http.ResponseWriter {\n\treturn d.ResponseWriter\n}\n"
  },
  {
    "path": "modules/caddyevents/app.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyevents\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(App{})\n}\n\n// App implements a global eventing system within Caddy.\n// Modules can emit and subscribe to events, providing\n// hooks into deep parts of the code base that aren't\n// otherwise accessible. Events provide information about\n// what and when things are happening, and this facility\n// allows handlers to take action when events occur,\n// add information to the event's metadata, and even\n// control program flow in some cases.\n//\n// Events are propagated in a DOM-like fashion. An event\n// emitted from module `a.b.c` (the \"origin\") will first\n// invoke handlers listening to `a.b.c`, then `a.b`,\n// then `a`, then those listening regardless of origin.\n// If a handler returns the special error Aborted, then\n// propagation immediately stops and the event is marked\n// as aborted. Emitters may optionally choose to adjust\n// program flow based on an abort.\n//\n// Modules can subscribe to events by origin and/or name.\n// A handler is invoked only if it is subscribed to the\n// event by name and origin. 
Subscriptions should be\n// registered during the provisioning phase, before apps\n// are started.\n//\n// Event handlers are fired synchronously as part of the\n// regular flow of the program. This allows event handlers\n// to control the flow of the program if the origin permits\n// it and also allows handlers to convey new information\n// back into the origin module before it continues.\n// In essence, event handlers are similar to HTTP\n// middleware handlers.\n//\n// Event bindings/subscribers are unordered; i.e.\n// event handlers are invoked in an arbitrary order.\n// Event handlers should not rely on the logic of other\n// handlers to succeed.\n//\n// The entirety of this app module is EXPERIMENTAL and\n// subject to change. Pay attention to release notes.\ntype App struct {\n\t// Subscriptions bind handlers to one or more events\n\t// either globally or scoped to specific modules or module\n\t// namespaces.\n\tSubscriptions []*Subscription `json:\"subscriptions,omitempty\"`\n\n\t// Map of event name to map of module ID/namespace to handlers\n\tsubscriptions map[string]map[caddy.ModuleID][]Handler\n\n\tlogger  *zap.Logger\n\tstarted bool\n}\n\n// Subscription represents binding of one or more handlers to\n// one or more events.\ntype Subscription struct {\n\t// The name(s) of the event(s) to bind to. Default: all events.\n\tEvents []string `json:\"events,omitempty\"`\n\n\t// The ID or namespace of the module(s) from which events\n\t// originate to listen to for events. Default: all modules.\n\t//\n\t// Events propagate up, so events emitted by module \"a.b.c\"\n\t// will also trigger the event for \"a.b\" and \"a\". Thus, to\n\t// receive all events from \"a.b.c\" and \"a.b.d\", for example,\n\t// one can subscribe to either \"a.b\" or all of \"a\" entirely.\n\tModules []caddy.ModuleID `json:\"modules,omitempty\"`\n\n\t// The event handler modules. These implement the actual\n\t// behavior to invoke when an event occurs. 
At least one\n\t// handler is required.\n\tHandlersRaw []json.RawMessage `json:\"handlers,omitempty\" caddy:\"namespace=events.handlers inline_key=handler\"`\n\n\t// The decoded handlers; Go code that is subscribing to\n\t// an event should set this field directly; HandlersRaw\n\t// is meant for JSON configuration to fill out this field.\n\tHandlers []Handler `json:\"-\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (App) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"events\",\n\t\tNew: func() caddy.Module { return new(App) },\n\t}\n}\n\n// Provision sets up the app.\nfunc (app *App) Provision(ctx caddy.Context) error {\n\tapp.logger = ctx.Logger()\n\tapp.subscriptions = make(map[string]map[caddy.ModuleID][]Handler)\n\n\tfor _, sub := range app.Subscriptions {\n\t\tif sub.HandlersRaw == nil {\n\t\t\tcontinue\n\t\t}\n\t\thandlersIface, err := ctx.LoadModule(sub, \"HandlersRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading event subscriber modules: %v\", err)\n\t\t}\n\t\tfor _, h := range handlersIface.([]any) {\n\t\t\tsub.Handlers = append(sub.Handlers, h.(Handler))\n\t\t}\n\t\tif len(sub.Handlers) == 0 {\n\t\t\t// pointless to bind without any handlers\n\t\t\treturn fmt.Errorf(\"no handlers defined\")\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Start runs the app.\nfunc (app *App) Start() error {\n\tfor _, sub := range app.Subscriptions {\n\t\tif err := app.Subscribe(sub); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tapp.started = true\n\n\treturn nil\n}\n\n// Stop gracefully shuts down the app.\nfunc (app *App) Stop() error {\n\treturn nil\n}\n\n// Subscribe binds one or more event handlers to one or more events\n// according to the subscription s. 
For now, subscriptions can only\n// be created during the provision phase; new bindings cannot be\n// created after the events app has started.\nfunc (app *App) Subscribe(s *Subscription) error {\n\tif app.started {\n\t\treturn fmt.Errorf(\"events already started; new subscriptions closed\")\n\t}\n\n\t// handle special case of catch-alls (omission of event name or module space implies all)\n\tif len(s.Events) == 0 {\n\t\ts.Events = []string{\"\"}\n\t}\n\tif len(s.Modules) == 0 {\n\t\ts.Modules = []caddy.ModuleID{\"\"}\n\t}\n\n\tfor _, eventName := range s.Events {\n\t\tif app.subscriptions[eventName] == nil {\n\t\t\tapp.subscriptions[eventName] = make(map[caddy.ModuleID][]Handler)\n\t\t}\n\t\tfor _, originModule := range s.Modules {\n\t\t\tapp.subscriptions[eventName][originModule] = append(app.subscriptions[eventName][originModule], s.Handlers...)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// On is syntactic sugar for Subscribe() that binds a single handler\n// to a single event from any module. If the eventName is empty string,\n// it counts for all events.\nfunc (app *App) On(eventName string, handler Handler) error {\n\treturn app.Subscribe(&Subscription{\n\t\tEvents:   []string{eventName},\n\t\tHandlers: []Handler{handler},\n\t})\n}\n\n// Emit creates and dispatches an event named eventName to all relevant handlers with\n// the metadata data. Events are emitted and propagated synchronously. The returned Event\n// value will have any additional information from the invoked handlers.\n//\n// Note that the data map is not copied, for efficiency. 
After Emit() is called, the\n// data passed in should not be changed in other goroutines.\nfunc (app *App) Emit(ctx caddy.Context, eventName string, data map[string]any) caddy.Event {\n\tlogger := app.logger.With(zap.String(\"name\", eventName))\n\n\te, err := caddy.NewEvent(ctx, eventName, data)\n\tif err != nil {\n\t\tlogger.Error(\"failed to create event\", zap.Error(err))\n\t}\n\n\tvar originModule caddy.ModuleInfo\n\tvar originModuleID caddy.ModuleID\n\tvar originModuleName string\n\tif origin := e.Origin(); origin != nil {\n\t\toriginModule = origin.CaddyModule()\n\t\toriginModuleID = originModule.ID\n\t\toriginModuleName = originModule.String()\n\t}\n\n\tlogger = logger.With(\n\t\tzap.String(\"id\", e.ID().String()),\n\t\tzap.String(\"origin\", originModuleName))\n\n\t// add event info to replacer, make sure it's in the context\n\trepl, ok := ctx.Context.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t\tctx.Context = context.WithValue(ctx.Context, caddy.ReplacerCtxKey, repl)\n\t}\n\trepl.Map(func(key string) (any, bool) {\n\t\tswitch key {\n\t\tcase \"event\":\n\t\t\treturn e, true\n\t\tcase \"event.id\":\n\t\t\treturn e.ID(), true\n\t\tcase \"event.name\":\n\t\t\treturn e.Name(), true\n\t\tcase \"event.time\":\n\t\t\treturn e.Timestamp(), true\n\t\tcase \"event.time_unix\":\n\t\t\treturn e.Timestamp().UnixMilli(), true\n\t\tcase \"event.module\":\n\t\t\treturn originModuleID, true\n\t\tcase \"event.data\":\n\t\t\treturn e.Data, true\n\t\t}\n\n\t\tif after, ok0 := strings.CutPrefix(key, \"event.data.\"); ok0 {\n\t\t\tkey = after\n\t\t\tif val, ok := e.Data[key]; ok {\n\t\t\t\treturn val, true\n\t\t\t}\n\t\t}\n\n\t\treturn nil, false\n\t})\n\n\tlogger = logger.WithLazy(zap.Any(\"data\", e.Data))\n\n\tlogger.Debug(\"event\")\n\n\t// invoke handlers bound to the event by name and also all events; this for loop\n\t// iterates twice at most: once for the event name, once for \"\" (all events)\n\tfor {\n\t\tmoduleID := 
originModuleID\n\n\t\t// implement propagation up the module tree (i.e. start with \"a.b.c\" then \"a.b\" then \"a\" then \"\")\n\t\tfor {\n\t\t\tif app.subscriptions[eventName] == nil {\n\t\t\t\tbreak // shortcut if event not bound at all\n\t\t\t}\n\n\t\t\tfor _, handler := range app.subscriptions[eventName][moduleID] {\n\t\t\t\tselect {\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\tlogger.Error(\"context canceled; event handling stopped\")\n\t\t\t\t\treturn e\n\t\t\t\tdefault:\n\t\t\t\t}\n\n\t\t\t\t// this log can be a useful sanity check to ensure your handlers are in fact being invoked\n\t\t\t\t// (see https://github.com/mholt/caddy-events-exec/issues/6)\n\t\t\t\tlogger.Debug(\"invoking subscribed handler\",\n\t\t\t\t\tzap.String(\"subscribed_to\", eventName),\n\t\t\t\t\tzap.Any(\"handler\", handler))\n\n\t\t\t\tif err := handler.Handle(ctx, e); err != nil {\n\t\t\t\t\taborted := errors.Is(err, caddy.ErrEventAborted)\n\n\t\t\t\t\tlogger.Error(\"handler error\",\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t\tzap.Bool(\"aborted\", aborted))\n\n\t\t\t\t\tif aborted {\n\t\t\t\t\t\te.Aborted = err\n\t\t\t\t\t\treturn e\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif moduleID == \"\" {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tlastDot := strings.LastIndex(string(moduleID), \".\")\n\t\t\tif lastDot < 0 {\n\t\t\t\tmoduleID = \"\" // include handlers bound to events regardless of module\n\t\t\t} else {\n\t\t\t\tmoduleID = moduleID[:lastDot]\n\t\t\t}\n\t\t}\n\n\t\t// include handlers listening to all events\n\t\tif eventName == \"\" {\n\t\t\tbreak\n\t\t}\n\t\teventName = \"\"\n\t}\n\n\treturn e\n}\n\n// Handler is a type that can handle events.\ntype Handler interface {\n\tHandle(context.Context, caddy.Event) error\n}\n\n// Interface guards\nvar (\n\t_ caddy.App         = (*App)(nil)\n\t_ caddy.Provisioner = (*App)(nil)\n)\n"
  },
  {
    "path": "modules/caddyevents/eventsconfig/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package eventsconfig is for configuring caddyevents.App with the\n// Caddyfile. This code can't be in the caddyevents package because\n// the httpcaddyfile package imports caddyhttp, which imports\n// caddyevents: hence, it creates an import cycle.\npackage eventsconfig\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyevents\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterGlobalOption(\"events\", parseApp)\n}\n\n// parseApp configures the \"events\" global option from Caddyfile to set up the events app.\n// Syntax:\n//\n//\tevents {\n//\t\ton <event> <handler_module...>\n//\t}\n//\n// If <event> is *, then it will bind to all events.\nfunc parseApp(d *caddyfile.Dispenser, _ any) (any, error) {\n\td.Next() // consume option name\n\tapp := new(caddyevents.App)\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"on\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\teventName := d.Val()\n\t\t\tif eventName == \"*\" {\n\t\t\t\teventName = \"\"\n\t\t\t}\n\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\thandlerName := d.Val()\n\t\t\tmodID := \"events.handlers.\" + 
handlerName\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\tapp.Subscriptions = append(app.Subscriptions, &caddyevents.Subscription{\n\t\t\t\tEvents: []string{eventName},\n\t\t\t\tHandlersRaw: []json.RawMessage{\n\t\t\t\t\tcaddyconfig.JSONModuleObject(unm, \"handler\", handlerName, nil),\n\t\t\t\t},\n\t\t\t})\n\n\t\tdefault:\n\t\t\treturn nil, d.ArgErr()\n\t\t}\n\t}\n\n\treturn httpcaddyfile.App{\n\t\tName:  \"events\",\n\t\tValue: caddyconfig.JSON(app, nil),\n\t}, nil\n}\n"
  },
  {
    "path": "modules/caddyfs/filesystem.go",
    "content": "package caddyfs\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io/fs\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Filesystems{})\n\thttpcaddyfile.RegisterGlobalOption(\"filesystem\", parseFilesystems)\n}\n\ntype moduleEntry struct {\n\tKey           string          `json:\"name,omitempty\"`\n\tFileSystemRaw json.RawMessage `json:\"file_system,omitempty\" caddy:\"namespace=caddy.fs inline_key=backend\"`\n\tfileSystem    fs.FS\n}\n\n// Filesystems loads caddy.fs modules into the global filesystem map\ntype Filesystems struct {\n\tFilesystems []*moduleEntry `json:\"filesystems\"`\n\n\tdefers []func()\n}\n\nfunc parseFilesystems(d *caddyfile.Dispenser, existingVal any) (any, error) {\n\tp := &Filesystems{}\n\tcurrent, ok := existingVal.(*Filesystems)\n\tif ok {\n\t\tp = current\n\t}\n\tx := &moduleEntry{}\n\terr := x.UnmarshalCaddyfile(d)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tp.Filesystems = append(p.Filesystems, x)\n\treturn p, nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Filesystems) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.filesystems\",\n\t\tNew: func() caddy.Module { return new(Filesystems) },\n\t}\n}\n\nfunc (xs *Filesystems) Start() error { return nil }\nfunc (xs *Filesystems) Stop() error  { return nil }\n\nfunc (xs *Filesystems) Provision(ctx caddy.Context) error {\n\t// load the filesystem module\n\tfor _, f := range xs.Filesystems {\n\t\tif len(f.FileSystemRaw) > 0 {\n\t\t\tmod, err := ctx.LoadModule(f, \"FileSystemRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading file system module: %v\", err)\n\t\t\t}\n\t\t\tf.fileSystem = mod.(fs.FS)\n\t\t}\n\t\t// register that module\n\t\tctx.Logger().Debug(\"registering 
fs\", zap.String(\"fs\", f.Key))\n\t\tctx.FileSystems().Register(f.Key, f.fileSystem)\n\t\t// remember to unregister the module when we are done\n\t\txs.defers = append(xs.defers, func() {\n\t\t\tctx.Logger().Debug(\"unregistering fs\", zap.String(\"fs\", f.Key))\n\t\t\tctx.FileSystems().Unregister(f.Key)\n\t\t})\n\t}\n\treturn nil\n}\n\nfunc (f *Filesystems) Cleanup() error {\n\tfor _, v := range f.defers {\n\t\tv()\n\t}\n\treturn nil\n}\n\nfunc (f *moduleEntry) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor d.Next() {\n\t\t// key required for now\n\t\tif !d.Args(&f.Key) {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\t// get the module json\n\t\tif !d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tname := d.Val()\n\t\tmodID := \"caddy.fs.\" + name\n\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfsys, ok := unm.(fs.FS)\n\t\tif !ok {\n\t\t\treturn d.Errf(\"module %s (%T) is not a supported file system implementation (requires fs.FS)\", modID, unm)\n\t\t}\n\t\tf.FileSystemRaw = caddyconfig.JSONModuleObject(fsys, \"backend\", name, nil)\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/app.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"cmp\"\n\t\"context\"\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"fmt\"\n\t\"maps\"\n\t\"net\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/net/http2\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyevents\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(App{})\n}\n\n// App is a robust, production-ready HTTP server.\n//\n// HTTPS is enabled by default if host matchers with qualifying names are used\n// in any of routes; certificates are automatically provisioned and renewed.\n// Additionally, automatic HTTPS will also enable HTTPS for servers that listen\n// only on the HTTPS port but which do not have any TLS connection policies\n// defined by adding a good, default TLS connection policy.\n//\n// In HTTP routes, additional placeholders are available (replace any `*`):\n//\n// Placeholder | Description\n// ------------|---------------\n// `{http.request.body}` | The request body (⚠️ inefficient; use only for debugging)\n// `{http.request.body_base64}` | The request body, base64-encoded (⚠️ for debugging)\n// `{http.request.cookie.*}` | HTTP request cookie\n// `{http.request.duration}` | Time up to now spent handling the request (after decoding headers from 
client)\n// `{http.request.duration_ms}` | Same as 'duration', but in milliseconds.\n// `{http.request.uuid}` | The request unique identifier\n// `{http.request.header.*}` | Specific request header field\n// `{http.request.host}` | The host part of the request's Host header\n// `{http.request.host.labels.*}` | Request host labels (0-based from right); e.g. for foo.example.com: 0=com, 1=example, 2=foo\n// `{http.request.hostport}` | The host and port from the request's Host header\n// `{http.request.method}` | The request method\n// `{http.request.orig_method}` | The request's original method\n// `{http.request.orig_uri}` | The request's original URI\n// `{http.request.orig_uri.path}` | The request's original path\n// `{http.request.orig_uri.path.*}` | Parts of the original path, split by `/` (0-based from left)\n// `{http.request.orig_uri.path.dir}` | The request's original directory\n// `{http.request.orig_uri.path.file}` | The request's original filename\n// `{http.request.orig_uri.query}` | The request's original query string (without `?`)\n// `{http.request.port}` | The port part of the request's Host header\n// `{http.request.proto}` | The protocol of the request\n// `{http.request.local.host}` | The host (IP) part of the local address the connection arrived on\n// `{http.request.local.port}` | The port part of the local address the connection arrived on\n// `{http.request.local}` | The local address the connection arrived on\n// `{http.request.remote.host}` | The host (IP) part of the remote client's address, if available (not known with HTTP/3 early data)\n// `{http.request.remote.port}` | The port part of the remote client's address\n// `{http.request.remote}` | The address of the remote client\n// `{http.request.scheme}` | The request scheme, typically `http` or `https`\n// `{http.request.tls.version}` | The TLS version name\n// `{http.request.tls.cipher_suite}` | The TLS cipher suite\n// `{http.request.tls.resumed}` | The TLS connection resumed a previous 
connection\n// `{http.request.tls.proto}` | The negotiated next protocol\n// `{http.request.tls.proto_mutual}` | The negotiated next protocol was advertised by the server\n// `{http.request.tls.server_name}` | The server name requested by the client, if any\n// `{http.request.tls.ech}` | Whether ECH was offered by the client and accepted by the server\n// `{http.request.tls.client.fingerprint}` | The SHA256 checksum of the client certificate\n// `{http.request.tls.client.public_key}` | The public key of the client certificate.\n// `{http.request.tls.client.public_key_sha256}` | The SHA256 checksum of the client's public key.\n// `{http.request.tls.client.certificate_pem}` | The PEM-encoded value of the certificate.\n// `{http.request.tls.client.certificate_der_base64}` | The base64-encoded value of the certificate.\n// `{http.request.tls.client.issuer}` | The issuer DN of the client certificate\n// `{http.request.tls.client.serial}` | The serial number of the client certificate\n// `{http.request.tls.client.subject}` | The subject DN of the client certificate\n// `{http.request.tls.client.san.dns_names.*}` | SAN DNS names (index optional)\n// `{http.request.tls.client.san.emails.*}` | SAN email addresses (index optional)\n// `{http.request.tls.client.san.ips.*}` | SAN IP addresses (index optional)\n// `{http.request.tls.client.san.uris.*}` | SAN URIs (index optional)\n// `{http.request.uri}` | The full request URI\n// `{http.request.uri.path}` | The path component of the request URI\n// `{http.request.uri.path.*}` | Parts of the path, split by `/` (0-based from left)\n// `{http.request.uri.path.dir}` | The directory, excluding leaf filename\n// `{http.request.uri.path.file}` | The filename of the path, excluding directory\n// `{http.request.uri.query}` | The query string (without `?`)\n// `{http.request.uri.query.*}` | Individual query string value\n// `{http.response.header.*}` | Specific response header field\n// `{http.vars.*}` | Custom variables in the HTTP 
handler chain\n// `{http.shutting_down}` | True if the HTTP app is shutting down\n// `{http.time_until_shutdown}` | Time until HTTP server shutdown, if scheduled\ntype App struct {\n\t// HTTPPort specifies the port to use for HTTP (as opposed to HTTPS),\n\t// which is used when setting up HTTP->HTTPS redirects or ACME HTTP\n\t// challenge solvers. Default: 80.\n\tHTTPPort int `json:\"http_port,omitempty\"`\n\n\t// HTTPSPort specifies the port to use for HTTPS, which is used when\n\t// solving the ACME TLS-ALPN challenges, or whenever HTTPS is needed\n\t// but no specific port number is given. Default: 443.\n\tHTTPSPort int `json:\"https_port,omitempty\"`\n\n\t// GracePeriod is how long to wait for active connections when shutting\n\t// down the servers. During the grace period, no new connections are\n\t// accepted, idle connections are closed, and active connections will\n\t// be given the full length of time to become idle and close.\n\t// Once the grace period is over, connections will be forcefully closed.\n\t// If zero, the grace period is eternal. Default: 0.\n\tGracePeriod caddy.Duration `json:\"grace_period,omitempty\"`\n\n\t// ShutdownDelay is how long to wait before initiating the grace\n\t// period. When this app is stopping (e.g. during a config reload or\n\t// process exit), all servers will be shut down. Normally this immediately\n\t// initiates the grace period. However, if this delay is configured, servers\n\t// will not be shut down until the delay is over. During this time, servers\n\t// continue to function normally and allow new connections. At the end, the\n\t// grace period will begin. 
This can be useful to allow downstream load\n\t// balancers time to move this instance out of the rotation without hiccups.\n\t//\n\t// When shutdown has been scheduled, placeholders {http.shutting_down} (bool)\n\t// and {http.time_until_shutdown} (duration) may be useful for health checks.\n\tShutdownDelay caddy.Duration `json:\"shutdown_delay,omitempty\"`\n\n\t// Servers is the list of servers, keyed by arbitrary names chosen\n\t// at your discretion for your own convenience; the keys do not\n\t// affect functionality.\n\tServers map[string]*Server `json:\"servers,omitempty\"`\n\n\t// If set, metrics observations will be enabled.\n\t// This setting is EXPERIMENTAL and subject to change.\n\tMetrics *Metrics `json:\"metrics,omitempty\"`\n\n\tctx    caddy.Context\n\tlogger *zap.Logger\n\ttlsApp *caddytls.TLS\n\n\t// stopped indicates whether the app has stopped\n\t// It can only happen if it has started successfully in the first place.\n\t// Otherwise, Cleanup will call Stop to clean up resources.\n\tstopped bool\n\n\t// used temporarily between phases 1 and 2 of auto HTTPS\n\tallCertDomains map[string]struct{}\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (App) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http\",\n\t\tNew: func() caddy.Module { return new(App) },\n\t}\n}\n\n// Provision sets up the app.\nfunc (app *App) Provision(ctx caddy.Context) error {\n\t// store some references\n\tapp.logger = ctx.Logger()\n\tapp.ctx = ctx\n\n\t// provision TLS and events apps\n\ttlsAppIface, err := ctx.App(\"tls\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting tls app: %v\", err)\n\t}\n\tapp.tlsApp = tlsAppIface.(*caddytls.TLS)\n\n\teventsAppIface, err := ctx.App(\"events\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting events app: %v\", err)\n\t}\n\n\trepl := caddy.NewReplacer()\n\n\t// this provisions the matchers for each route,\n\t// and prepares auto HTTP->HTTPS redirects, and\n\t// is required before we 
provision each server\n\terr = app.automaticHTTPSPhase1(ctx, repl)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif app.Metrics != nil {\n\t\tapp.Metrics.init = sync.Once{}\n\t\tapp.Metrics.httpMetrics = &httpMetrics{}\n\t\t// Scan config for allowed hosts to prevent cardinality explosion\n\t\tapp.Metrics.scanConfigForHosts(app)\n\t}\n\t// prepare each server\n\toldContext := ctx.Context\n\tfor srvName, srv := range app.Servers {\n\t\tctx.Context = context.WithValue(oldContext, ServerCtxKey, srv)\n\t\tsrv.name = srvName\n\t\tsrv.tlsApp = app.tlsApp\n\t\tsrv.events = eventsAppIface.(*caddyevents.App)\n\t\tsrv.ctx = ctx\n\t\tsrv.logger = app.logger.Named(\"log\")\n\t\tsrv.errorLogger = app.logger.Named(\"log.error\")\n\t\tsrv.shutdownAtMu = new(sync.RWMutex)\n\n\t\tif srv.Metrics != nil {\n\t\t\tsrv.logger.Warn(\"per-server 'metrics' is deprecated; use 'metrics' in the root 'http' app instead\")\n\t\t\tapp.Metrics = cmp.Or(app.Metrics, &Metrics{\n\t\t\t\tinit:        sync.Once{},\n\t\t\t\thttpMetrics: &httpMetrics{},\n\t\t\t})\n\t\t\tapp.Metrics.PerHost = app.Metrics.PerHost || srv.Metrics.PerHost\n\t\t}\n\n\t\t// only enable access logs if configured\n\t\tif srv.Logs != nil {\n\t\t\tsrv.accessLogger = app.logger.Named(\"log.access\")\n\t\t\tif srv.Logs.Trace {\n\t\t\t\tsrv.traceLogger = app.logger.Named(\"log.trace\")\n\t\t\t}\n\t\t}\n\n\t\t// if no protocols configured explicitly, enable all except h2c\n\t\tif len(srv.Protocols) == 0 {\n\t\t\tsrv.Protocols = []string{\"h1\", \"h2\", \"h3\"}\n\t\t}\n\n\t\tsrvProtocolsUnique := map[string]struct{}{}\n\t\tfor _, srvProtocol := range srv.Protocols {\n\t\t\tsrvProtocolsUnique[srvProtocol] = struct{}{}\n\t\t}\n\n\t\tif srv.ListenProtocols != nil {\n\t\t\tif len(srv.ListenProtocols) != len(srv.Listen) {\n\t\t\t\treturn fmt.Errorf(\"server %s: listener protocols count does not match address count: %d != %d\",\n\t\t\t\t\tsrvName, len(srv.ListenProtocols), len(srv.Listen))\n\t\t\t}\n\n\t\t\tfor i, lnProtocols := range 
srv.ListenProtocols {\n\t\t\t\tif lnProtocols != nil {\n\t\t\t\t\t// populate empty listen protocols with server protocols\n\t\t\t\t\tlnProtocolsDefault := false\n\t\t\t\t\tvar lnProtocolsInclude []string\n\t\t\t\t\tsrvProtocolsInclude := maps.Clone(srvProtocolsUnique)\n\n\t\t\t\t\t// keep existing listener protocols unless they are empty\n\t\t\t\t\tfor _, lnProtocol := range lnProtocols {\n\t\t\t\t\t\tif lnProtocol == \"\" {\n\t\t\t\t\t\t\tlnProtocolsDefault = true\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tlnProtocolsInclude = append(lnProtocolsInclude, lnProtocol)\n\t\t\t\t\t\t\tdelete(srvProtocolsInclude, lnProtocol)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t// append server protocols to listener protocols if any listener protocols were empty\n\t\t\t\t\tif lnProtocolsDefault {\n\t\t\t\t\t\tfor _, srvProtocol := range srv.Protocols {\n\t\t\t\t\t\t\tif _, ok := srvProtocolsInclude[srvProtocol]; ok {\n\t\t\t\t\t\t\t\tlnProtocolsInclude = append(lnProtocolsInclude, srvProtocol)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tsrv.ListenProtocols[i] = lnProtocolsInclude\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// if not explicitly configured by the user, disallow TLS\n\t\t// client auth bypass (domain fronting) which could\n\t\t// otherwise be exploited by sending an unprotected SNI\n\t\t// value during a TLS handshake, then putting a protected\n\t\t// domain in the Host header after establishing connection;\n\t\t// this is a safe default, but we allow users to override\n\t\t// it for example in the case of running a proxy where\n\t\t// domain fronting is desired and access is not restricted\n\t\t// based on hostname\n\t\tif srv.StrictSNIHost == nil && srv.hasTLSClientAuth() {\n\t\t\tapp.logger.Warn(\"enabling strict SNI-Host enforcement because TLS client auth is configured\",\n\t\t\t\tzap.String(\"server_id\", srvName))\n\t\t\ttrueBool := true\n\t\t\tsrv.StrictSNIHost = &trueBool\n\t\t}\n\n\t\t// set up the trusted proxies source\n\t\tif srv.TrustedProxiesRaw != nil 
{\n\t\t\tval, err := ctx.LoadModule(srv, \"TrustedProxiesRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading trusted proxies modules: %v\", err)\n\t\t\t}\n\t\t\tsrv.trustedProxies = val.(IPRangeSource)\n\t\t}\n\n\t\t// set the default client IP header to read from\n\t\tif srv.ClientIPHeaders == nil {\n\t\t\tsrv.ClientIPHeaders = []string{\"X-Forwarded-For\"}\n\t\t}\n\n\t\t// process each listener address\n\t\tfor i := range srv.Listen {\n\t\t\tlnOut, err := repl.ReplaceOrErr(srv.Listen[i], true, true)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"server %s, listener %d: %v\", srvName, i, err)\n\t\t\t}\n\t\t\tsrv.Listen[i] = lnOut\n\t\t}\n\n\t\t// set up each listener modifier\n\t\tif srv.ListenerWrappersRaw != nil {\n\t\t\tvals, err := ctx.LoadModule(srv, \"ListenerWrappersRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading listener wrapper modules: %v\", err)\n\t\t\t}\n\t\t\tvar hasTLSPlaceholder bool\n\t\t\tfor i, val := range vals.([]any) {\n\t\t\t\tif _, ok := val.(*tlsPlaceholderWrapper); ok {\n\t\t\t\t\tif i == 0 {\n\t\t\t\t\t\t// putting the tls placeholder wrapper first is nonsensical because\n\t\t\t\t\t\t// that is the default, implicit setting: without it, all wrappers\n\t\t\t\t\t\t// will go after the TLS listener anyway\n\t\t\t\t\t\treturn fmt.Errorf(\"it is unnecessary to specify the TLS listener wrapper in the first position because that is the default\")\n\t\t\t\t\t}\n\t\t\t\t\tif hasTLSPlaceholder {\n\t\t\t\t\t\treturn fmt.Errorf(\"TLS listener wrapper can only be specified once\")\n\t\t\t\t\t}\n\t\t\t\t\thasTLSPlaceholder = true\n\t\t\t\t}\n\t\t\t\tsrv.listenerWrappers = append(srv.listenerWrappers, val.(caddy.ListenerWrapper))\n\t\t\t}\n\t\t\t// if any wrappers were configured but the TLS placeholder wrapper is\n\t\t\t// absent, prepend it so all defined wrappers come after the TLS\n\t\t\t// handshake; this simplifies logic when starting the server, since we\n\t\t\t// can simply assume the TLS placeholder will 
always be there\n\t\t\tif !hasTLSPlaceholder && len(srv.listenerWrappers) > 0 {\n\t\t\t\tsrv.listenerWrappers = append([]caddy.ListenerWrapper{new(tlsPlaceholderWrapper)}, srv.listenerWrappers...)\n\t\t\t}\n\t\t}\n\n\t\t// set up each packet conn modifier\n\t\tif srv.PacketConnWrappersRaw != nil {\n\t\t\tvals, err := ctx.LoadModule(srv, \"PacketConnWrappersRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading packet conn wrapper modules: %v\", err)\n\t\t\t}\n\t\t\t// if any wrappers were configured, they come before the QUIC handshake;\n\t\t\t// unlike TLS above, there is no QUIC placeholder\n\t\t\tfor _, val := range vals.([]any) {\n\t\t\t\tsrv.packetConnWrappers = append(srv.packetConnWrappers, val.(caddy.PacketConnWrapper))\n\t\t\t}\n\t\t}\n\n\t\t// pre-compile the primary handler chain, and be sure to wrap it in our\n\t\t// route handler so that important security checks are done, etc.\n\t\tprimaryRoute := emptyHandler\n\t\tif srv.Routes != nil {\n\t\t\terr := srv.Routes.ProvisionHandlers(ctx, app.Metrics)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"server %s: setting up route handlers: %v\", srvName, err)\n\t\t\t}\n\t\t\tprimaryRoute = srv.Routes.Compile(emptyHandler)\n\t\t}\n\t\tsrv.primaryHandlerChain = srv.wrapPrimaryRoute(primaryRoute)\n\n\t\t// pre-compile the error handler chain\n\t\tif srv.Errors != nil {\n\t\t\terr := srv.Errors.Routes.Provision(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"server %s: setting up error handling routes: %v\", srvName, err)\n\t\t\t}\n\t\t\tsrv.errorHandlerChain = srv.Errors.Routes.Compile(errorEmptyHandler)\n\t\t}\n\n\t\t// provision the named routes (they get compiled at runtime)\n\t\tfor name, route := range srv.NamedRoutes {\n\t\t\terr := route.Provision(ctx, app.Metrics)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"server %s: setting up named route '%s' handlers: %v\", srvName, name, err)\n\t\t\t}\n\t\t}\n\n\t\t// prepare the TLS connection policies\n\t\terr = 
srv.TLSConnPolicies.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"server %s: setting up TLS connection policies: %v\", srvName, err)\n\t\t}\n\n\t\t// if there is no idle timeout, set a sane default; users have complained\n\t\t// before that aggressive CDNs leave connections open until the server\n\t\t// closes them, so if we don't close them it leads to resource exhaustion\n\t\tif srv.IdleTimeout == 0 {\n\t\t\tsrv.IdleTimeout = defaultIdleTimeout\n\t\t}\n\t\tif srv.ReadHeaderTimeout == 0 {\n\t\t\tsrv.ReadHeaderTimeout = defaultReadHeaderTimeout // see #6663\n\t\t}\n\t}\n\tctx.Context = oldContext\n\treturn nil\n}\n\n// Validate ensures the app's configuration is valid.\nfunc (app *App) Validate() error {\n\tlnAddrs := make(map[string]string)\n\n\tfor srvName, srv := range app.Servers {\n\t\t// each server must use distinct listener addresses\n\t\tfor _, addr := range srv.Listen {\n\t\t\tlistenAddr, err := caddy.ParseNetworkAddress(addr)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"invalid listener address '%s': %v\", addr, err)\n\t\t\t}\n\t\t\t// check that every address in the port range is unique to this server;\n\t\t\t// we do not use <= here because PortRangeSize() adds 1 to EndPort for us\n\t\t\tfor i := uint(0); i < listenAddr.PortRangeSize(); i++ {\n\t\t\t\taddr := caddy.JoinNetworkAddress(listenAddr.Network, listenAddr.Host, strconv.FormatUint(uint64(listenAddr.StartPort+i), 10))\n\t\t\t\tif sn, ok := lnAddrs[addr]; ok {\n\t\t\t\t\treturn fmt.Errorf(\"server %s: listener address repeated: %s (already claimed by server '%s')\", srvName, addr, sn)\n\t\t\t\t}\n\t\t\t\tlnAddrs[addr] = srvName\n\t\t\t}\n\t\t}\n\n\t\t// logger names must not have ports\n\t\tif srv.Logs != nil {\n\t\t\tfor host := range srv.Logs.LoggerNames {\n\t\t\t\tif _, _, err := net.SplitHostPort(host); err == nil {\n\t\t\t\t\treturn fmt.Errorf(\"server %s: logger name must not have a port: %s\", srvName, host)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc 
removeTLSALPN(srv *Server, target string) {\n\tfor _, cp := range srv.TLSConnPolicies {\n\t\t// the TLSConfig was already provisioned, so... manually remove it\n\t\tfor i, np := range cp.TLSConfig.NextProtos {\n\t\t\tif np == target {\n\t\t\t\tcp.TLSConfig.NextProtos = append(cp.TLSConfig.NextProtos[:i], cp.TLSConfig.NextProtos[i+1:]...)\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\t// remove it from the parent connection policy too, just to keep things tidy\n\t\tfor i, alpn := range cp.ALPN {\n\t\t\tif alpn == target {\n\t\t\t\tcp.ALPN = append(cp.ALPN[:i], cp.ALPN[i+1:]...)\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n}\n\n// Start runs the app. It finishes automatic HTTPS if enabled,\n// including management of certificates.\nfunc (app *App) Start() error {\n\t// get a logger compatible with http.Server\n\tserverLogger, err := zap.NewStdLogAt(app.logger.Named(\"stdlib\"), zap.DebugLevel)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to set up server logger: %v\", err)\n\t}\n\n\tfor srvName, srv := range app.Servers {\n\t\tsrv.server = &http.Server{\n\t\t\tReadTimeout:       time.Duration(srv.ReadTimeout),\n\t\t\tReadHeaderTimeout: time.Duration(srv.ReadHeaderTimeout),\n\t\t\tWriteTimeout:      time.Duration(srv.WriteTimeout),\n\t\t\tIdleTimeout:       time.Duration(srv.IdleTimeout),\n\t\t\tMaxHeaderBytes:    srv.MaxHeaderBytes,\n\t\t\tHandler:           srv,\n\t\t\tErrorLog:          serverLogger,\n\t\t\tProtocols:         new(http.Protocols),\n\t\t\tConnContext: func(ctx context.Context, c net.Conn) context.Context {\n\t\t\t\tif nc, ok := c.(interface{ tlsNetConn() net.Conn }); ok {\n\t\t\t\t\tgetTlsConStateFunc := sync.OnceValue(func() *tls.ConnectionState {\n\t\t\t\t\t\ttlsConnState := nc.tlsNetConn().(connectionStater).ConnectionState()\n\t\t\t\t\t\treturn &tlsConnState\n\t\t\t\t\t})\n\t\t\t\t\tctx = context.WithValue(ctx, tlsConnectionStateFuncCtxKey, getTlsConStateFunc)\n\t\t\t\t}\n\t\t\t\treturn ctx\n\t\t\t},\n\t\t}\n\n\t\t// disable HTTP/2, which we enabled by 
default during provisioning\n\t\tif !srv.protocol(\"h2\") {\n\t\t\tsrv.server.TLSNextProto = make(map[string]func(*http.Server, *tls.Conn, http.Handler))\n\t\t\tremoveTLSALPN(srv, \"h2\")\n\t\t}\n\t\tif !srv.protocol(\"h1\") {\n\t\t\tremoveTLSALPN(srv, \"http/1.1\")\n\t\t}\n\n\t\t// configure the http versions the server will serve\n\t\tif srv.protocol(\"h1\") {\n\t\t\tsrv.server.Protocols.SetHTTP1(true)\n\t\t}\n\n\t\tif srv.protocol(\"h2\") || srv.protocol(\"h2c\") {\n\t\t\t// skip setting h2 because if NextProtos is present, its list of ALPN versions takes precedence;\n\t\t\t// it will always be present because http2.ConfigureServer populates that field.\n\t\t\t// enable h2c because some listener wrappers may wrap the connection so it is no longer a *tls.Conn.\n\t\t\t// However, we need to handle the case where the connection is h2c but h2c is not enabled; we identify\n\t\t\t// this type of connection by checking whether it is behind a TLS listener wrapper or implements tls.ConnectionState.\n\t\t\tsrv.server.Protocols.SetUnencryptedHTTP2(true)\n\t\t\t// when h2c is enabled but h2 is disabled, we already removed h2 from NextProtos,\n\t\t\t// so the handshake will never succeed with h2.\n\t\t\t// http2.ConfigureServer enables the server to handle both h2 and h2c.\n\t\t\th2server := new(http2.Server)\n\t\t\t//nolint:errcheck\n\t\t\thttp2.ConfigureServer(srv.server, h2server)\n\t\t}\n\n\t\t// this TLS config is used by the std lib to choose the actual TLS config for connections\n\t\t// by looking through the connection policies to find the first one that matches\n\t\ttlsCfg := srv.TLSConnPolicies.TLSConfig(app.ctx)\n\t\tsrv.configureServer(srv.server)\n\n\t\tfor lnIndex, lnAddr := range srv.Listen {\n\t\t\tlistenAddr, err := caddy.ParseNetworkAddress(lnAddr)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"%s: parsing listen address '%s': %v\", srvName, lnAddr, err)\n\t\t\t}\n\n\t\t\tsrv.addresses = append(srv.addresses, listenAddr)\n\n\t\t\tprotocols := 
srv.Protocols\n\t\t\tif srv.ListenProtocols != nil && srv.ListenProtocols[lnIndex] != nil {\n\t\t\t\tprotocols = srv.ListenProtocols[lnIndex]\n\t\t\t}\n\n\t\t\tprotocolsUnique := map[string]struct{}{}\n\t\t\tfor _, protocol := range protocols {\n\t\t\t\tprotocolsUnique[protocol] = struct{}{}\n\t\t\t}\n\t\t\t_, h1ok := protocolsUnique[\"h1\"]\n\t\t\t_, h2ok := protocolsUnique[\"h2\"]\n\t\t\t_, h2cok := protocolsUnique[\"h2c\"]\n\t\t\t_, h3ok := protocolsUnique[\"h3\"]\n\n\t\t\tfor portOffset := uint(0); portOffset < listenAddr.PortRangeSize(); portOffset++ {\n\t\t\t\thostport := listenAddr.JoinHostPort(portOffset)\n\n\t\t\t\t// enable TLS if there is a policy and if this is not the HTTP port\n\t\t\t\tuseTLS := len(srv.TLSConnPolicies) > 0 && int(listenAddr.StartPort+portOffset) != app.httpPort()\n\n\t\t\t\tif h1ok || h2ok && useTLS || h2cok {\n\t\t\t\t\t// create the listener for this socket\n\t\t\t\t\tlnAny, err := listenAddr.Listen(app.ctx, portOffset, net.ListenConfig{\n\t\t\t\t\t\tKeepAliveConfig: net.KeepAliveConfig{\n\t\t\t\t\t\t\tEnable:   srv.KeepAliveInterval >= 0,\n\t\t\t\t\t\t\tInterval: time.Duration(srv.KeepAliveInterval),\n\t\t\t\t\t\t\tIdle:     time.Duration(srv.KeepAliveIdle),\n\t\t\t\t\t\t\tCount:    srv.KeepAliveCount,\n\t\t\t\t\t\t},\n\t\t\t\t\t})\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"listening on %s: %v\", listenAddr.At(portOffset), err)\n\t\t\t\t\t}\n\t\t\t\t\tln, ok := lnAny.(net.Listener)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\treturn fmt.Errorf(\"network '%s' cannot handle HTTP/1 or HTTP/2 connections\", listenAddr.Network)\n\t\t\t\t\t}\n\n\t\t\t\t\t// wrap listener before TLS (up to the TLS placeholder wrapper)\n\t\t\t\t\tvar lnWrapperIdx int\n\t\t\t\t\tfor i, lnWrapper := range srv.listenerWrappers {\n\t\t\t\t\t\tif _, ok := lnWrapper.(*tlsPlaceholderWrapper); ok {\n\t\t\t\t\t\t\tlnWrapperIdx = i + 1 // mark the next wrapper's spot\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t\tln = 
lnWrapper.WrapListener(ln)\n\t\t\t\t\t}\n\n\t\t\t\t\tif useTLS {\n\t\t\t\t\t\t// create TLS listener - this enables and terminates TLS\n\t\t\t\t\t\tln = tls.NewListener(ln, tlsCfg)\n\t\t\t\t\t}\n\n\t\t\t\t\t// finish wrapping listener where we left off before TLS\n\t\t\t\t\tfor i := lnWrapperIdx; i < len(srv.listenerWrappers); i++ {\n\t\t\t\t\t\tln = srv.listenerWrappers[i].WrapListener(ln)\n\t\t\t\t\t}\n\n\t\t\t\t\t// check if the connection is h2c\n\t\t\t\t\tln = &http2Listener{\n\t\t\t\t\t\tuseTLS:   useTLS,\n\t\t\t\t\t\tuseH1:    h1ok,\n\t\t\t\t\t\tuseH2:    h2ok || h2cok,\n\t\t\t\t\t\tListener: ln,\n\t\t\t\t\t\tlogger:   app.logger,\n\t\t\t\t\t}\n\n\t\t\t\t\t// if binding to port 0, the OS chooses a port for us;\n\t\t\t\t\t// but the user won't know the port unless we print it\n\t\t\t\t\tif !listenAddr.IsUnixNetwork() && !listenAddr.IsFdNetwork() && listenAddr.StartPort == 0 && listenAddr.EndPort == 0 {\n\t\t\t\t\t\tapp.logger.Info(\"port 0 listener\",\n\t\t\t\t\t\t\tzap.String(\"input_address\", lnAddr),\n\t\t\t\t\t\t\tzap.String(\"actual_address\", ln.Addr().String()))\n\t\t\t\t\t}\n\n\t\t\t\t\tapp.logger.Debug(\"starting server loop\",\n\t\t\t\t\t\tzap.String(\"address\", ln.Addr().String()),\n\t\t\t\t\t\tzap.Bool(\"tls\", useTLS),\n\t\t\t\t\t\tzap.Bool(\"http3\", srv.h3server != nil))\n\n\t\t\t\t\tsrv.listeners = append(srv.listeners, ln)\n\n\t\t\t\t\t//nolint:errcheck\n\t\t\t\t\tgo srv.server.Serve(ln)\n\t\t\t\t}\n\n\t\t\t\tif h2ok && !useTLS {\n\t\t\t\t\t// Can only serve h2 with TLS enabled\n\t\t\t\t\tapp.logger.Warn(\"HTTP/2 skipped because it requires TLS\",\n\t\t\t\t\t\tzap.String(\"network\", listenAddr.Network),\n\t\t\t\t\t\tzap.String(\"addr\", hostport))\n\t\t\t\t}\n\n\t\t\t\tif h3ok {\n\t\t\t\t\t// Can't serve HTTP/3 on the same socket as HTTP/1 and 2 because it uses\n\t\t\t\t\t// a different transport mechanism... 
which is fine, but the OS doesn't\n\t\t\t\t\t// differentiate between a SOCK_STREAM file and a SOCK_DGRAM file; they\n\t\t\t\t\t// are still one file on the system. So even though \"unixpacket\" and\n\t\t\t\t\t// \"unixgram\" are different network types just as \"tcp\" and \"udp\" are,\n\t\t\t\t\t// the OS will not let us use the same file as both STREAM and DGRAM.\n\t\t\t\t\tif listenAddr.IsUnixNetwork() {\n\t\t\t\t\t\tapp.logger.Warn(\"HTTP/3 disabled because Unix can't multiplex STREAM and DGRAM on same socket\",\n\t\t\t\t\t\t\tzap.String(\"file\", hostport))\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\n\t\t\t\t\tif useTLS {\n\t\t\t\t\t\t// enable HTTP/3 if configured\n\t\t\t\t\t\tapp.logger.Info(\"enabling HTTP/3 listener\", zap.String(\"addr\", hostport))\n\t\t\t\t\t\tif err := srv.serveHTTP3(listenAddr.At(portOffset), tlsCfg); err != nil {\n\t\t\t\t\t\t\treturn err\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\t// Can only serve h3 with TLS enabled\n\t\t\t\t\t\tapp.logger.Warn(\"HTTP/3 skipped because it requires TLS\",\n\t\t\t\t\t\t\tzap.String(\"network\", listenAddr.Network),\n\t\t\t\t\t\t\tzap.String(\"addr\", hostport))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tsrv.logger.Info(\"server running\",\n\t\t\tzap.String(\"name\", srvName),\n\t\t\tzap.Strings(\"protocols\", srv.Protocols))\n\t}\n\n\t// finish automatic HTTPS by finally beginning\n\t// certificate management\n\terr = app.automaticHTTPSPhase2()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"finalizing automatic HTTPS: %v\", err)\n\t}\n\n\treturn nil\n}\n\n// Stop gracefully shuts down the HTTP server.\nfunc (app *App) Stop() error {\n\tctx := context.Background()\n\n\t// see if any listeners in our config will be closing or if they are continuing\n\t// through a reload; because if any are closing, we will enforce shutdown delay\n\tvar delay bool\n\tscheduledTime := time.Now().Add(time.Duration(app.ShutdownDelay))\n\tif app.ShutdownDelay > 0 {\n\t\tfor _, server := range app.Servers {\n\t\t\tfor _, na 
:= range server.addresses {\n\t\t\t\tfor _, addr := range na.Expand() {\n\t\t\t\t\tif caddy.ListenerUsage(addr.Network, addr.JoinHostPort(0)) < 2 {\n\t\t\t\t\t\tapp.logger.Debug(\"listener closing and shutdown delay is configured\", zap.String(\"address\", addr.String()))\n\t\t\t\t\t\tserver.shutdownAtMu.Lock()\n\t\t\t\t\t\tserver.shutdownAt = scheduledTime\n\t\t\t\t\t\tserver.shutdownAtMu.Unlock()\n\t\t\t\t\t\tdelay = true\n\t\t\t\t\t} else {\n\t\t\t\t\t\tapp.logger.Debug(\"shutdown delay configured but listener will remain open\", zap.String(\"address\", addr.String()))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// honor scheduled/delayed shutdown time\n\tif delay {\n\t\tapp.logger.Info(\"shutdown scheduled\",\n\t\t\tzap.Duration(\"delay_duration\", time.Duration(app.ShutdownDelay)),\n\t\t\tzap.Time(\"time\", scheduledTime))\n\t\ttime.Sleep(time.Duration(app.ShutdownDelay))\n\t}\n\n\t// enforce grace period if configured\n\tif app.GracePeriod > 0 {\n\t\tvar cancel context.CancelFunc\n\t\ttimeout := time.Duration(app.GracePeriod)\n\t\tctx, cancel = context.WithTimeoutCause(ctx, timeout, fmt.Errorf(\"server graceful shutdown %ds timeout\", int(timeout.Seconds())))\n\t\tdefer cancel()\n\t\tapp.logger.Info(\"servers shutting down; grace period initiated\", zap.Duration(\"duration\", timeout))\n\t} else {\n\t\tapp.logger.Info(\"servers shutting down with eternal grace period\")\n\t}\n\n\t// goroutines aren't guaranteed to be scheduled right away,\n\t// so we'll use one WaitGroup to wait for all the goroutines\n\t// to start their server shutdowns, and another to wait for\n\t// them to finish; we'll always block for them to start so\n\t// that when we return the caller can be confident* that the\n\t// old servers are no longer accepting new connections\n\t// (* the scheduler might still pause them right before\n\t// calling Shutdown(), but it's unlikely)\n\tvar startedShutdown, finishedShutdown sync.WaitGroup\n\n\t// these will run in goroutines\n\tstopServer := 
func(server *Server) {\n\t\tdefer finishedShutdown.Done()\n\t\tstartedShutdown.Done()\n\n\t\t// possible if server failed to Start\n\t\tif server.server == nil {\n\t\t\treturn\n\t\t}\n\n\t\tif err := server.server.Shutdown(ctx); err != nil {\n\t\t\tif cause := context.Cause(ctx); cause != nil && errors.Is(err, context.DeadlineExceeded) {\n\t\t\t\terr = cause\n\t\t\t}\n\t\t\tapp.logger.Error(\"server shutdown\",\n\t\t\t\tzap.Error(err),\n\t\t\t\tzap.Strings(\"addresses\", server.Listen))\n\t\t}\n\t}\n\tstopH3Server := func(server *Server) {\n\t\tdefer finishedShutdown.Done()\n\t\tstartedShutdown.Done()\n\n\t\tif server.h3server == nil {\n\t\t\treturn\n\t\t}\n\n\t\t// closing quic listeners won't affect accepted connections now\n\t\t// so like stdlib, close listeners first, but keep the net.PacketConns open\n\t\tfor _, h3ln := range server.quicListeners {\n\t\t\tif err := h3ln.Close(); err != nil {\n\t\t\t\tapp.logger.Error(\"http3 listener close\",\n\t\t\t\t\tzap.Error(err))\n\t\t\t}\n\t\t}\n\n\t\tif err := server.h3server.Shutdown(ctx); err != nil {\n\t\t\tif cause := context.Cause(ctx); cause != nil && errors.Is(err, context.DeadlineExceeded) {\n\t\t\t\terr = cause\n\t\t\t}\n\t\t\tapp.logger.Error(\"HTTP/3 server shutdown\",\n\t\t\t\tzap.Error(err),\n\t\t\t\tzap.Strings(\"addresses\", server.Listen))\n\t\t}\n\n\t\t// close the underlying net.PacketConns now\n\t\t// see the comment for ListenQUIC\n\t\tfor _, h3ln := range server.quicListeners {\n\t\t\tif err := h3ln.Close(); err != nil {\n\t\t\t\tapp.logger.Error(\"http3 listener close socket\",\n\t\t\t\t\tzap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\n\tfor _, server := range app.Servers {\n\t\tstartedShutdown.Add(2)\n\t\tfinishedShutdown.Add(2)\n\t\tgo stopServer(server)\n\t\tgo stopH3Server(server)\n\t}\n\n\t// block until all the goroutines have been run by the scheduler;\n\t// this means that they have likely called Shutdown() by now\n\tstartedShutdown.Wait()\n\n\t// if the process is exiting, we need to block here 
and wait\n\t// for the grace periods to complete, otherwise the process will\n\t// terminate before the servers are finished shutting down; but\n\t// we don't really need to wait for the grace period to finish\n\t// if the process isn't exiting (but note that frequent config\n\t// reloads with long grace periods for a sustained length of time\n\t// may deplete resources)\n\tif caddy.Exiting() {\n\t\tfinishedShutdown.Wait()\n\t}\n\n\t// run stop callbacks now that the server shutdowns are complete\n\tfor name, s := range app.Servers {\n\t\tfor _, stopHook := range s.onStopFuncs {\n\t\t\tif err := stopHook(ctx); err != nil {\n\t\t\t\tapp.logger.Error(\"server stop hook\", zap.String(\"server\", name), zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\n\tapp.stopped = true\n\treturn nil\n}\n\n// Cleanup will close remaining listeners if they still remain\n// because some of the servers fail to start.\n// It simply calls Stop because Stop won't be called when Start fails.\nfunc (app *App) Cleanup() error {\n\tif app.stopped {\n\t\treturn nil\n\t}\n\treturn app.Stop()\n}\n\nfunc (app *App) httpPort() int {\n\tif app.HTTPPort == 0 {\n\t\treturn DefaultHTTPPort\n\t}\n\treturn app.HTTPPort\n}\n\nfunc (app *App) httpsPort() int {\n\tif app.HTTPSPort == 0 {\n\t\treturn DefaultHTTPSPort\n\t}\n\treturn app.HTTPSPort\n}\n\nconst (\n\t// defaultIdleTimeout is the default HTTP server timeout\n\t// for closing idle connections; useful to avoid resource\n\t// exhaustion behind hungry CDNs, for example (we've had\n\t// several complaints without this).\n\tdefaultIdleTimeout = caddy.Duration(5 * time.Minute)\n\n\t// defaultReadHeaderTimeout is the default timeout for\n\t// reading HTTP headers from clients. 
Headers are generally\n\t// small, often less than 1 KB, so it shouldn't take a\n\t// long time even on legitimately slow connections or\n\t// busy servers to read it.\n\tdefaultReadHeaderTimeout = caddy.Duration(time.Minute)\n)\n\n// Interface guards\nvar (\n\t_ caddy.App         = (*App)(nil)\n\t_ caddy.Provisioner = (*App)(nil)\n\t_ caddy.Validator   = (*App)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/autohttps.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\n// AutoHTTPSConfig is used to disable automatic HTTPS\n// or certain aspects of it for a specific server.\n// HTTPS is enabled automatically and by default when\n// qualifying hostnames are available from the config.\ntype AutoHTTPSConfig struct {\n\t// If true, automatic HTTPS will be entirely disabled,\n\t// including certificate management and redirects.\n\tDisabled bool `json:\"disable,omitempty\"`\n\n\t// If true, only automatic HTTP->HTTPS redirects will\n\t// be disabled, but other auto-HTTPS features will\n\t// remain enabled.\n\tDisableRedir bool `json:\"disable_redirects,omitempty\"`\n\n\t// If true, automatic certificate management will be\n\t// disabled, but other auto-HTTPS features will\n\t// remain enabled.\n\tDisableCerts bool `json:\"disable_certificates,omitempty\"`\n\n\t// Hosts/domain names listed here will not be included\n\t// in automatic HTTPS (they will not have certificates\n\t// loaded nor redirects applied).\n\tSkip []string `json:\"skip,omitempty\"`\n\n\t// Hosts/domain names listed here will 
still be enabled\n\t// for automatic HTTPS (unless in the Skip list), except\n\t// that certificates will not be provisioned and managed\n\t// for these names.\n\tSkipCerts []string `json:\"skip_certificates,omitempty\"`\n\n\t// By default, automatic HTTPS will obtain and renew\n\t// certificates for qualifying hostnames. However, if\n\t// a certificate with a matching SAN is already loaded\n\t// into the cache, certificate management will not be\n\t// enabled. To force automated certificate management\n\t// regardless of loaded certificates, set this to true.\n\tIgnoreLoadedCerts bool `json:\"ignore_loaded_certificates,omitempty\"`\n}\n\n// automaticHTTPSPhase1 provisions all route matchers, determines\n// which domain names found in the routes qualify for automatic\n// HTTPS, and sets up HTTP->HTTPS redirects. This phase must occur\n// at the beginning of provisioning, because it may add routes and\n// even servers to the app, which still need to be set up with the\n// rest of them during provisioning.\nfunc (app *App) automaticHTTPSPhase1(ctx caddy.Context, repl *caddy.Replacer) error {\n\tlogger := app.logger.Named(\"auto_https\")\n\n\t// this map acts as a set to store the domain names\n\t// for which we will manage certificates automatically\n\tuniqueDomainsForCerts := make(map[string]struct{})\n\n\t// this maps domain names for automatic HTTP->HTTPS\n\t// redirects to their destination server addresses\n\t// (there might be more than 1 if bind is used; see\n\t// https://github.com/caddyserver/caddy/issues/3443)\n\tredirDomains := make(map[string][]caddy.NetworkAddress)\n\n\t// the log configuration for an HTTPS enabled server\n\tvar logCfg *ServerLogConfig\n\n\t// Sort server names to ensure deterministic iteration.\n\t// This prevents race conditions where the order of server processing\n\t// could affect which server gets assigned the HTTP->HTTPS redirect listener.\n\tsrvNames := make([]string, 0, len(app.Servers))\n\tfor name := range app.Servers 
{\n\t\tsrvNames = append(srvNames, name)\n\t}\n\tslices.Sort(srvNames)\n\tfor _, srvName := range srvNames {\n\t\tsrv := app.Servers[srvName]\n\t\t// as a prerequisite, provision route matchers; this is\n\t\t// required for all routes on all servers, and must be\n\t\t// done before we attempt to do phase 1 of auto HTTPS,\n\t\t// since we have to access the decoded host matchers (the\n\t\t// handlers will be provisioned later)\n\t\tif srv.Routes != nil {\n\t\t\terr := srv.Routes.ProvisionMatchers(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"server %s: setting up route matchers: %v\", srvName, err)\n\t\t\t}\n\t\t}\n\n\t\t// prepare for automatic HTTPS\n\t\tif srv.AutoHTTPS == nil {\n\t\t\tsrv.AutoHTTPS = new(AutoHTTPSConfig)\n\t\t}\n\t\tif srv.AutoHTTPS.Disabled {\n\t\t\tlogger.Info(\"automatic HTTPS is completely disabled for server\", zap.String(\"server_name\", srvName))\n\t\t\tcontinue\n\t\t}\n\n\t\t// skip if all listeners use the HTTP port\n\t\tif !srv.listenersUseAnyPortOtherThan(app.httpPort()) {\n\t\t\tlogger.Warn(\"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server\",\n\t\t\t\tzap.String(\"server_name\", srvName),\n\t\t\t\tzap.Int(\"http_port\", app.httpPort()),\n\t\t\t)\n\t\t\tsrv.AutoHTTPS.Disabled = true\n\t\t\tcontinue\n\t\t}\n\n\t\t// if all listeners are on the HTTPS port, make sure\n\t\t// there is at least one TLS connection policy; it\n\t\t// should be obvious that they want to use TLS without\n\t\t// needing to specify one empty policy to enable it\n\t\tif srv.TLSConnPolicies == nil &&\n\t\t\t!srv.listenersUseAnyPortOtherThan(app.httpsPort()) {\n\t\t\tlogger.Info(\"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS\",\n\t\t\t\tzap.String(\"server_name\", srvName),\n\t\t\t\tzap.Int(\"https_port\", app.httpsPort()),\n\t\t\t)\n\t\t\tsrv.TLSConnPolicies = caddytls.ConnectionPolicies{new(caddytls.ConnectionPolicy)}\n\t\t}\n\n\t\t// find all 
qualifying domain names (deduplicated) in this server\n\t\t// (this is where we need the provisioned, decoded request matchers)\n\t\tserverDomainSet := make(map[string]struct{})\n\t\tfor routeIdx, route := range srv.Routes {\n\t\t\tfor matcherSetIdx, matcherSet := range route.MatcherSets {\n\t\t\t\tfor matcherIdx, m := range matcherSet {\n\t\t\t\t\tif hm, ok := m.(*MatchHost); ok {\n\t\t\t\t\t\tfor hostMatcherIdx, d := range *hm {\n\t\t\t\t\t\t\tvar err error\n\t\t\t\t\t\t\td, err = repl.ReplaceOrErr(d, true, false)\n\t\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\t\treturn fmt.Errorf(\"%s: route %d, matcher set %d, matcher %d, host matcher %d: %v\",\n\t\t\t\t\t\t\t\t\tsrvName, routeIdx, matcherSetIdx, matcherIdx, hostMatcherIdx, err)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif !slices.Contains(srv.AutoHTTPS.Skip, d) {\n\t\t\t\t\t\t\t\tserverDomainSet[d] = struct{}{}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// build the list of domains that could be used with ECH (if enabled)\n\t\t// so the TLS app can know to publish ECH configs for them\n\t\techDomains := make([]string, 0, len(serverDomainSet))\n\t\tfor d := range serverDomainSet {\n\t\t\techDomains = append(echDomains, d)\n\t\t}\n\t\tapp.tlsApp.RegisterServerNames(echDomains)\n\n\t\t// nothing more to do here if there are no domains that qualify for\n\t\t// automatic HTTPS and there are no explicit TLS connection policies:\n\t\t// if there is at least one domain but no TLS conn policy (F&&T), we'll\n\t\t// add one below; if there are no domains but at least one TLS conn\n\t\t// policy (meaning TLS is enabled) (T&&F), it could be a catch-all with\n\t\t// on-demand TLS -- and in that case we would still need HTTP->HTTPS\n\t\t// redirects, which we set up below; hence these two conditions\n\t\tif len(serverDomainSet) == 0 && len(srv.TLSConnPolicies) == 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\t// clone the logger so we can apply it to the HTTP server\n\t\t// (not sure if necessary to clone it; but 
probably safer)\n\t\t// (we choose one log cfg arbitrarily; not sure which is best)\n\t\tif srv.Logs != nil {\n\t\t\tlogCfg = srv.Logs.clone()\n\t\t}\n\n\t\t// for all the hostnames we found, filter them so we have\n\t\t// a deduplicated list of names for which to obtain certs\n\t\t// (only if cert management not disabled for this server)\n\t\tif srv.AutoHTTPS.DisableCerts {\n\t\t\tlogger.Warn(\"skipping automated certificate management for server because it is disabled\", zap.String(\"server_name\", srvName))\n\t\t} else {\n\t\t\tfor d := range serverDomainSet {\n\t\t\t\tif certmagic.SubjectQualifiesForCert(d) &&\n\t\t\t\t\t!slices.Contains(srv.AutoHTTPS.SkipCerts, d) {\n\t\t\t\t\t// if a certificate for this name is already loaded,\n\t\t\t\t\t// don't obtain another one for it, unless we are\n\t\t\t\t\t// supposed to ignore loaded certificates\n\t\t\t\t\tif !srv.AutoHTTPS.IgnoreLoadedCerts && app.tlsApp.HasCertificateForSubject(d) {\n\t\t\t\t\t\tlogger.Info(\"skipping automatic certificate management because one or more matching certificates are already loaded\",\n\t\t\t\t\t\t\tzap.String(\"domain\", d),\n\t\t\t\t\t\t\tzap.String(\"server_name\", srvName),\n\t\t\t\t\t\t)\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\n\t\t\t\t\t// most clients don't accept wildcards like *.tld... 
we\n\t\t\t\t\t// can handle that, but as a courtesy, warn the user\n\t\t\t\t\tif strings.Contains(d, \"*\") &&\n\t\t\t\t\t\tstrings.Count(strings.Trim(d, \".\"), \".\") == 1 {\n\t\t\t\t\t\tlogger.Warn(\"most clients do not trust second-level wildcard certificates (*.tld)\",\n\t\t\t\t\t\t\tzap.String(\"domain\", d))\n\t\t\t\t\t}\n\n\t\t\t\t\tuniqueDomainsForCerts[d] = struct{}{}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// tell the server to use TLS if it is not already doing so\n\t\tif srv.TLSConnPolicies == nil {\n\t\t\tsrv.TLSConnPolicies = caddytls.ConnectionPolicies{new(caddytls.ConnectionPolicy)}\n\t\t}\n\n\t\t// nothing left to do if auto redirects are disabled\n\t\tif srv.AutoHTTPS.DisableRedir {\n\t\t\tlogger.Info(\"automatic HTTP->HTTPS redirects are disabled\", zap.String(\"server_name\", srvName))\n\t\t\tcontinue\n\t\t}\n\n\t\tlogger.Info(\"enabling automatic HTTP->HTTPS redirects\", zap.String(\"server_name\", srvName))\n\n\t\t// create HTTP->HTTPS redirects\n\t\tfor _, listenAddr := range srv.Listen {\n\t\t\t// figure out the address we will redirect to...\n\t\t\taddr, err := caddy.ParseNetworkAddress(listenAddr)\n\t\t\tif err != nil {\n\t\t\t\tmsg := \"%s: invalid listener address: %v\"\n\t\t\t\tif strings.Count(listenAddr, \":\") > 1 {\n\t\t\t\t\tmsg = msg + \", there are too many colons, so the port is ambiguous. Did you mean to wrap the IPv6 address with [] brackets?\"\n\t\t\t\t}\n\t\t\t\treturn fmt.Errorf(msg, srvName, listenAddr)\n\t\t\t}\n\n\t\t\t// this address might not have a hostname, i.e. might be a\n\t\t\t// catch-all address for a particular port; we need to keep\n\t\t\t// track if it is, so we can set up redirects for it anyway\n\t\t\t// (e.g. 
the user might have enabled on-demand TLS); we use\n\t\t\t// an empty string to indicate a catch-all, which we have to\n\t\t\t// treat special later\n\t\t\tif len(serverDomainSet) == 0 {\n\t\t\t\tredirDomains[\"\"] = append(redirDomains[\"\"], addr)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// ...and associate it with each domain in this server\n\t\t\tfor d := range serverDomainSet {\n\t\t\t\t// if this domain is used on more than one HTTPS-enabled\n\t\t\t\t// port, we'll have to choose one, so prefer the HTTPS port\n\t\t\t\tif _, ok := redirDomains[d]; !ok ||\n\t\t\t\t\taddr.StartPort == uint(app.httpsPort()) {\n\t\t\t\t\tredirDomains[d] = append(redirDomains[d], addr)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// if all servers have auto_https disabled and no domains need certs,\n\t// skip the rest of the TLS automation setup to avoid creating\n\t// unnecessary PKI infrastructure and automation policies\n\tallServersDisabled := true\n\tfor _, srv := range app.Servers {\n\t\tif srv.AutoHTTPS == nil || !srv.AutoHTTPS.Disabled {\n\t\t\tallServersDisabled = false\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif allServersDisabled && len(uniqueDomainsForCerts) == 0 {\n\t\tlogger.Debug(\"all servers have automatic HTTPS disabled and no domains need certificates, skipping TLS automation setup\")\n\t\treturn nil\n\t}\n\n\t// we now have a list of all the unique names for which we need certs\n\tvar internal, tailscale []string\nuniqueDomainsLoop:\n\tfor d := range uniqueDomainsForCerts {\n\t\t// some names we've found might already have automation policies\n\t\t// explicitly specified for them; we should exclude those from\n\t\t// our hidden/implicit policy, since applying a name to more than\n\t\t// one automation policy would be confusing and an error\n\t\tif app.tlsApp.Automation != nil {\n\t\t\tfor _, ap := range app.tlsApp.Automation.Policies {\n\t\t\t\tfor _, apHost := range ap.Subjects() {\n\t\t\t\t\tif apHost == d {\n\t\t\t\t\t\t// if the automation policy has all internal subjects but no 
issuers,\n\t\t\t\t\t\t// it will default to CertMagic's issuers which are public CAs; use\n\t\t\t\t\t\t// our internal issuer instead\n\t\t\t\t\t\tif len(ap.Issuers) == 0 && ap.AllInternalSubjects() {\n\t\t\t\t\t\t\tiss := new(caddytls.InternalIssuer)\n\t\t\t\t\t\t\tif err := iss.Provision(ctx); err != nil {\n\t\t\t\t\t\t\t\treturn err\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tap.Issuers = append(ap.Issuers, iss)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcontinue uniqueDomainsLoop\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// if no automation policy exists for the name yet, we will associate it with an implicit one;\n\t\t// we handle tailscale domains specially, and we also separate out identifiers that need the\n\t\t// internal issuer (self-signed certs); certmagic does not consider public IP addresses to be\n\t\t// disqualified for public certs, because there are public CAs that will issue certs for IPs.\n\t\t// However, with auto-HTTPS, many times there is no issuer explicitly defined, and the default\n\t\t// issuers do not (currently, as of 2024) issue IP certificates; so assign all IP subjects to\n\t\t// the internal issuer when there are no explicit automation policies\n\t\tshouldUseInternal := func(ident string) bool {\n\t\t\tusingDefaultIssuersAndIsIP := certmagic.SubjectIsIP(ident) &&\n\t\t\t\t(app.tlsApp == nil || app.tlsApp.Automation == nil || len(app.tlsApp.Automation.Policies) == 0)\n\t\t\treturn !certmagic.SubjectQualifiesForPublicCert(ident) || usingDefaultIssuersAndIsIP\n\t\t}\n\t\tif isTailscaleDomain(d) {\n\t\t\ttailscale = append(tailscale, d)\n\t\t\tdelete(uniqueDomainsForCerts, d) // not managed by us; handled separately\n\t\t} else if shouldUseInternal(d) {\n\t\t\tinternal = append(internal, d)\n\t\t}\n\t}\n\n\t// ensure there is an automation policy to handle these certs\n\terr := app.createAutomationPolicies(ctx, internal, tailscale)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// we need to reduce the mapping, i.e. 
group domains by address\n\t// since new routes are appended to servers by their address\n\tdomainsByAddr := make(map[string][]string)\n\tfor domain, addrs := range redirDomains {\n\t\tfor _, addr := range addrs {\n\t\t\taddrStr := addr.String()\n\t\t\tdomainsByAddr[addrStr] = append(domainsByAddr[addrStr], domain)\n\t\t}\n\t}\n\n\t// these keep track of the redirect server address(es)\n\t// and the routes for those servers which actually\n\t// respond with the redirects\n\tredirServerAddrs := make(map[string]struct{})\n\tredirServers := make(map[string][]Route)\n\tvar redirRoutes RouteList\n\n\tfor addrStr, domains := range domainsByAddr {\n\t\t// build the matcher set for this redirect route; (note that we happen\n\t\t// to bypass Provision and Validate steps for these matcher modules)\n\t\tmatcherSet := MatcherSet{MatchProtocol(\"http\")}\n\t\t// match on known domain names, unless it's our special case of a\n\t\t// catch-all which is an empty string (common among catch-all sites\n\t\t// that enable on-demand TLS for yet-unknown domain names)\n\t\tif len(domains) != 1 || domains[0] != \"\" {\n\t\t\tmatcherSet = append(matcherSet, MatchHost(domains))\n\t\t}\n\n\t\taddr, err := caddy.ParseNetworkAddress(addrStr)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tredirRoute := app.makeRedirRoute(addr.StartPort, matcherSet)\n\n\t\t// use the network/host information from the address,\n\t\t// but change the port to the HTTP port then rebuild\n\t\tredirAddr := addr\n\t\tredirAddr.StartPort = uint(app.httpPort())\n\t\tredirAddr.EndPort = redirAddr.StartPort\n\t\tredirAddrStr := redirAddr.String()\n\n\t\tredirServers[redirAddrStr] = append(redirServers[redirAddrStr], redirRoute)\n\t}\n\n\t// on-demand TLS means that hostnames may be used which are not\n\t// explicitly defined in the config, and we still need to redirect\n\t// those; so we can append a single catch-all route (notice there\n\t// is no Host matcher) after the other redirect routes which will\n\t// allow us 
to handle unexpected/new hostnames... however, it's\n\t// not entirely clear what the redirect destination should be,\n\t// so I'm going to just hard-code the app's HTTPS port and call\n\t// it good for now...\n\t// TODO: This implies that all plaintext requests will be blindly\n\t// redirected to their HTTPS equivalent, even if this server\n\t// doesn't handle that hostname at all; I don't think this is a\n\t// bad thing, and it also obscures the actual hostnames that this\n\t// server is configured to match on, which may be desirable, but\n\t// it's not something that should be relied on. We can change this\n\t// if we want to.\n\tappendCatchAll := func(routes []Route) []Route {\n\t\treturn append(routes, app.makeRedirRoute(uint(app.httpsPort()), MatcherSet{MatchProtocol(\"http\")}))\n\t}\n\n\t// Sort redirect addresses to ensure deterministic processing\n\tredirServerAddrsSorted := make([]string, 0, len(redirServers))\n\tfor addr := range redirServers {\n\t\tredirServerAddrsSorted = append(redirServerAddrsSorted, addr)\n\t}\n\tslices.Sort(redirServerAddrsSorted)\n\nredirServersLoop:\n\tfor _, redirServerAddr := range redirServerAddrsSorted {\n\t\troutes := redirServers[redirServerAddr]\n\t\t// for each redirect listener, see if there's already a\n\t\t// server configured to listen on that exact address; if so,\n\t\t// insert the redirect route to the end of its route list\n\t\t// after any other routes with host matchers; otherwise,\n\t\t// we'll create a new server for all the listener addresses\n\t\t// that are unused and serve the remaining redirects from it\n\n\t\t// Sort redirect routes by host specificity to ensure exact matches\n\t\t// take precedence over wildcards, preventing ambiguous routing.\n\t\tslices.SortFunc(routes, func(a, b Route) int {\n\t\t\thostA := getFirstHostFromRoute(a)\n\t\t\thostB := getFirstHostFromRoute(b)\n\n\t\t\t// Catch-all routes (empty host) have the lowest priority\n\t\t\tif hostA == \"\" && hostB != \"\" {\n\t\t\t\treturn 
1\n\t\t\t}\n\t\t\tif hostB == \"\" && hostA != \"\" {\n\t\t\t\treturn -1\n\t\t\t}\n\n\t\t\thasWildcardA := strings.Contains(hostA, \"*\")\n\t\t\thasWildcardB := strings.Contains(hostB, \"*\")\n\n\t\t\t// Exact domains take precedence over wildcards\n\t\t\tif !hasWildcardA && hasWildcardB {\n\t\t\t\treturn -1\n\t\t\t}\n\t\t\tif hasWildcardA && !hasWildcardB {\n\t\t\t\treturn 1\n\t\t\t}\n\n\t\t\t// If both are exact or both are wildcards, the longer one is more specific\n\t\t\tif len(hostA) != len(hostB) {\n\t\t\t\treturn len(hostB) - len(hostA)\n\t\t\t}\n\n\t\t\t// Tie-breaker: alphabetical order to ensure determinism\n\t\t\treturn strings.Compare(hostA, hostB)\n\t\t})\n\n\t\t// Use the sorted srvNames to consistently find the target server\n\t\tfor _, srvName := range srvNames {\n\t\t\tsrv := app.Servers[srvName]\n\t\t\t// only look at servers which listen on an address which\n\t\t\t// we want to add redirects to\n\t\t\tif !srv.hasListenerAddress(redirServerAddr) {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// find the index of the route after the last route with a host\n\t\t\t// matcher, then insert the redirects there, but before any\n\t\t\t// user-defined catch-all routes\n\t\t\t// see https://github.com/caddyserver/caddy/issues/3212\n\t\t\tinsertIndex := srv.findLastRouteWithHostMatcher()\n\n\t\t\t// add the redirects at the insert index, except for when\n\t\t\t// we have a catch-all for HTTPS, in which case the user's\n\t\t\t// defined catch-all should take precedence. 
See #4829\n\t\t\tif len(uniqueDomainsForCerts) != 0 {\n\t\t\t\tsrv.Routes = append(srv.Routes[:insertIndex], append(routes, srv.Routes[insertIndex:]...)...)\n\t\t\t}\n\n\t\t\t// append our catch-all route in case the user didn't define their own\n\t\t\tsrv.Routes = appendCatchAll(srv.Routes)\n\n\t\t\tcontinue redirServersLoop\n\t\t}\n\n\t\t// no server with this listener address exists;\n\t\t// save this address and route for custom server\n\t\tredirServerAddrs[redirServerAddr] = struct{}{}\n\t\tredirRoutes = append(redirRoutes, routes...)\n\t}\n\n\t// if there are routes remaining which do not belong\n\t// in any existing server, make our own to serve the\n\t// rest of the redirects\n\tif len(redirServerAddrs) > 0 {\n\t\tredirServerAddrsList := make([]string, 0, len(redirServerAddrs))\n\t\tfor a := range redirServerAddrs {\n\t\t\tredirServerAddrsList = append(redirServerAddrsList, a)\n\t\t}\n\t\tapp.Servers[\"remaining_auto_https_redirects\"] = &Server{\n\t\t\tListen: redirServerAddrsList,\n\t\t\tRoutes: appendCatchAll(redirRoutes),\n\t\t\tLogs:   logCfg,\n\t\t}\n\t}\n\n\t// persist the domains/IPs we're managing certs for through provisioning/startup\n\tapp.allCertDomains = uniqueDomainsForCerts\n\n\tlogger.Debug(\"adjusted config\",\n\t\tzap.Reflect(\"tls\", app.tlsApp),\n\t\tzap.Reflect(\"http\", app))\n\n\treturn nil\n}\n\nfunc (app *App) makeRedirRoute(redirToPort uint, matcherSet MatcherSet) Route {\n\tredirTo := \"https://{http.request.host}\"\n\n\t// since this is an external redirect, we should only append an explicit\n\t// port if we know it is not the officially standardized HTTPS port, and,\n\t// notably, also not the port that Caddy thinks is the HTTPS port (the\n\t// configurable HTTPSPort parameter) - we can't change the standard HTTPS\n\t// port externally, so that config parameter is for internal use only;\n\t// we also do not append the port if it happens to be the HTTP port as\n\t// well, obviously (for example, user defines the HTTP port 
explicitly\n\t// in the list of listen addresses for a server)\n\tif redirToPort != uint(app.httpPort()) &&\n\t\tredirToPort != uint(app.httpsPort()) &&\n\t\tredirToPort != DefaultHTTPPort &&\n\t\tredirToPort != DefaultHTTPSPort {\n\t\tredirTo += \":\" + strconv.Itoa(int(redirToPort))\n\t}\n\n\tredirTo += \"{http.request.uri}\"\n\treturn Route{\n\t\tMatcherSets: []MatcherSet{matcherSet},\n\t\tHandlers: []MiddlewareHandler{\n\t\t\tStaticResponse{\n\t\t\t\tStatusCode: WeakString(strconv.Itoa(http.StatusPermanentRedirect)),\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Location\": []string{redirTo},\n\t\t\t\t},\n\t\t\t\tClose: true,\n\t\t\t},\n\t\t},\n\t}\n}\n\n// createAutomationPolicies ensures that automated certificates for this\n// app are managed properly. This adds up to two automation policies:\n// one for the public names, and one for the internal names. If a catch-all\n// automation policy exists, it will be shallow-copied and used as the\n// base for the new ones (this is important for preserving behavior the\n// user intends to be \"defaults\").\nfunc (app *App) createAutomationPolicies(ctx caddy.Context, internalNames, tailscaleNames []string) error {\n\t// before we begin, loop through the existing automation policies\n\t// and, for any ACMEIssuers we find, make sure they're filled in\n\t// with default values that might be specified in our HTTP app; also\n\t// look for a base (or \"catch-all\" / default) automation policy,\n\t// which we're going to essentially require, to make sure it has\n\t// those defaults, too\n\tvar basePolicy *caddytls.AutomationPolicy\n\tvar foundBasePolicy bool\n\tif app.tlsApp.Automation == nil {\n\t\t// we will expect this to not be nil from now on\n\t\tapp.tlsApp.Automation = new(caddytls.AutomationConfig)\n\t}\n\tfor _, ap := range app.tlsApp.Automation.Policies {\n\t\t// on-demand policies can have the tailscale manager added implicitly\n\t\t// if there's no explicit manager configured -- for convenience\n\t\tif ap.OnDemand 
&& len(ap.Managers) == 0 {\n\t\t\tvar ts caddytls.Tailscale\n\t\t\tif err := ts.Provision(ctx); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tap.Managers = []certmagic.Manager{ts}\n\n\t\t\t// must reprovision the automation policy so that the underlying\n\t\t\t// CertMagic config knows about the updated Managers\n\t\t\tif err := ap.Provision(app.tlsApp); err != nil {\n\t\t\t\treturn fmt.Errorf(\"re-provisioning automation policy: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\t// set up default issuer -- honestly, this is only\n\t\t// really necessary because the HTTP app is opinionated\n\t\t// and has settings which could be inferred as new\n\t\t// defaults for the ACMEIssuer in the TLS app (such as\n\t\t// what the HTTP and HTTPS ports are)\n\t\tif ap.Issuers == nil {\n\t\t\tvar err error\n\t\t\tap.Issuers, err = caddytls.DefaultIssuersProvisioned(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tfor _, iss := range ap.Issuers {\n\t\t\tif acmeIssuer, ok := iss.(acmeCapable); ok {\n\t\t\t\terr := app.fillInACMEIssuer(acmeIssuer.GetACMEIssuer())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// while we're here, is this the catch-all/base policy?\n\t\tif !foundBasePolicy && len(ap.SubjectsRaw) == 0 {\n\t\t\tbasePolicy = ap\n\t\t\tfoundBasePolicy = true\n\t\t}\n\t}\n\n\t// Ensure automation policies' CertMagic configs are rebuilt when\n\t// ACME issuer templates may have been modified above (for example,\n\t// alternate ports filled in by the HTTP app). If a policy is already\n\t// provisioned, perform a lightweight rebuild of the CertMagic config\n\t// so issuers receive SetConfig with the updated templates; otherwise\n\t// run a normal Provision to initialize the policy.\n\tfor i, ap := range app.tlsApp.Automation.Policies {\n\t\t// If the policy is already provisioned, rebuild only the CertMagic\n\t\t// config so issuers get SetConfig with updated templates. 
Otherwise\n\t\t// provision the policy normally (which may load modules).\n\t\tif ap.IsProvisioned() {\n\t\t\tif err := ap.RebuildCertMagic(app.tlsApp); err != nil {\n\t\t\t\treturn fmt.Errorf(\"rebuilding certmagic config for automation policy %d: %v\", i, err)\n\t\t\t}\n\t\t} else {\n\t\t\tif err := ap.Provision(app.tlsApp); err != nil {\n\t\t\t\treturn fmt.Errorf(\"provisioning automation policy %d after auto-HTTPS defaults: %v\", i, err)\n\t\t\t}\n\t\t}\n\t}\n\n\tif basePolicy == nil {\n\t\t// no base policy found; we will make one\n\t\tbasePolicy = new(caddytls.AutomationPolicy)\n\t}\n\n\t// if the basePolicy has an existing ACMEIssuer (particularly to\n\t// include any type that embeds/wraps an ACMEIssuer), let's use it\n\t// (I guess we just use the first one?), otherwise we'll make one\n\tvar baseACMEIssuer *caddytls.ACMEIssuer\n\tfor _, iss := range basePolicy.Issuers {\n\t\tif acmeWrapper, ok := iss.(acmeCapable); ok {\n\t\t\tbaseACMEIssuer = acmeWrapper.GetACMEIssuer()\n\t\t\tbreak\n\t\t}\n\t}\n\tif baseACMEIssuer == nil {\n\t\t// note that this happens if basePolicy.Issuers is empty\n\t\t// OR if it is not empty but does not have an ACMEIssuer\n\t\tbaseACMEIssuer = new(caddytls.ACMEIssuer)\n\t}\n\n\t// if there was a base policy to begin with, we already\n\t// filled in its issuer's defaults; if there wasn't, we\n\t// still need to do that\n\tif !foundBasePolicy {\n\t\terr := app.fillInACMEIssuer(baseACMEIssuer)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// never overwrite any other issuer that might already be configured\n\tif basePolicy.Issuers == nil {\n\t\tvar err error\n\t\tbasePolicy.Issuers, err = caddytls.DefaultIssuersProvisioned(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfor _, iss := range basePolicy.Issuers {\n\t\t\tif acmeIssuer, ok := iss.(acmeCapable); ok {\n\t\t\t\terr := app.fillInACMEIssuer(acmeIssuer.GetACMEIssuer())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif 
!foundBasePolicy {\n\t\t// there was no base policy to begin with, so add\n\t\t// our base/catch-all policy - this will serve the\n\t\t// public-looking names as well as any other names\n\t\t// that don't match any other policy\n\t\terr := app.tlsApp.AddAutomationPolicy(basePolicy)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\t// a base policy already existed; we might have\n\t\t// changed it, so re-provision it\n\t\terr := basePolicy.Provision(app.tlsApp)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// public names will be taken care of by the base (catch-all)\n\t// policy, which we've ensured exists if not already specified;\n\t// internal names, however, need to be handled by an internal\n\t// issuer, which we need to make a new policy for, scoped to\n\t// just those names (yes, this logic is a bit asymmetric, but\n\t// it works, because our assumed/natural default issuer is an\n\t// ACME issuer)\n\tif len(internalNames) > 0 {\n\t\tinternalIssuer := new(caddytls.InternalIssuer)\n\n\t\t// shallow-copy the base policy; we want to inherit\n\t\t// from it, not replace it... this takes two lines to\n\t\t// overrule compiler optimizations\n\t\tpolicyCopy := *basePolicy\n\t\tnewPolicy := &policyCopy\n\n\t\t// very important to provision the issuer, since we\n\t\t// are bypassing the JSON-unmarshaling step\n\t\tif err := internalIssuer.Provision(ctx); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// this policy should apply only to the given names\n\t\t// and should use our issuer -- yes, this overrides\n\t\t// any issuer that may have been set in the base\n\t\t// policy, but we do this because these names do not\n\t\t// already have a policy associated with them, which\n\t\t// is easy to do; consider the case of a Caddyfile\n\t\t// that has only \"localhost\" as a name, but sets the\n\t\t// default/global ACME CA to the Let's Encrypt staging\n\t\t// endpoint... 
they probably don't intend to change the\n\t\t// fundamental set of names that setting applies to,\n\t\t// rather they just want to change the CA for the set\n\t\t// of names that would normally use the production API;\n\t\t// anyway, that gets into the weeds a bit...\n\t\tnewPolicy.SubjectsRaw = internalNames\n\t\tnewPolicy.Issuers = []certmagic.Issuer{internalIssuer}\n\t\terr := app.tlsApp.AddAutomationPolicy(newPolicy)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// tailscale names go in their own automation policies because\n\t// they require on-demand TLS to be enabled, which we obviously\n\t// can't enable for everything\n\tif len(tailscaleNames) > 0 {\n\t\tpolicyCopy := *basePolicy\n\t\tnewPolicy := &policyCopy\n\n\t\tvar ts caddytls.Tailscale\n\t\tif err := ts.Provision(ctx); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tnewPolicy.SubjectsRaw = tailscaleNames\n\t\tnewPolicy.Issuers = nil\n\t\tnewPolicy.Managers = append(newPolicy.Managers, ts)\n\t\terr := app.tlsApp.AddAutomationPolicy(newPolicy)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// we just changed a lot of stuff, so double-check that it's all good\n\terr := app.tlsApp.Validate()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// fillInACMEIssuer fills in default values into acmeIssuer that\n// are defined in app; these values at time of writing are just\n// app.HTTPPort and app.HTTPSPort, which are used by ACMEIssuer.\n// Sure, we could just use the global/CertMagic defaults, but if\n// a user has configured those ports in the HTTP app, it makes\n// sense to use them in the TLS app too, even if they forgot (or\n// were too lazy, like me) to set it in each automation policy\n// that uses it -- this just makes things a little less tedious\n// for the user, so they don't have to repeat those ports in\n// potentially many places. This function never steps on existing\n// config values. If any changes are made, acmeIssuer is\n// reprovisioned. 
acmeIssuer must not be nil.\nfunc (app *App) fillInACMEIssuer(acmeIssuer *caddytls.ACMEIssuer) error {\n\tif app.HTTPPort > 0 || app.HTTPSPort > 0 {\n\t\tif acmeIssuer.Challenges == nil {\n\t\t\tacmeIssuer.Challenges = new(caddytls.ChallengesConfig)\n\t\t}\n\t}\n\tif app.HTTPPort > 0 {\n\t\tif acmeIssuer.Challenges.HTTP == nil {\n\t\t\tacmeIssuer.Challenges.HTTP = new(caddytls.HTTPChallengeConfig)\n\t\t}\n\t\t// don't overwrite existing explicit config\n\t\tif acmeIssuer.Challenges.HTTP.AlternatePort == 0 {\n\t\t\tacmeIssuer.Challenges.HTTP.AlternatePort = app.HTTPPort\n\t\t}\n\t}\n\tif app.HTTPSPort > 0 {\n\t\tif acmeIssuer.Challenges.TLSALPN == nil {\n\t\t\tacmeIssuer.Challenges.TLSALPN = new(caddytls.TLSALPNChallengeConfig)\n\t\t}\n\t\t// don't overwrite existing explicit config\n\t\tif acmeIssuer.Challenges.TLSALPN.AlternatePort == 0 {\n\t\t\tacmeIssuer.Challenges.TLSALPN.AlternatePort = app.HTTPSPort\n\t\t}\n\t}\n\t// we must provision all ACME issuers, even if nothing\n\t// was changed, because we don't know if they are new\n\t// and haven't been provisioned yet; if an ACME issuer\n\t// never gets provisioned, its Agree field stays false,\n\t// which leads to, um, problems later on\n\treturn acmeIssuer.Provision(app.ctx)\n}\n\n// automaticHTTPSPhase2 begins certificate management for\n// all names in the qualifying domain set for each server.\n// This phase must occur after provisioning and at the end\n// of app start, after all the servers have been started.\n// Doing this last ensures that there won't be any race\n// for listeners on the HTTP or HTTPS ports when management\n// is async (if CertMagic's solvers bind to those ports\n// first, then our servers would fail to bind to them,\n// which would be bad, since CertMagic's bindings are\n// temporary and don't serve the user's sites!).\nfunc (app *App) automaticHTTPSPhase2() error {\n\tif len(app.allCertDomains) == 0 {\n\t\treturn nil\n\t}\n\tapp.logger.Info(\"enabling automatic TLS certificate 
management\",\n\t\tzap.Strings(\"domains\", internal.MaxSizeSubjectsListForLog(app.allCertDomains, 1000)),\n\t)\n\terr := app.tlsApp.Manage(app.allCertDomains)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"managing certificates for %d domains: %s\", len(app.allCertDomains), err)\n\t}\n\tapp.allCertDomains = nil // no longer needed; allow GC to deallocate\n\treturn nil\n}\n\nfunc isTailscaleDomain(name string) bool {\n\treturn strings.HasSuffix(strings.ToLower(name), \".ts.net\")\n}\n\ntype acmeCapable interface{ GetACMEIssuer() *caddytls.ACMEIssuer }\n\n// getFirstHostFromRoute traverses a route's matchers to find the Host rule.\n// Since we are dealing with internally generated redirect routes, the host\n// is typically the first string within the MatchHost.\nfunc getFirstHostFromRoute(r Route) string {\n\tfor _, matcherSet := range r.MatcherSets {\n\t\tfor _, m := range matcherSet {\n\t\t\t// Check if the matcher is of type MatchHost (value or pointer)\n\t\t\tswitch hm := m.(type) {\n\t\t\tcase MatchHost:\n\t\t\t\tif len(hm) > 0 {\n\t\t\t\t\treturn hm[0]\n\t\t\t\t}\n\t\t\tcase *MatchHost:\n\t\t\t\tif len(*hm) > 0 {\n\t\t\t\t\treturn (*hm)[0]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t// Return an empty string if it's a catch-all route (no specific host)\n\treturn \"\"\n}\n"
  },
  {
    "path": "modules/caddyhttp/caddyauth/argon2id.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyauth\n\nimport (\n\t\"crypto/rand\"\n\t\"crypto/subtle\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"golang.org/x/crypto/argon2\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Argon2idHash{})\n}\n\nconst (\n\targon2idName           = \"argon2id\"\n\tdefaultArgon2idTime    = 1\n\tdefaultArgon2idMemory  = 46 * 1024\n\tdefaultArgon2idThreads = 1\n\tdefaultArgon2idKeylen  = 32\n\tdefaultSaltLength      = 16\n)\n\n// Argon2idHash implements the Argon2id password hashing.\ntype Argon2idHash struct {\n\tsalt    []byte\n\ttime    uint32\n\tmemory  uint32\n\tthreads uint8\n\tkeyLen  uint32\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Argon2idHash) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.authentication.hashes.argon2id\",\n\t\tNew: func() caddy.Module { return new(Argon2idHash) },\n\t}\n}\n\n// Compare checks if the plaintext password matches the given Argon2id hash.\nfunc (Argon2idHash) Compare(hashed, plaintext []byte) (bool, error) {\n\targHash, storedKey, err := DecodeHash(hashed)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\n\tcomputedKey := argon2.IDKey(\n\t\tplaintext,\n\t\targHash.salt,\n\t\targHash.time,\n\t\targHash.memory,\n\t\targHash.threads,\n\t\targHash.keyLen,\n\t)\n\n\treturn 
subtle.ConstantTimeCompare(storedKey, computedKey) == 1, nil\n}\n\n// Hash generates an Argon2id hash of the given plaintext using the configured parameters and salt.\nfunc (b Argon2idHash) Hash(plaintext []byte) ([]byte, error) {\n\tif b.salt == nil {\n\t\ts, err := generateSalt(defaultSaltLength)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tb.salt = s\n\t}\n\n\tkey := argon2.IDKey(\n\t\tplaintext,\n\t\tb.salt,\n\t\tb.time,\n\t\tb.memory,\n\t\tb.threads,\n\t\tb.keyLen,\n\t)\n\n\thash := fmt.Sprintf(\n\t\t\"$argon2id$v=%d$m=%d,t=%d,p=%d$%s$%s\",\n\t\targon2.Version,\n\t\tb.memory,\n\t\tb.time,\n\t\tb.threads,\n\t\tbase64.RawStdEncoding.EncodeToString(b.salt),\n\t\tbase64.RawStdEncoding.EncodeToString(key),\n\t)\n\n\treturn []byte(hash), nil\n}\n\n// DecodeHash parses an Argon2id PHC string into an Argon2idHash struct and returns the struct along with the derived key.\nfunc DecodeHash(hash []byte) (*Argon2idHash, []byte, error) {\n\tparts := strings.Split(string(hash), \"$\")\n\tif len(parts) != 6 {\n\t\treturn nil, nil, fmt.Errorf(\"invalid hash format\")\n\t}\n\n\tif parts[1] != argon2idName {\n\t\treturn nil, nil, fmt.Errorf(\"unsupported variant: %s\", parts[1])\n\t}\n\n\tversion, err := strconv.Atoi(strings.TrimPrefix(parts[2], \"v=\"))\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"invalid version: %w\", err)\n\t}\n\tif version != argon2.Version {\n\t\treturn nil, nil, fmt.Errorf(\"incompatible version: %d\", version)\n\t}\n\n\tparams := strings.Split(parts[3], \",\")\n\tif len(params) != 3 {\n\t\treturn nil, nil, fmt.Errorf(\"invalid parameters\")\n\t}\n\n\tmem, err := strconv.ParseUint(strings.TrimPrefix(params[0], \"m=\"), 10, 32)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"invalid memory parameter: %w\", err)\n\t}\n\n\titer, err := strconv.ParseUint(strings.TrimPrefix(params[1], \"t=\"), 10, 32)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"invalid iterations parameter: %w\", err)\n\t}\n\n\tthreads, err := 
strconv.ParseUint(strings.TrimPrefix(params[2], \"p=\"), 10, 8)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"invalid parallelism parameter: %w\", err)\n\t}\n\n\tsalt, err := base64.RawStdEncoding.Strict().DecodeString(parts[4])\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"decode salt: %w\", err)\n\t}\n\n\tkey, err := base64.RawStdEncoding.Strict().DecodeString(parts[5])\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"decode key: %w\", err)\n\t}\n\n\treturn &Argon2idHash{\n\t\tsalt:    salt,\n\t\ttime:    uint32(iter),\n\t\tmemory:  uint32(mem),\n\t\tthreads: uint8(threads),\n\t\tkeyLen:  uint32(len(key)),\n\t}, key, nil\n}\n\n// FakeHash returns a constant fake hash used to mitigate timing attacks.\nfunc (Argon2idHash) FakeHash() []byte {\n\t// hashed with the following command:\n\t// caddy hash-password --plaintext \"antitiming\" --algorithm \"argon2id\"\n\treturn []byte(\"$argon2id$v=19$m=47104,t=1,p=1$P2nzckEdTZ3bxCiBCkRTyA$xQL3Z32eo5jKl7u5tcIsnEKObYiyNZQQf5/4sAau6Pg\")\n}\n\n// Interface guards\nvar (\n\t_ Comparer = (*Argon2idHash)(nil)\n\t_ Hasher   = (*Argon2idHash)(nil)\n)\n\nfunc generateSalt(length int) ([]byte, error) {\n\tsalt := make([]byte, length)\n\tif _, err := rand.Read(salt); err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to generate salt: %w\", err)\n\t}\n\treturn salt, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/caddyauth/basicauth.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyauth\n\nimport (\n\t\"encoding/base64\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"fmt\"\n\tweakrand \"math/rand/v2\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"golang.org/x/sync/singleflight\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(HTTPBasicAuth{})\n}\n\n// HTTPBasicAuth facilitates HTTP basic authentication.\ntype HTTPBasicAuth struct {\n\t// The algorithm with which the passwords are hashed. Default: bcrypt\n\tHashRaw json.RawMessage `json:\"hash,omitempty\" caddy:\"namespace=http.authentication.hashes inline_key=algorithm\"`\n\n\t// The list of accounts to authenticate.\n\tAccountList []Account `json:\"accounts,omitempty\"`\n\n\t// The name of the realm. 
Default: restricted\n\tRealm string `json:\"realm,omitempty\"`\n\n\t// If non-nil, a mapping of plaintext passwords to their\n\t// hashes will be cached in memory (with random eviction).\n\t// This can greatly improve the performance of traffic-heavy\n\t// servers that use secure password hashing algorithms, with\n\t// the downside that plaintext passwords will be stored in\n\t// memory for a longer time (this should not be a problem\n\t// as long as your machine is not compromised, at which point\n\t// all bets are off, since basicauth necessitates plaintext\n\t// passwords being received over the wire anyway). Note that\n\t// a cache hit does not mean it is a valid password.\n\tHashCache *Cache `json:\"hash_cache,omitempty\"`\n\n\tAccounts map[string]Account `json:\"-\"`\n\tHash     Comparer           `json:\"-\"`\n\n\t// fakePassword is used when a given user is not found,\n\t// so that timing side-channels can be mitigated: it gives\n\t// us something to hash and compare even if the user does\n\t// not exist, which should have similar timing as a user\n\t// account that does exist.\n\tfakePassword []byte\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (HTTPBasicAuth) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.authentication.providers.http_basic\",\n\t\tNew: func() caddy.Module { return new(HTTPBasicAuth) },\n\t}\n}\n\n// Provision provisions the HTTP basic auth provider.\nfunc (hba *HTTPBasicAuth) Provision(ctx caddy.Context) error {\n\tif hba.HashRaw == nil {\n\t\thba.HashRaw = json.RawMessage(`{\"algorithm\": \"bcrypt\"}`)\n\t}\n\n\t// load password hasher\n\thasherIface, err := ctx.LoadModule(hba, \"HashRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading password hasher module: %v\", err)\n\t}\n\thba.Hash = hasherIface.(Comparer)\n\n\tif hba.Hash == nil {\n\t\treturn fmt.Errorf(\"hash is required\")\n\t}\n\n\t// if supported, generate a fake password we can compare against if needed\n\tif hasher, ok 
:= hba.Hash.(Hasher); ok {\n\t\thba.fakePassword = hasher.FakeHash()\n\t}\n\n\trepl := caddy.NewReplacer()\n\n\t// load account list\n\thba.Accounts = make(map[string]Account)\n\tfor i, acct := range hba.AccountList {\n\t\tif _, ok := hba.Accounts[acct.Username]; ok {\n\t\t\treturn fmt.Errorf(\"account %d: username is not unique: %s\", i, acct.Username)\n\t\t}\n\n\t\tacct.Username = repl.ReplaceAll(acct.Username, \"\")\n\t\tacct.Password = repl.ReplaceAll(acct.Password, \"\")\n\n\t\tif acct.Username == \"\" || acct.Password == \"\" {\n\t\t\treturn fmt.Errorf(\"account %d: username and password are required\", i)\n\t\t}\n\n\t\t// TODO: Remove support for redundantly-encoded b64-encoded hashes\n\t\t// Passwords starting with '$' are likely in Modular Crypt Format,\n\t\t// so we don't need to base64 decode them. But historically, we\n\t\t// required redundant base64, so we try to decode it otherwise.\n\t\tif strings.HasPrefix(acct.Password, \"$\") {\n\t\t\tacct.password = []byte(acct.Password)\n\t\t} else {\n\t\t\tacct.password, err = base64.StdEncoding.DecodeString(acct.Password)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"base64-decoding password: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\thba.Accounts[acct.Username] = acct\n\t}\n\thba.AccountList = nil // allow GC to deallocate\n\n\tif hba.HashCache != nil {\n\t\thba.HashCache.cache = make(map[string]bool)\n\t\thba.HashCache.mu = new(sync.RWMutex)\n\t\thba.HashCache.g = new(singleflight.Group)\n\t}\n\n\treturn nil\n}\n\n// Authenticate validates the user credentials in req and returns the user, if valid.\nfunc (hba HTTPBasicAuth) Authenticate(w http.ResponseWriter, req *http.Request) (User, bool, error) {\n\tusername, plaintextPasswordStr, ok := req.BasicAuth()\n\tif !ok {\n\t\treturn hba.promptForCredentials(w, nil)\n\t}\n\n\taccount, accountExists := hba.Accounts[username]\n\tif !accountExists {\n\t\t// don't return early if account does not exist; we want\n\t\t// to try to avoid side-channels that leak existence, 
so\n\t\t// we use a fake password to simulate realistic CPU cycles\n\t\taccount.password = hba.fakePassword\n\t}\n\n\tsame, err := hba.correctPassword(account, []byte(plaintextPasswordStr))\n\tif err != nil || !same || !accountExists {\n\t\treturn hba.promptForCredentials(w, err)\n\t}\n\n\treturn User{ID: username}, true, nil\n}\n\nfunc (hba HTTPBasicAuth) correctPassword(account Account, plaintextPassword []byte) (bool, error) {\n\tcompare := func() (bool, error) {\n\t\treturn hba.Hash.Compare(account.password, plaintextPassword)\n\t}\n\n\t// if no caching is enabled, simply return the result of hashing + comparing\n\tif hba.HashCache == nil {\n\t\treturn compare()\n\t}\n\n\t// compute a cache key that is unique for these input parameters\n\tcacheKey := hex.EncodeToString(append(account.password, plaintextPassword...))\n\n\t// fast track: if the result of the input is already cached, use it\n\thba.HashCache.mu.RLock()\n\tsame, ok := hba.HashCache.cache[cacheKey]\n\thba.HashCache.mu.RUnlock()\n\tif ok {\n\t\treturn same, nil\n\t}\n\t// slow track: do the expensive op, then add it to the cache\n\t// but perform it in a singleflight group so that multiple\n\t// parallel requests using the same password don't cause a\n\t// thundering herd problem by all performing the same hashing\n\t// operation before the first one finishes and caches it.\n\tv, err, _ := hba.HashCache.g.Do(cacheKey, func() (any, error) {\n\t\treturn compare()\n\t})\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tsame = v.(bool)\n\thba.HashCache.mu.Lock()\n\tif len(hba.HashCache.cache) >= 1000 {\n\t\thba.HashCache.makeRoom() // keep cache size under control\n\t}\n\thba.HashCache.cache[cacheKey] = same\n\thba.HashCache.mu.Unlock()\n\n\treturn same, nil\n}\n\nfunc (hba HTTPBasicAuth) promptForCredentials(w http.ResponseWriter, err error) (User, bool, error) {\n\t// browsers show a message that says something like:\n\t// \"The website says: <realm>\"\n\t// which is kinda dumb, but whatever.\n\trealm 
:= hba.Realm\n\tif realm == \"\" {\n\t\trealm = \"restricted\"\n\t}\n\tw.Header().Set(\"WWW-Authenticate\", fmt.Sprintf(`Basic realm=\"%s\"`, realm))\n\treturn User{}, false, err\n}\n\n// Cache enables caching of basic auth results. This is especially\n// helpful for secure password hashes which can be expensive to\n// compute on every HTTP request.\ntype Cache struct {\n\tmu *sync.RWMutex\n\tg  *singleflight.Group\n\n\t// map of concatenated hashed password + plaintext password, to result\n\tcache map[string]bool\n}\n\n// makeRoom deletes about 1/10 of the items in the cache\n// in order to keep its size under control. It must not be\n// called without a lock on c.mu.\nfunc (c *Cache) makeRoom() {\n\t// we delete more than just 1 entry so that we don't have\n\t// to do this on every request; assuming the capacity of\n\t// the cache is on a long tail, we can save a lot of CPU\n\t// time by doing a whole bunch of deletions now and then\n\t// we won't have to do them again for a while\n\tnumToDelete := max(len(c.cache)/10, 1)\n\tfor deleted := 0; deleted <= numToDelete; deleted++ {\n\t\t// Go maps are \"nondeterministic\" not actually random,\n\t\t// so although we could just chop off the \"front\" of the\n\t\t// map with less code, this is a heavily skewed eviction\n\t\t// strategy; generating random numbers is cheap and\n\t\t// ensures a much better distribution.\n\t\t//nolint:gosec\n\t\trnd := weakrand.IntN(len(c.cache))\n\t\ti := 0\n\t\tfor key := range c.cache {\n\t\t\tif i == rnd {\n\t\t\t\tdelete(c.cache, key)\n\t\t\t\tbreak\n\t\t\t}\n\t\t\ti++\n\t\t}\n\t}\n}\n\n// Comparer is a type that can securely compare\n// a plaintext password with a hashed password\n// in constant-time. Comparers should hash the\n// plaintext password and then use constant-time\n// comparison.\ntype Comparer interface {\n\t// Compare returns true if the result of hashing\n\t// plaintextPassword is hashedPassword, false\n\t// otherwise. 
An error is returned only if\n\t// there is a technical/configuration error.\n\tCompare(hashedPassword, plaintextPassword []byte) (bool, error)\n}\n\n// Hasher is a type that can generate a secure hash\n// given a plaintext. Hashing modules which implement\n// this interface can be used with the hash-password\n// subcommand as well as benefitting from anti-timing\n// features. A hasher also returns a fake hash which\n// can be used for timing side-channel mitigation.\ntype Hasher interface {\n\tHash(plaintext []byte) ([]byte, error)\n\tFakeHash() []byte\n}\n\n// Account contains a username and password.\ntype Account struct {\n\t// A user's username.\n\tUsername string `json:\"username\"`\n\n\t// The user's hashed password, in Modular Crypt Format (with `$` prefix)\n\t// or base64-encoded.\n\tPassword string `json:\"password\"` //nolint:gosec // false positive, this is a hashed password\n\n\tpassword []byte\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner = (*HTTPBasicAuth)(nil)\n\t_ Authenticator     = (*HTTPBasicAuth)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/caddyauth/bcrypt.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyauth\n\nimport (\n\t\"errors\"\n\n\t\"golang.org/x/crypto/bcrypt\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(BcryptHash{})\n}\n\n// defaultBcryptCost (14) strikes a solid balance between security, usability, and hardware performance\nconst (\n\tbcryptName        = \"bcrypt\"\n\tdefaultBcryptCost = 14\n)\n\n// BcryptHash implements the bcrypt hash.\ntype BcryptHash struct {\n\t// cost is the bcrypt hashing difficulty factor (work factor).\n\t// Higher values increase computation time and security.\n\tcost int\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (BcryptHash) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.authentication.hashes.bcrypt\",\n\t\tNew: func() caddy.Module { return new(BcryptHash) },\n\t}\n}\n\n// Compare compares passwords.\nfunc (BcryptHash) Compare(hashed, plaintext []byte) (bool, error) {\n\terr := bcrypt.CompareHashAndPassword(hashed, plaintext)\n\tif errors.Is(err, bcrypt.ErrMismatchedHashAndPassword) {\n\t\treturn false, nil\n\t}\n\tif err != nil {\n\t\treturn false, err\n\t}\n\treturn true, nil\n}\n\n// Hash hashes plaintext using a random salt.\nfunc (b BcryptHash) Hash(plaintext []byte) ([]byte, error) {\n\tcost := b.cost\n\tif cost < bcrypt.MinCost || cost > bcrypt.MaxCost {\n\t\tcost = 
defaultBcryptCost\n\t}\n\n\treturn bcrypt.GenerateFromPassword(plaintext, cost)\n}\n\n// FakeHash returns a fake hash.\nfunc (BcryptHash) FakeHash() []byte {\n\t// hashed with the following command:\n\t// caddy hash-password --plaintext \"antitiming\" --algorithm \"bcrypt\"\n\treturn []byte(\"$2a$14$X3ulqf/iGxnf1k6oMZ.RZeJUoqI9PX2PM4rS5lkIKJXduLGXGPrt6\")\n}\n\n// Interface guards\nvar (\n\t_ Comparer = (*BcryptHash)(nil)\n\t_ Hasher   = (*BcryptHash)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/caddyauth/caddyauth.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyauth\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Authentication{})\n}\n\n// Authentication is a middleware which provides user authentication.\n// Rejects requests with HTTP 401 if the request is not authenticated.\n//\n// After a successful authentication, the placeholder\n// `{http.auth.user.id}` will be set to the username, and also\n// `{http.auth.user.*}` placeholders may be set for any authentication\n// modules that provide user metadata.\n//\n// In case of an error, the placeholder `{http.auth.<provider>.error}`\n// will be set to the error message returned by the authentication\n// provider.\n//\n// Its API is still experimental and may be subject to change.\ntype Authentication struct {\n\t// A set of authentication providers. 
If none are specified,\n\t// all requests will always be unauthenticated.\n\tProvidersRaw caddy.ModuleMap `json:\"providers,omitempty\" caddy:\"namespace=http.authentication.providers\"`\n\n\tProviders map[string]Authenticator `json:\"-\"`\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Authentication) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.authentication\",\n\t\tNew: func() caddy.Module { return new(Authentication) },\n\t}\n}\n\n// Provision sets up an Authentication module by initializing its logger,\n// loading and registering all configured authentication providers.\nfunc (a *Authentication) Provision(ctx caddy.Context) error {\n\ta.logger = ctx.Logger()\n\ta.Providers = make(map[string]Authenticator)\n\tmods, err := ctx.LoadModule(a, \"ProvidersRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading authentication providers: %v\", err)\n\t}\n\tfor modName, modIface := range mods.(map[string]any) {\n\t\ta.Providers[modName] = modIface.(Authenticator)\n\t}\n\treturn nil\n}\n\nfunc (a Authentication) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tvar user User\n\tvar authed bool\n\tvar err error\n\tfor provName, prov := range a.Providers {\n\t\tuser, authed, err = prov.Authenticate(w, r)\n\t\tif err != nil {\n\t\t\tif c := a.logger.Check(zapcore.ErrorLevel, \"auth provider returned error\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"provider\", provName), zap.Error(err))\n\t\t\t}\n\t\t\t// Set the error from the authentication provider in a placeholder,\n\t\t\t// so it can be used in the handle_errors directive.\n\t\t\trepl.Set(\"http.auth.\"+provName+\".error\", err.Error())\n\t\t\tcontinue\n\t\t}\n\t\tif authed {\n\t\t\tbreak\n\t\t}\n\t}\n\tif !authed {\n\t\treturn caddyhttp.Error(http.StatusUnauthorized, fmt.Errorf(\"not 
authenticated\"))\n\t}\n\n\trepl.Set(\"http.auth.user.id\", user.ID)\n\tfor k, v := range user.Metadata {\n\t\trepl.Set(\"http.auth.user.\"+k, v)\n\t}\n\n\treturn next.ServeHTTP(w, r)\n}\n\n// Authenticator is a type which can authenticate a request.\n// If a request was not authenticated, it returns false. An\n// error is only returned if authenticating the request fails\n// for a technical reason (not for bad/missing credentials).\ntype Authenticator interface {\n\tAuthenticate(http.ResponseWriter, *http.Request) (User, bool, error)\n}\n\n// User represents an authenticated user.\ntype User struct {\n\t// The ID of the authenticated user.\n\tID string\n\n\t// Any other relevant data about this\n\t// user. Keys should be adhere to Caddy\n\t// conventions (snake_casing), as all\n\t// keys will be made available as\n\t// placeholders.\n\tMetadata map[string]string\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Authentication)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Authentication)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/caddyauth/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyauth\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"basicauth\", parseCaddyfile) // deprecated\n\thttpcaddyfile.RegisterHandlerDirective(\"basic_auth\", parseCaddyfile)\n}\n\n// parseCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\tbasic_auth [<matcher>] [<hash_algorithm> [<realm>]] {\n//\t    <username> <hashed_password>\n//\t    ...\n//\t}\n//\n// If no hash algorithm is supplied, bcrypt will be assumed.\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\n\t// \"basicauth\" is deprecated, replaced by \"basic_auth\"\n\tif h.Val() == \"basicauth\" {\n\t\tcaddy.Log().Named(\"config.adapter.caddyfile\").Warn(\"the 'basicauth' directive is deprecated, please use 'basic_auth' instead!\")\n\t}\n\n\tvar ba HTTPBasicAuth\n\tba.HashCache = new(Cache)\n\n\tvar cmp Comparer\n\targs := h.RemainingArgs()\n\n\tvar hashName string\n\tswitch len(args) {\n\tcase 0:\n\t\thashName = bcryptName\n\tcase 1:\n\t\thashName = args[0]\n\tcase 2:\n\t\thashName = args[0]\n\t\tba.Realm = args[1]\n\tdefault:\n\t\treturn nil, h.ArgErr()\n\t}\n\n\tswitch hashName {\n\tcase bcryptName:\n\t\tcmp = BcryptHash{}\n\tcase argon2idName:\n\t\tcmp = Argon2idHash{}\n\tdefault:\n\t\treturn nil, h.Errf(\"unrecognized hash algorithm: %s\", hashName)\n\t}\n\n\tba.HashRaw = caddyconfig.JSONModuleObject(cmp, \"algorithm\", hashName, nil)\n\n\tfor h.NextBlock(0) {\n\t\tusername := h.Val()\n\n\t\tvar b64Pwd string\n\t\th.Args(&b64Pwd)\n\t\tif h.NextArg() {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\n\t\tif username == \"\" || b64Pwd == \"\" {\n\t\t\treturn nil, h.Err(\"username and password cannot be empty or missing\")\n\t\t}\n\n\t\tba.AccountList = append(ba.AccountList, Account{\n\t\t\tUsername: username,\n\t\t\tPassword: b64Pwd,\n\t\t})\n\t}\n\n\treturn Authentication{\n\t\tProvidersRaw: caddy.ModuleMap{\n\t\t\t\"http_basic\": caddyconfig.JSON(ba, nil),\n\t\t},\n\t}, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/caddyauth/command.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyauth\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/signal\"\n\n\t\"github.com/spf13/cobra\"\n\t\"golang.org/x/term\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddycmd.RegisterCommand(caddycmd.Command{\n\t\tName:  \"hash-password\",\n\t\tUsage: \"[--plaintext <password>] [--algorithm <argon2id|bcrypt>] [--bcrypt-cost <difficulty>] [--argon2id-time <iterations>] [--argon2id-memory <KiB>] [--argon2id-threads <n>] [--argon2id-keylen <bytes>]\",\n\t\tShort: \"Hashes a password and writes base64\",\n\t\tLong: `\nConvenient way to hash a plaintext password. The resulting\nhash is written to stdout as a base64 string.\n\n--plaintext\n    The password to hash. If omitted, it will be read from stdin.\n    If Caddy is attached to a controlling TTY, the input will not be echoed.\n\n--algorithm\n    Selects the hashing algorithm. Valid options are:\n      * 'argon2id' (recommended for modern security)\n      * 'bcrypt'  (legacy, slower, configurable cost)\n\nbcrypt-specific parameters:\n\n--bcrypt-cost\n    Sets the bcrypt hashing difficulty. Higher values increase security by\n    making the hash computation slower and more CPU-intensive.\n    Must be within the valid range [bcrypt.MinCost, bcrypt.MaxCost]. 
\n    If omitted or invalid, the default cost is used.\n\nArgon2id-specific parameters:\n\n--argon2id-time\n    Number of iterations to perform. Increasing this makes\n    hashing slower and more resistant to brute-force attacks.\n\n--argon2id-memory\n    Amount of memory to use during hashing.\n    Larger values increase resistance to GPU/ASIC attacks.\n\n--argon2id-threads\n    Number of CPU threads to use. Increase for faster hashing\n    on multi-core systems.\n\n--argon2id-keylen\n    Length of the resulting hash in bytes. Longer keys increase\n    security but slightly increase storage size.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"plaintext\", \"p\", \"\", \"The plaintext password\")\n\t\t\tcmd.Flags().StringP(\"algorithm\", \"a\", bcryptName, \"Name of the hash algorithm\")\n\t\t\tcmd.Flags().Int(\"bcrypt-cost\", defaultBcryptCost, \"Bcrypt hashing cost (only used with 'bcrypt' algorithm)\")\n\t\t\tcmd.Flags().Uint32(\"argon2id-time\", defaultArgon2idTime, \"Number of iterations for Argon2id hashing. Increasing this makes the hash slower and more resistant to brute-force attacks.\")\n\t\t\tcmd.Flags().Uint32(\"argon2id-memory\", defaultArgon2idMemory, \"Memory to use in KiB for Argon2id hashing. Larger values increase resistance to GPU/ASIC attacks.\")\n\t\t\tcmd.Flags().Uint8(\"argon2id-threads\", defaultArgon2idThreads, \"Number of CPU threads to use for Argon2id hashing. Increase for faster hashing on multi-core systems.\")\n\t\t\tcmd.Flags().Uint32(\"argon2id-keylen\", defaultArgon2idKeylen, \"Length of the resulting Argon2id hash in bytes. 
Longer hashes increase security but slightly increase storage size.\")\n\t\t\tcmd.RunE = caddycmd.WrapCommandFuncForCobra(cmdHashPassword)\n\t\t},\n\t})\n}\n\nfunc cmdHashPassword(fs caddycmd.Flags) (int, error) {\n\tvar err error\n\n\talgorithm := fs.String(\"algorithm\")\n\tplaintext := []byte(fs.String(\"plaintext\"))\n\tbcryptCost := fs.Int(\"bcrypt-cost\")\n\n\tif len(plaintext) == 0 {\n\t\tfd := int(os.Stdin.Fd())\n\t\tif term.IsTerminal(fd) {\n\t\t\t// ensure the terminal state is restored on SIGINT\n\t\t\tstate, _ := term.GetState(fd)\n\t\t\tc := make(chan os.Signal, 1)\n\t\t\tsignal.Notify(c, os.Interrupt)\n\t\t\tgo func() {\n\t\t\t\t<-c\n\t\t\t\t_ = term.Restore(fd, state)\n\t\t\t\tos.Exit(caddy.ExitCodeFailedStartup)\n\t\t\t}()\n\t\t\tdefer signal.Stop(c)\n\n\t\t\tfmt.Fprint(os.Stderr, \"Enter password: \")\n\t\t\tplaintext, err = term.ReadPassword(fd)\n\t\t\tfmt.Fprintln(os.Stderr)\n\t\t\tif err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t\t}\n\n\t\t\tfmt.Fprint(os.Stderr, \"Confirm password: \")\n\t\t\tconfirmation, err := term.ReadPassword(fd)\n\t\t\tfmt.Fprintln(os.Stderr)\n\t\t\tif err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t\t}\n\n\t\t\tif !bytes.Equal(plaintext, confirmation) {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"password does not match\")\n\t\t\t}\n\t\t} else {\n\t\t\trd := bufio.NewReader(os.Stdin)\n\t\t\tplaintext, err = rd.ReadBytes('\\n')\n\t\t\tif err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t\t}\n\n\t\t\tplaintext = plaintext[:len(plaintext)-1] // Trailing newline\n\t\t}\n\n\t\tif len(plaintext) == 0 {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"plaintext is required\")\n\t\t}\n\t}\n\n\tvar hash []byte\n\tvar hashString string\n\tswitch algorithm {\n\tcase bcryptName:\n\t\thash, err = BcryptHash{cost: bcryptCost}.Hash(plaintext)\n\t\thashString = string(hash)\n\tcase argon2idName:\n\t\ttime, err := fs.GetUint32(\"argon2id-time\")\n\t\tif err != 
nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"failed to get argon2id time parameter: %w\", err)\n\t\t}\n\t\tmemory, err := fs.GetUint32(\"argon2id-memory\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"failed to get argon2id memory parameter: %w\", err)\n\t\t}\n\t\tthreads, err := fs.GetUint8(\"argon2id-threads\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"failed to get argon2id threads parameter: %w\", err)\n\t\t}\n\t\tkeyLen, err := fs.GetUint32(\"argon2id-keylen\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"failed to get argon2id keylen parameter: %w\", err)\n\t\t}\n\n\t\t// note: err here is the case-scoped variable (declared with :=), which the\n\t\t// check after the switch cannot see, so handle the hashing error in-place\n\t\t// rather than silently discarding it\n\t\thash, err = Argon2idHash{\n\t\t\ttime:    time,\n\t\t\tmemory:  memory,\n\t\t\tthreads: threads,\n\t\t\tkeyLen:  keyLen,\n\t\t}.Hash(plaintext)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\n\t\thashString = string(hash)\n\tdefault:\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"unrecognized hash algorithm: %s\", algorithm)\n\t}\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tfmt.Println(hashString)\n\n\treturn 0, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/caddyhttp.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(tlsPlaceholderWrapper{})\n}\n\n// RequestMatcher is a type that can match to a request.\n// A route matcher MUST NOT modify the request, with the\n// only exception being its context.\n//\n// Deprecated: Matchers should now implement RequestMatcherWithError.\n// You may remove any interface guards for RequestMatcher\n// but keep your Match() methods for backwards compatibility.\ntype RequestMatcher interface {\n\tMatch(*http.Request) bool\n}\n\n// RequestMatcherWithError is like RequestMatcher but can return an error.\n// An error during matching will abort the request middleware chain and\n// invoke the error middleware chain.\n//\n// This will eventually replace RequestMatcher. 
Matcher modules\n// should implement both interfaces, and once all modules have\n// been updated to use RequestMatcherWithError, the RequestMatcher\n// interface may eventually be dropped.\ntype RequestMatcherWithError interface {\n\tMatchWithError(*http.Request) (bool, error)\n}\n\n// Handler is like http.Handler except ServeHTTP may return an error.\n//\n// If any handler encounters an error, it should be returned for proper\n// handling. Return values should be propagated down the middleware chain\n// by returning it unchanged. Returned errors should not be re-wrapped\n// if they are already HandlerError values.\ntype Handler interface {\n\tServeHTTP(http.ResponseWriter, *http.Request) error\n}\n\n// HandlerFunc is a convenience type like http.HandlerFunc.\ntype HandlerFunc func(http.ResponseWriter, *http.Request) error\n\n// ServeHTTP implements the Handler interface.\nfunc (f HandlerFunc) ServeHTTP(w http.ResponseWriter, r *http.Request) error {\n\treturn f(w, r)\n}\n\n// Middleware chains one Handler to the next by being passed\n// the next Handler in the chain.\ntype Middleware func(Handler) Handler\n\n// MiddlewareHandler is like Handler except it takes as a third\n// argument the next handler in the chain. The next handler will\n// never be nil, but may be a no-op handler if this is the last\n// handler in the chain. Handlers which act as middleware should\n// call the next handler's ServeHTTP method so as to propagate\n// the request down the chain properly. 
Handlers which act as\n// responders (content origins) need not invoke the next handler,\n// since the last handler in the chain should be the first to\n// write the response.\ntype MiddlewareHandler interface {\n\tServeHTTP(http.ResponseWriter, *http.Request, Handler) error\n}\n\n// emptyHandler is used as a no-op handler.\nvar emptyHandler Handler = HandlerFunc(func(_ http.ResponseWriter, req *http.Request) error {\n\tSetVar(req.Context(), \"unhandled\", true)\n\treturn nil\n})\n\n// An implicit suffix middleware that, if reached, sets the StatusCode to the\n// error stored in the ErrorCtxKey. This is to prevent situations where the\n// Error chain does not actually handle the error (for instance, it matches only\n// on some errors). See #3053\nvar errorEmptyHandler Handler = HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\thttpError := r.Context().Value(ErrorCtxKey)\n\tif handlerError, ok := httpError.(HandlerError); ok {\n\t\tw.WriteHeader(handlerError.StatusCode)\n\t} else {\n\t\tw.WriteHeader(http.StatusInternalServerError)\n\t}\n\treturn nil\n})\n\n// ResponseHandler pairs a response matcher with custom handling\n// logic. Either the status code can be changed to something else\n// while using the original response body, or, if a status code\n// is not set, it can execute a custom route list; this is useful\n// for executing handler routes based on the properties of an HTTP\n// response that has not been written out to the client yet.\n//\n// To use this type, provision it at module load time, then when\n// ready to use, match the response against its matcher; if it\n// matches (or doesn't have a matcher), change the status code on\n// the response if configured; otherwise invoke the routes by\n// calling `rh.Routes.Compile(next).ServeHTTP(rw, req)` (or similar).\ntype ResponseHandler struct {\n\t// The response matcher for this handler. 
If empty/nil,\n\t// it always matches.\n\tMatch *ResponseMatcher `json:\"match,omitempty\"`\n\n\t// To write the original response body but with a different\n\t// status code, set this field to the desired status code.\n\t// If set, this takes priority over routes.\n\tStatusCode WeakString `json:\"status_code,omitempty\"`\n\n\t// The list of HTTP routes to execute if no status code is\n\t// specified. If evaluated, the original response body\n\t// will not be written.\n\tRoutes RouteList `json:\"routes,omitempty\"`\n}\n\n// Provision sets up the routes in rh.\nfunc (rh *ResponseHandler) Provision(ctx caddy.Context) error {\n\tif rh.Routes != nil {\n\t\terr := rh.Routes.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// WeakString is a type that unmarshals any JSON value\n// as a string literal, with the following exceptions:\n//\n// 1. actual string values are decoded as strings; and\n// 2. null is decoded as empty string;\n//\n// and provides methods for getting the value as various\n// primitive types. 
However, using this type removes any\n// type safety as far as deserializing JSON is concerned.\ntype WeakString string\n\n// UnmarshalJSON satisfies json.Unmarshaler according to\n// this type's documentation.\nfunc (ws *WeakString) UnmarshalJSON(b []byte) error {\n\tif len(b) == 0 {\n\t\treturn io.EOF\n\t}\n\tif b[0] == byte('\"') && b[len(b)-1] == byte('\"') {\n\t\tvar s string\n\t\terr := json.Unmarshal(b, &s)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t*ws = WeakString(s)\n\t\treturn nil\n\t}\n\tif bytes.Equal(b, []byte(\"null\")) {\n\t\treturn nil\n\t}\n\t*ws = WeakString(b)\n\treturn nil\n}\n\n// MarshalJSON marshals ws as a boolean if true or false,\n// a number if an integer, or a string otherwise.\nfunc (ws WeakString) MarshalJSON() ([]byte, error) {\n\tif ws == \"true\" {\n\t\treturn []byte(\"true\"), nil\n\t}\n\tif ws == \"false\" {\n\t\treturn []byte(\"false\"), nil\n\t}\n\tif num, err := strconv.Atoi(string(ws)); err == nil {\n\t\treturn json.Marshal(num)\n\t}\n\treturn json.Marshal(string(ws))\n}\n\n// Int returns ws as an integer. If ws is not an\n// integer, 0 is returned.\nfunc (ws WeakString) Int() int {\n\tnum, _ := strconv.Atoi(string(ws))\n\treturn num\n}\n\n// Float64 returns ws as a float64. If ws is not a\n// float value, the zero value is returned.\nfunc (ws WeakString) Float64() float64 {\n\tnum, _ := strconv.ParseFloat(string(ws), 64)\n\treturn num\n}\n\n// Bool returns ws as a boolean. If ws is not a\n// boolean, false is returned.\nfunc (ws WeakString) Bool() bool {\n\treturn string(ws) == \"true\"\n}\n\n// String returns ws as a string.\nfunc (ws WeakString) String() string {\n\treturn string(ws)\n}\n\n// StatusCodeMatches returns true if a real HTTP status code matches\n// the configured status code, which may be either a real HTTP status\n// code or an integer representing a class of codes (e.g. 
4 for all\n// 4xx statuses).\nfunc StatusCodeMatches(actual, configured int) bool {\n\tif actual == configured {\n\t\treturn true\n\t}\n\tif configured < 100 &&\n\t\tactual >= configured*100 &&\n\t\tactual < (configured+1)*100 {\n\t\treturn true\n\t}\n\treturn false\n}\n\n// SanitizedPathJoin performs filepath.Join(root, reqPath) that\n// is safe against directory traversal attacks. It uses logic\n// similar to that in the Go standard library, specifically\n// in the implementation of http.Dir. The root is assumed to\n// be a trusted path, but reqPath is not; and the output will\n// never be outside of root. The resulting path can be used\n// with the local file system. If root is empty, the current\n// directory is assumed. If the cleaned request path is deemed\n// not local according to lexical processing (i.e. ignoring links),\n// it will be rejected as unsafe and only the root will be returned.\nfunc SanitizedPathJoin(root, reqPath string) string {\n\tif root == \"\" {\n\t\troot = \".\"\n\t}\n\n\trelPath := path.Clean(\"/\" + reqPath)[1:] // clean path and trim the leading /\n\tif relPath != \"\" && !filepath.IsLocal(relPath) {\n\t\t// path is unsafe (see https://github.com/golang/go/issues/56336#issuecomment-1416214885)\n\t\treturn root\n\t}\n\n\tpath := filepath.Join(root, filepath.FromSlash(relPath))\n\n\t// filepath.Join also cleans the path, and cleaning strips\n\t// the trailing slash, so we need to re-add it afterwards.\n\t// if the length is 1, then it's a path to the root,\n\t// and that should return \".\", so we don't append the separator.\n\tif strings.HasSuffix(reqPath, \"/\") && len(reqPath) > 1 {\n\t\tpath += separator\n\t}\n\n\treturn path\n}\n\n// CleanPath cleans path p according to path.Clean(), but only\n// merges repeated slashes if collapseSlashes is true, and always\n// preserves trailing slashes.\nfunc CleanPath(p string, collapseSlashes bool) string {\n\tif collapseSlashes {\n\t\treturn cleanPath(p)\n\t}\n\n\t// insert an 
invalid/impossible URI character into each two consecutive\n\t// slashes to expand empty path segments; then clean the path as usual,\n\t// and then remove the remaining temporary characters.\n\tconst tmpCh = 0xff\n\tvar sb strings.Builder\n\tfor i, ch := range p {\n\t\tif ch == '/' && i > 0 && p[i-1] == '/' {\n\t\t\tsb.WriteByte(tmpCh)\n\t\t}\n\t\tsb.WriteRune(ch)\n\t}\n\thalfCleaned := cleanPath(sb.String())\n\thalfCleaned = strings.ReplaceAll(halfCleaned, string([]byte{tmpCh}), \"\")\n\n\treturn halfCleaned\n}\n\n// cleanPath does path.Clean(p) but preserves any trailing slash.\nfunc cleanPath(p string) string {\n\tcleaned := path.Clean(p)\n\tif cleaned != \"/\" && strings.HasSuffix(p, \"/\") {\n\t\tcleaned = cleaned + \"/\"\n\t}\n\treturn cleaned\n}\n\n// tlsPlaceholderWrapper is a no-op listener wrapper that marks\n// where the TLS listener should be in a chain of listener wrappers.\n// It should only be used if another listener wrapper must be placed\n// in front of the TLS handshake.\ntype tlsPlaceholderWrapper struct{}\n\nfunc (tlsPlaceholderWrapper) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.listeners.tls\",\n\t\tNew: func() caddy.Module { return new(tlsPlaceholderWrapper) },\n\t}\n}\n\nfunc (tlsPlaceholderWrapper) WrapListener(ln net.Listener) net.Listener { return ln }\n\nfunc (tlsPlaceholderWrapper) UnmarshalCaddyfile(d *caddyfile.Dispenser) error { return nil }\n\nconst (\n\t// DefaultHTTPPort is the default port for HTTP.\n\tDefaultHTTPPort = 80\n\n\t// DefaultHTTPSPort is the default port for HTTPS.\n\tDefaultHTTPSPort = 443\n)\n\nconst separator = string(filepath.Separator)\n\n// Interface guard\nvar (\n\t_ caddy.ListenerWrapper = (*tlsPlaceholderWrapper)(nil)\n\t_ caddyfile.Unmarshaler = (*tlsPlaceholderWrapper)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/caddyhttp_test.go",
    "content": "package caddyhttp\n\nimport (\n\t\"net/url\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"testing\"\n)\n\nfunc TestSanitizedPathJoin(t *testing.T) {\n\t// For reference:\n\t// %2e = .\n\t// %2f = /\n\t// %5c = \\\n\tfor i, tc := range []struct {\n\t\tinputRoot     string\n\t\tinputPath     string\n\t\texpect        string\n\t\texpectWindows string\n\t}{\n\t\t{\n\t\t\tinputPath: \"\",\n\t\t\texpect:    \".\",\n\t\t},\n\t\t{\n\t\t\tinputPath: \"/\",\n\t\t\texpect:    \".\",\n\t\t},\n\t\t{\n\t\t\t// fileserver.MatchFile passes an inputPath of \"//\" for some try_files values.\n\t\t\t// See https://github.com/caddyserver/caddy/issues/6352\n\t\t\tinputPath: \"//\",\n\t\t\texpect:    filepath.FromSlash(\"./\"),\n\t\t},\n\t\t{\n\t\t\tinputPath: \"/foo\",\n\t\t\texpect:    \"foo\",\n\t\t},\n\t\t{\n\t\t\tinputPath: \"/foo/\",\n\t\t\texpect:    filepath.FromSlash(\"foo/\"),\n\t\t},\n\t\t{\n\t\t\tinputPath: \"/foo/bar\",\n\t\t\texpect:    filepath.FromSlash(\"foo/bar\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"/a\",\n\t\t\tinputPath: \"/foo/bar\",\n\t\t\texpect:    filepath.FromSlash(\"/a/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\tinputPath: \"/foo/../bar\",\n\t\t\texpect:    \"bar\",\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"/a/b\",\n\t\t\tinputPath: \"/foo/../bar\",\n\t\t\texpect:    filepath.FromSlash(\"/a/b/bar\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"/a/b\",\n\t\t\tinputPath: \"/..%2fbar\",\n\t\t\texpect:    filepath.FromSlash(\"/a/b/bar\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"/a/b\",\n\t\t\tinputPath: \"/%2e%2e%2fbar\",\n\t\t\texpect:    filepath.FromSlash(\"/a/b/bar\"),\n\t\t},\n\t\t{\n\t\t\t// inputPath fails the IsLocal test so only the root is returned,\n\t\t\t// but with a trailing slash since one was included in inputPath\n\t\t\tinputRoot: \"/a/b\",\n\t\t\tinputPath: \"/%2e%2e%2f%2e%2e%2f\",\n\t\t\texpect:    filepath.FromSlash(\"/a/b/\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"/a/b\",\n\t\t\tinputPath: \"/foo%2fbar\",\n\t\t\texpect:    
filepath.FromSlash(\"/a/b/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"/a/b\",\n\t\t\tinputPath: \"/foo%252fbar\",\n\t\t\texpect:    filepath.FromSlash(\"/a/b/foo%2fbar\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot: \"C:\\\\www\",\n\t\t\tinputPath: \"/foo/bar\",\n\t\t\texpect:    filepath.Join(\"C:\\\\www\", \"foo\", \"bar\"),\n\t\t},\n\t\t{\n\t\t\tinputRoot:     \"C:\\\\www\",\n\t\t\tinputPath:     \"/D:\\\\foo\\\\bar\",\n\t\t\texpect:        filepath.Join(\"C:\\\\www\", \"D:\\\\foo\\\\bar\"),\n\t\t\texpectWindows: \"C:\\\\www\", // inputPath fails IsLocal on Windows\n\t\t},\n\t\t{\n\t\t\tinputRoot:     `C:\\www`,\n\t\t\tinputPath:     `/..\\windows\\win.ini`,\n\t\t\texpect:        `C:\\www/..\\windows\\win.ini`,\n\t\t\texpectWindows: `C:\\www`,\n\t\t},\n\t\t{\n\t\t\tinputRoot:     `C:\\www`,\n\t\t\tinputPath:     `/..\\..\\..\\..\\..\\..\\..\\..\\..\\..\\windows\\win.ini`,\n\t\t\texpect:        `C:\\www/..\\..\\..\\..\\..\\..\\..\\..\\..\\..\\windows\\win.ini`,\n\t\t\texpectWindows: `C:\\www`,\n\t\t},\n\t\t{\n\t\t\tinputRoot:     `C:\\www`,\n\t\t\tinputPath:     `/..%5cwindows%5cwin.ini`,\n\t\t\texpect:        `C:\\www/..\\windows\\win.ini`,\n\t\t\texpectWindows: `C:\\www`,\n\t\t},\n\t\t{\n\t\t\tinputRoot:     `C:\\www`,\n\t\t\tinputPath:     `/..%5c..%5c..%5c..%5c..%5c..%5c..%5c..%5c..%5c..%5cwindows%5cwin.ini`,\n\t\t\texpect:        `C:\\www/..\\..\\..\\..\\..\\..\\..\\..\\..\\..\\windows\\win.ini`,\n\t\t\texpectWindows: `C:\\www`,\n\t\t},\n\t\t{\n\t\t\t// https://github.com/golang/go/issues/56336#issuecomment-1416214885\n\t\t\tinputRoot: \"root\",\n\t\t\tinputPath: \"/a/b/../../c\",\n\t\t\texpect:    filepath.FromSlash(\"root/c\"),\n\t\t},\n\t} {\n\t\t// we don't *need* to use an actual parsed URL, but it\n\t\t// adds some authenticity to the tests since real-world\n\t\t// values will be coming in from URLs; thus, the test\n\t\t// corpus can contain paths as encoded by clients, which\n\t\t// more closely emulates the actual attack vector\n\t\tu, err := 
url.Parse(\"http://test:9999\" + tc.inputPath)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d: invalid URL: %v\", i, err)\n\t\t}\n\t\tactual := SanitizedPathJoin(tc.inputRoot, u.Path)\n\t\tif runtime.GOOS == \"windows\" && tc.expectWindows != \"\" {\n\t\t\ttc.expect = tc.expectWindows\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: SanitizedPathJoin('%s', '%s') =>  '%s' (expected '%s')\",\n\t\t\t\ti, tc.inputRoot, tc.inputPath, actual, tc.expect)\n\t\t}\n\t}\n}\n\nfunc TestCleanPath(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput        string\n\t\tmergeSlashes bool\n\t\texpect       string\n\t}{\n\t\t{\n\t\t\tinput:  \"/foo\",\n\t\t\texpect: \"/foo\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"/foo/\",\n\t\t\texpect: \"/foo/\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"//foo\",\n\t\t\texpect: \"//foo\",\n\t\t},\n\t\t{\n\t\t\tinput:        \"//foo\",\n\t\t\tmergeSlashes: true,\n\t\t\texpect:       \"/foo\",\n\t\t},\n\t\t{\n\t\t\tinput:        \"/foo//bar/\",\n\t\t\tmergeSlashes: true,\n\t\t\texpect:       \"/foo/bar/\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"/foo/./.././bar\",\n\t\t\texpect: \"/bar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"/foo//./..//./bar\",\n\t\t\texpect: \"/foo//bar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"/foo///./..//./bar\",\n\t\t\texpect: \"/foo///bar\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"/foo///./..//.\",\n\t\t\texpect: \"/foo//\",\n\t\t},\n\t\t{\n\t\t\tinput:  \"/foo//./bar\",\n\t\t\texpect: \"/foo//bar\",\n\t\t},\n\t} {\n\t\tactual := CleanPath(tc.input, tc.mergeSlashes)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d [input='%s' mergeSlashes=%t]: Got '%s', expected '%s'\",\n\t\t\t\ti, tc.input, tc.mergeSlashes, actual, tc.expect)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/celmatcher.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"crypto/x509/pkix\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/google/cel-go/cel\"\n\t\"github.com/google/cel-go/common\"\n\t\"github.com/google/cel-go/common/ast\"\n\t\"github.com/google/cel-go/common/operators\"\n\t\"github.com/google/cel-go/common/types\"\n\t\"github.com/google/cel-go/common/types/ref\"\n\t\"github.com/google/cel-go/common/types/traits\"\n\t\"github.com/google/cel-go/ext\"\n\t\"github.com/google/cel-go/interpreter\"\n\t\"github.com/google/cel-go/interpreter/functions\"\n\t\"github.com/google/cel-go/parser\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(MatchExpression{})\n}\n\n// MatchExpression matches requests by evaluating a\n// [CEL](https://github.com/google/cel-spec) expression.\n// This enables complex logic to be expressed using a comfortable,\n// familiar syntax. 
Please refer to\n// [the standard definitions of CEL functions and operators](https://github.com/google/cel-spec/blob/master/doc/langdef.md#standard-definitions).\n//\n// This matcher's JSON interface is actually a string, not a struct.\n// The generated docs are not correct because this type has custom\n// marshaling logic.\n//\n// COMPATIBILITY NOTE: This module is still experimental and is not\n// subject to Caddy's compatibility guarantee.\ntype MatchExpression struct {\n\t// The CEL expression to evaluate. Any Caddy placeholders\n\t// will be expanded and situated into proper CEL function\n\t// calls before evaluating.\n\tExpr string `json:\"expr,omitempty\"`\n\n\t// Name is an optional name for this matcher.\n\t// This is used to populate the name for regexp\n\t// matchers that appear in the expression.\n\tName string `json:\"name,omitempty\"`\n\n\texpandedExpr string\n\tprg          cel.Program\n\tta           types.Adapter\n\n\tlog *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchExpression) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.expression\",\n\t\tNew: func() caddy.Module { return new(MatchExpression) },\n\t}\n}\n\n// MarshalJSON marshals m's expression.\nfunc (m MatchExpression) MarshalJSON() ([]byte, error) {\n\t// if the name is empty, then we can marshal just the expression string\n\tif m.Name == \"\" {\n\t\treturn json.Marshal(m.Expr)\n\t}\n\t// otherwise, we need to marshal the full object, using an\n\t// anonymous struct to avoid infinite recursion\n\treturn json.Marshal(struct {\n\t\tExpr string `json:\"expr\"`\n\t\tName string `json:\"name\"`\n\t}{\n\t\tExpr: m.Expr,\n\t\tName: m.Name,\n\t})\n}\n\n// UnmarshalJSON unmarshals m's expression.\nfunc (m *MatchExpression) UnmarshalJSON(data []byte) error {\n\t// if the data is a string, then it's just the expression\n\tif data[0] == '\"' {\n\t\treturn json.Unmarshal(data, &m.Expr)\n\t}\n\t// otherwise, it's a full object, 
so unmarshal it,\n\t// using a temp map to avoid infinite recursion\n\tvar tmpJson map[string]any\n\terr := json.Unmarshal(data, &tmpJson)\n\t*m = MatchExpression{\n\t\tExpr: tmpJson[\"expr\"].(string),\n\t\tName: tmpJson[\"name\"].(string),\n\t}\n\treturn err\n}\n\n// Provision sets up m.\nfunc (m *MatchExpression) Provision(ctx caddy.Context) error {\n\tm.log = ctx.Logger()\n\n\t// replace placeholders with a function call - this is just some\n\t// light (and possibly naïve) syntactic sugar\n\tm.expandedExpr = placeholderRegexp.ReplaceAllString(m.Expr, placeholderExpansion)\n\n\t// as a second pass, we'll strip the escape character from an escaped\n\t// placeholder, so that it can be used as an input to other CEL functions\n\tm.expandedExpr = escapedPlaceholderRegexp.ReplaceAllString(m.expandedExpr, escapedPlaceholderExpansion)\n\n\t// our type adapter expands CEL's standard type support\n\tm.ta = celTypeAdapter{}\n\n\t// initialize the CEL libraries from the Matcher implementations which\n\t// have been configured to support CEL.\n\tmatcherLibProducers := []CELLibraryProducer{}\n\tfor _, info := range caddy.GetModules(\"http.matchers\") {\n\t\tp, ok := info.New().(CELLibraryProducer)\n\t\tif ok {\n\t\t\tmatcherLibProducers = append(matcherLibProducers, p)\n\t\t}\n\t}\n\n\t// add the matcher name to the context so that the matcher name\n\t// can be used by regexp matchers being provisioned\n\tctx = ctx.WithValue(MatcherNameCtxKey, m.Name)\n\n\t// Assemble the compilation and program options from the different library\n\t// producers into a single cel.Library implementation.\n\tmatcherEnvOpts := []cel.EnvOption{}\n\tmatcherProgramOpts := []cel.ProgramOption{}\n\tfor _, producer := range matcherLibProducers {\n\t\tl, err := producer.CELLibrary(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"error initializing CEL library for %T: %v\", producer, err)\n\t\t}\n\t\tmatcherEnvOpts = append(matcherEnvOpts, l.CompileOptions()...)\n\t\tmatcherProgramOpts = 
append(matcherProgramOpts, l.ProgramOptions()...)\n\t}\n\tmatcherLib := cel.Lib(NewMatcherCELLibrary(matcherEnvOpts, matcherProgramOpts))\n\n\t// create the CEL environment\n\tenv, err := cel.NewEnv(\n\t\tcel.Function(CELPlaceholderFuncName, cel.SingletonBinaryBinding(m.caddyPlaceholderFunc), cel.Overload(\n\t\t\tCELPlaceholderFuncName+\"_httpRequest_string\",\n\t\t\t[]*cel.Type{httpRequestObjectType, cel.StringType},\n\t\t\tcel.AnyType,\n\t\t)),\n\t\tcel.Variable(CELRequestVarName, httpRequestObjectType),\n\t\tcel.CustomTypeAdapter(m.ta),\n\t\text.Strings(),\n\t\text.Bindings(),\n\t\text.Lists(),\n\t\text.Math(),\n\t\tmatcherLib,\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"setting up CEL environment: %v\", err)\n\t}\n\n\t// parse and type-check the expression\n\tchecked, issues := env.Compile(m.expandedExpr)\n\tif issues.Err() != nil {\n\t\treturn fmt.Errorf(\"compiling CEL program: %s\", issues.Err())\n\t}\n\n\t// request matching is a boolean operation, so we don't really know\n\t// what to do if the expression returns a non-boolean type\n\tif checked.OutputType() != cel.BoolType {\n\t\treturn fmt.Errorf(\"CEL request matcher expects return type of bool, not %s\", checked.OutputType())\n\t}\n\n\t// compile the \"program\"\n\tm.prg, err = env.Program(checked, cel.EvalOptions(cel.OptOptimize))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"compiling CEL program: %s\", err)\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchExpression) Match(r *http.Request) bool {\n\tmatch, err := m.MatchWithError(r)\n\tif err != nil {\n\t\tSetVar(r.Context(), MatcherErrorVarKey, err)\n\t}\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchExpression) MatchWithError(r *http.Request) (bool, error) {\n\tcelReq := celHTTPRequest{r}\n\tout, _, err := m.prg.Eval(celReq)\n\tif err != nil {\n\t\tm.log.Error(\"evaluating expression\", zap.Error(err))\n\t\treturn false, err\n\t}\n\tif outBool, ok := out.Value().(bool); ok 
{\n\t\treturn outBool, nil\n\t}\n\treturn false, nil\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchExpression) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume matcher name\n\n\t// if there are multiple args, then we need to keep the raw\n\t// tokens because the user may have used quotes within their\n\t// CEL expression (e.g. strings) and we should retain that\n\tif d.CountRemainingArgs() > 1 {\n\t\tm.Expr = strings.Join(d.RemainingArgsRaw(), \" \")\n\t\treturn nil\n\t}\n\n\t// there should at least be one arg\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\n\t// if there's only one token, then we can safely grab the\n\t// cleaned token (no quotes) and use that as the expression\n\t// because there's no valid CEL expression that is only a\n\t// quoted string; commonly quotes are used in Caddyfile to\n\t// define the expression\n\tm.Expr = d.Val()\n\n\t// use the named matcher's name to fill regexp\n\t// matchers' names by default\n\tm.Name = d.GetContextString(caddyfile.MatcherNameCtxKey)\n\n\treturn nil\n}\n\n// caddyPlaceholderFunc implements the custom CEL function that accesses the\n// Replacer on a request and gets values from it.\nfunc (m MatchExpression) caddyPlaceholderFunc(lhs, rhs ref.Val) ref.Val {\n\tcelReq, ok := lhs.(celHTTPRequest)\n\tif !ok {\n\t\treturn types.NewErr(\n\t\t\t\"invalid request of type '%v' to %s(request, placeholderVarName)\",\n\t\t\tlhs.Type(),\n\t\t\tCELPlaceholderFuncName,\n\t\t)\n\t}\n\tphStr, ok := rhs.(types.String)\n\tif !ok {\n\t\treturn types.NewErr(\n\t\t\t\"invalid placeholder variable name of type '%v' to %s(request, placeholderVarName)\",\n\t\t\trhs.Type(),\n\t\t\tCELPlaceholderFuncName,\n\t\t)\n\t}\n\n\trepl := celReq.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tval, _ := repl.Get(string(phStr))\n\n\treturn m.ta.NativeToValue(val)\n}\n\n// httpRequestCELType is the type representation of a native HTTP request.\nvar httpRequestCELType = 
cel.ObjectType(\"http.Request\", traits.ReceiverType)\n\n// celHTTPRequest wraps an http.Request with ref.Val interface methods.\n//\n// This type also implements the interpreter.Activation interface which\n// drops allocation costs for CEL expression evaluations by roughly half.\ntype celHTTPRequest struct{ *http.Request }\n\nfunc (cr celHTTPRequest) ResolveName(name string) (any, bool) {\n\tif name == CELRequestVarName {\n\t\treturn cr, true\n\t}\n\treturn nil, false\n}\n\nfunc (cr celHTTPRequest) Parent() interpreter.Activation {\n\treturn nil\n}\n\nfunc (cr celHTTPRequest) ConvertToNative(typeDesc reflect.Type) (any, error) {\n\treturn cr.Request, nil\n}\n\nfunc (celHTTPRequest) ConvertToType(typeVal ref.Type) ref.Val {\n\tpanic(\"not implemented\")\n}\n\nfunc (cr celHTTPRequest) Equal(other ref.Val) ref.Val {\n\tif o, ok := other.Value().(celHTTPRequest); ok {\n\t\treturn types.Bool(o.Request == cr.Request)\n\t}\n\treturn types.ValOrErr(other, \"%v is not comparable type\", other)\n}\nfunc (celHTTPRequest) Type() ref.Type { return httpRequestCELType }\nfunc (cr celHTTPRequest) Value() any  { return cr }\n\nvar pkixNameCELType = cel.ObjectType(\"pkix.Name\", traits.ReceiverType)\n\n// celPkixName wraps a pkix.Name with\n// methods to satisfy the ref.Val interface.\ntype celPkixName struct{ *pkix.Name }\n\nfunc (pn celPkixName) ConvertToNative(typeDesc reflect.Type) (any, error) {\n\treturn pn.Name, nil\n}\n\nfunc (pn celPkixName) ConvertToType(typeVal ref.Type) ref.Val {\n\tif typeVal.TypeName() == \"string\" {\n\t\treturn types.String(pn.Name.String())\n\t}\n\tpanic(\"not implemented\")\n}\n\nfunc (pn celPkixName) Equal(other ref.Val) ref.Val {\n\tif o, ok := other.Value().(string); ok {\n\t\treturn types.Bool(pn.Name.String() == o)\n\t}\n\treturn types.ValOrErr(other, \"%v is not comparable type\", other)\n}\nfunc (celPkixName) Type() ref.Type { return pkixNameCELType }\nfunc (pn celPkixName) Value() any  { return pn }\n\n// celTypeAdapter can adapt our 
custom types to a CEL value.\ntype celTypeAdapter struct{}\n\nfunc (celTypeAdapter) NativeToValue(value any) ref.Val {\n\tswitch v := value.(type) {\n\tcase celHTTPRequest:\n\t\treturn v\n\tcase pkix.Name:\n\t\treturn celPkixName{&v}\n\tcase time.Time:\n\t\treturn types.Timestamp{Time: v}\n\tcase error:\n\t\treturn types.WrapErr(v)\n\t}\n\treturn types.DefaultTypeAdapter.NativeToValue(value)\n}\n\n// CELLibraryProducer provides CEL libraries that expose a Matcher\n// implementation as a first class function within the CEL expression\n// matcher.\ntype CELLibraryProducer interface {\n\t// CELLibrary creates a cel.Library which makes it possible to use the\n\t// target object within CEL expression matchers.\n\tCELLibrary(caddy.Context) (cel.Library, error)\n}\n\n// CELMatcherImpl creates a new cel.Library based on the following pieces of\n// data:\n//\n//   - macroName: the function name to be used within CEL. This will be a macro\n//     and not a function proper.\n//   - funcName: the function overload name generated by the CEL macro used to\n//     represent the matcher.\n//   - matcherDataTypes: the argument types to the macro.\n//   - fac: a matcherFactory implementation which converts from CEL constant\n//     values to a Matcher instance.\n//\n// Note, macro names and function names must not collide with other macros or\n// functions exposed within CEL expressions, or an error will be produced\n// during the expression matcher plan time.\n//\n// The existing CELMatcherImpl support methods are configured to support a\n// limited set of function signatures. 
For strong type validation you may need\n// to provide a custom macro which does a more detailed analysis of the CEL\n// literal provided to the macro as an argument.\nfunc CELMatcherImpl(macroName, funcName string, matcherDataTypes []*cel.Type, fac any) (cel.Library, error) {\n\trequestType := cel.ObjectType(\"http.Request\")\n\tvar macro parser.Macro\n\tswitch len(matcherDataTypes) {\n\tcase 1:\n\t\tmatcherDataType := matcherDataTypes[0]\n\t\tswitch matcherDataType.String() {\n\t\tcase \"list(string)\":\n\t\t\tmacro = parser.NewGlobalVarArgMacro(macroName, celMatcherStringListMacroExpander(funcName))\n\t\tcase cel.StringType.String():\n\t\t\tmacro = parser.NewGlobalMacro(macroName, 1, celMatcherStringMacroExpander(funcName))\n\t\tcase CELTypeJSON.String():\n\t\t\tmacro = parser.NewGlobalMacro(macroName, 1, celMatcherJSONMacroExpander(funcName))\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unsupported matcher data type: %s\", matcherDataType)\n\t\t}\n\tcase 2:\n\t\tif matcherDataTypes[0] == cel.StringType && matcherDataTypes[1] == cel.StringType {\n\t\t\tmacro = parser.NewGlobalMacro(macroName, 2, celMatcherStringListMacroExpander(funcName))\n\t\t\tmatcherDataTypes = []*cel.Type{cel.ListType(cel.StringType)}\n\t\t} else {\n\t\t\treturn nil, fmt.Errorf(\"unsupported matcher data type: %s, %s\", matcherDataTypes[0], matcherDataTypes[1])\n\t\t}\n\tcase 3:\n\t\t// nolint:gosec // false positive, impossible to be out of bounds; see: https://github.com/securego/gosec/issues/1525\n\t\tif matcherDataTypes[0] == cel.StringType && matcherDataTypes[1] == cel.StringType && matcherDataTypes[2] == cel.StringType {\n\t\t\tmacro = parser.NewGlobalMacro(macroName, 3, celMatcherStringListMacroExpander(funcName))\n\t\t\tmatcherDataTypes = []*cel.Type{cel.ListType(cel.StringType)}\n\t\t} else {\n\t\t\t// nolint:gosec // false positive, impossible to be out of bounds; see: https://github.com/securego/gosec/issues/1525\n\t\t\treturn nil, fmt.Errorf(\"unsupported matcher data type: %s, 
%s, %s\", matcherDataTypes[0], matcherDataTypes[1], matcherDataTypes[2])\n\t\t}\n\t}\n\tenvOptions := []cel.EnvOption{\n\t\tcel.Macros(macro),\n\t\tcel.Function(funcName,\n\t\t\tcel.Overload(funcName, append([]*cel.Type{requestType}, matcherDataTypes...), cel.BoolType),\n\t\t\tcel.SingletonBinaryBinding(CELMatcherRuntimeFunction(funcName, fac))),\n\t}\n\tprogramOptions := []cel.ProgramOption{\n\t\tcel.CustomDecorator(CELMatcherDecorator(funcName, fac)),\n\t}\n\treturn NewMatcherCELLibrary(envOptions, programOptions), nil\n}\n\n// CELMatcherFactory converts a constant CEL value into a RequestMatcher.\n// Deprecated: Use CELMatcherWithErrorFactory instead.\ntype CELMatcherFactory = func(data ref.Val) (RequestMatcher, error)\n\n// CELMatcherWithErrorFactory converts a constant CEL value into a RequestMatcherWithError.\ntype CELMatcherWithErrorFactory = func(data ref.Val) (RequestMatcherWithError, error)\n\n// matcherCELLibrary is a simplistic configurable cel.Library implementation.\ntype matcherCELLibrary struct {\n\tenvOptions     []cel.EnvOption\n\tprogramOptions []cel.ProgramOption\n}\n\n// NewMatcherCELLibrary creates a matcherLibrary from option sets.\nfunc NewMatcherCELLibrary(envOptions []cel.EnvOption, programOptions []cel.ProgramOption) cel.Library {\n\treturn &matcherCELLibrary{\n\t\tenvOptions:     envOptions,\n\t\tprogramOptions: programOptions,\n\t}\n}\n\nfunc (lib *matcherCELLibrary) CompileOptions() []cel.EnvOption {\n\treturn lib.envOptions\n}\n\nfunc (lib *matcherCELLibrary) ProgramOptions() []cel.ProgramOption {\n\treturn lib.programOptions\n}\n\n// CELMatcherDecorator matches a call overload generated by a CEL macro\n// that takes a single argument, and optimizes the implementation to precompile\n// the matcher and return a function that references the precompiled and\n// provisioned matcher.\nfunc CELMatcherDecorator(funcName string, fac any) interpreter.InterpretableDecorator {\n\treturn func(i interpreter.Interpretable) 
(interpreter.Interpretable, error) {\n\t\tcall, ok := i.(interpreter.InterpretableCall)\n\t\tif !ok {\n\t\t\treturn i, nil\n\t\t}\n\t\tif call.OverloadID() != funcName {\n\t\t\treturn i, nil\n\t\t}\n\t\tcallArgs := call.Args()\n\t\treqAttr, ok := callArgs[0].(interpreter.InterpretableAttribute)\n\t\tif !ok {\n\t\t\treturn nil, errors.New(\"missing 'req' argument\")\n\t\t}\n\t\tnsAttr, ok := reqAttr.Attr().(interpreter.NamespacedAttribute)\n\t\tif !ok {\n\t\t\treturn nil, errors.New(\"missing 'req' argument\")\n\t\t}\n\t\tvarNames := nsAttr.CandidateVariableNames()\n\t\tif len(varNames) != 1 || len(varNames) == 1 && varNames[0] != CELRequestVarName {\n\t\t\treturn nil, errors.New(\"missing 'req' argument\")\n\t\t}\n\t\tmatcherData, ok := callArgs[1].(interpreter.InterpretableConst)\n\t\tif !ok {\n\t\t\t// If the matcher arguments are not constant, then this means\n\t\t\t// they contain a Caddy placeholder reference and the evaluation\n\t\t\t// and matcher provisioning should be handled dynamically.\n\t\t\treturn i, nil\n\t\t}\n\n\t\tif factory, ok := fac.(CELMatcherWithErrorFactory); ok {\n\t\t\tmatcher, err := factory(matcherData.Value())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn interpreter.NewCall(\n\t\t\t\ti.ID(), funcName, funcName+\"_opt\",\n\t\t\t\t[]interpreter.Interpretable{reqAttr},\n\t\t\t\tfunc(args ...ref.Val) ref.Val {\n\t\t\t\t\t// The request value, guaranteed to be of type celHTTPRequest\n\t\t\t\t\tcelReq := args[0]\n\t\t\t\t\t// If needed this call could be changed to convert the value\n\t\t\t\t\t// to a *http.Request using CEL's ConvertToNative method.\n\t\t\t\t\thttpReq := celReq.Value().(celHTTPRequest)\n\t\t\t\t\tmatch, err := matcher.MatchWithError(httpReq.Request)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn types.WrapErr(err)\n\t\t\t\t\t}\n\t\t\t\t\treturn types.Bool(match)\n\t\t\t\t},\n\t\t\t), nil\n\t\t}\n\n\t\tif factory, ok := fac.(CELMatcherFactory); ok {\n\t\t\tmatcher, err := 
factory(matcherData.Value())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn interpreter.NewCall(\n\t\t\t\ti.ID(), funcName, funcName+\"_opt\",\n\t\t\t\t[]interpreter.Interpretable{reqAttr},\n\t\t\t\tfunc(args ...ref.Val) ref.Val {\n\t\t\t\t\t// The request value, guaranteed to be of type celHTTPRequest\n\t\t\t\t\tcelReq := args[0]\n\t\t\t\t\t// If needed this call could be changed to convert the value\n\t\t\t\t\t// to a *http.Request using CEL's ConvertToNative method.\n\t\t\t\t\thttpReq := celReq.Value().(celHTTPRequest)\n\t\t\t\t\tif m, ok := matcher.(RequestMatcherWithError); ok {\n\t\t\t\t\t\tmatch, err := m.MatchWithError(httpReq.Request)\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\treturn types.WrapErr(err)\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn types.Bool(match)\n\t\t\t\t\t}\n\t\t\t\t\treturn types.Bool(matcher.Match(httpReq.Request))\n\t\t\t\t},\n\t\t\t), nil\n\t\t}\n\n\t\treturn nil, fmt.Errorf(\"invalid matcher factory, must be CELMatcherFactory or CELMatcherWithErrorFactory: %T\", fac)\n\t}\n}\n\n// CELMatcherRuntimeFunction creates a function binding for when the input to the matcher\n// is dynamically resolved rather than a set of static constant values.\nfunc CELMatcherRuntimeFunction(funcName string, fac any) functions.BinaryOp {\n\treturn func(celReq, matcherData ref.Val) ref.Val {\n\t\tif factory, ok := fac.(CELMatcherWithErrorFactory); ok {\n\t\t\tmatcher, err := factory(matcherData)\n\t\t\tif err != nil {\n\t\t\t\treturn types.WrapErr(err)\n\t\t\t}\n\t\t\thttpReq := celReq.Value().(celHTTPRequest)\n\t\t\tmatch, err := matcher.MatchWithError(httpReq.Request)\n\t\t\tif err != nil {\n\t\t\t\treturn types.WrapErr(err)\n\t\t\t}\n\t\t\treturn types.Bool(match)\n\t\t}\n\t\tif factory, ok := fac.(CELMatcherFactory); ok {\n\t\t\tmatcher, err := factory(matcherData)\n\t\t\tif err != nil {\n\t\t\t\treturn types.WrapErr(err)\n\t\t\t}\n\t\t\thttpReq := celReq.Value().(celHTTPRequest)\n\t\t\tif m, ok := matcher.(RequestMatcherWithError); ok 
{\n\t\t\t\tmatch, err := m.MatchWithError(httpReq.Request)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn types.WrapErr(err)\n\t\t\t\t}\n\t\t\t\treturn types.Bool(match)\n\t\t\t}\n\t\t\treturn types.Bool(matcher.Match(httpReq.Request))\n\t\t}\n\t\treturn types.NewErr(\"CELMatcherRuntimeFunction invalid matcher factory: %T\", fac)\n\t}\n}\n\n// celMatcherStringListMacroExpander validates that the macro is called\n// with a variable number of string arguments (at least one).\n//\n// The arguments are collected into a single list argument, and the following\n// function call is returned: <funcName>(request, [args])\nfunc celMatcherStringListMacroExpander(funcName string) cel.MacroFactory {\n\treturn func(eh cel.MacroExprFactory, target ast.Expr, args []ast.Expr) (ast.Expr, *common.Error) {\n\t\tmatchArgs := []ast.Expr{}\n\t\tif len(args) == 0 {\n\t\t\treturn nil, eh.NewError(0, \"matcher requires at least one argument\")\n\t\t}\n\t\tfor _, arg := range args {\n\t\t\tif isCELStringExpr(arg) {\n\t\t\t\tmatchArgs = append(matchArgs, arg)\n\t\t\t} else {\n\t\t\t\treturn nil, eh.NewError(arg.ID(), \"matcher arguments must be string constants\")\n\t\t\t}\n\t\t}\n\t\treturn eh.NewCall(funcName, eh.NewIdent(CELRequestVarName), eh.NewList(matchArgs...)), nil\n\t}\n}\n\n// celMatcherStringMacroExpander validates that the macro is called with a single\n// string argument.\n//\n// The following function call is returned: <funcName>(request, arg)\nfunc celMatcherStringMacroExpander(funcName string) parser.MacroExpander {\n\treturn func(eh cel.MacroExprFactory, target ast.Expr, args []ast.Expr) (ast.Expr, *common.Error) {\n\t\tif len(args) != 1 {\n\t\t\treturn nil, eh.NewError(0, \"matcher requires one argument\")\n\t\t}\n\t\tif isCELStringExpr(args[0]) {\n\t\t\treturn eh.NewCall(funcName, eh.NewIdent(CELRequestVarName), args[0]), nil\n\t\t}\n\t\treturn nil, eh.NewError(args[0].ID(), \"matcher argument must be a string literal\")\n\t}\n}\n\n// celMatcherJSONMacroExpander validates that the macro is 
called with a single\n// map literal argument.\n//\n// The following function call is returned: <funcName>(request, arg)\nfunc celMatcherJSONMacroExpander(funcName string) parser.MacroExpander {\n\treturn func(eh cel.MacroExprFactory, target ast.Expr, args []ast.Expr) (ast.Expr, *common.Error) {\n\t\tif len(args) != 1 {\n\t\t\treturn nil, eh.NewError(0, \"matcher requires a map literal argument\")\n\t\t}\n\t\targ := args[0]\n\n\t\tswitch arg.Kind() {\n\t\tcase ast.StructKind:\n\t\t\treturn nil, eh.NewError(arg.ID(),\n\t\t\t\tfmt.Sprintf(\"matcher input must be a map literal, not a %s\", arg.AsStruct().TypeName()))\n\t\tcase ast.MapKind:\n\t\t\tmapExpr := arg.AsMap()\n\t\t\tfor _, entry := range mapExpr.Entries() {\n\t\t\t\tisStringPlaceholder := isCELStringExpr(entry.AsMapEntry().Key())\n\t\t\t\tif !isStringPlaceholder {\n\t\t\t\t\treturn nil, eh.NewError(entry.ID(), \"matcher map keys must be string literals\")\n\t\t\t\t}\n\t\t\t\tisStringListPlaceholder := isCELStringExpr(entry.AsMapEntry().Value()) ||\n\t\t\t\t\tisCELStringListLiteral(entry.AsMapEntry().Value())\n\t\t\t\tif !isStringListPlaceholder {\n\t\t\t\t\treturn nil, eh.NewError(entry.AsMapEntry().Value().ID(), \"matcher map values must be string or list literals\")\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn eh.NewCall(funcName, eh.NewIdent(CELRequestVarName), arg), nil\n\t\tcase ast.UnspecifiedExprKind, ast.CallKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.LiteralKind, ast.SelectKind:\n\t\t\t// appeasing the linter :)\n\t\t}\n\n\t\treturn nil, eh.NewError(arg.ID(), \"matcher requires a map literal argument\")\n\t}\n}\n\n// CELValueToMapStrList converts a CEL value to a map[string][]string\n//\n// Earlier validation stages should guarantee that the value has this type\n// at compile time, and that the runtime value type is map[string]any.\n// The reason for the slight difference in value type is that CEL allows for\n// map literals containing heterogeneous values, in this case string and list\n// of 
string.\nfunc CELValueToMapStrList(data ref.Val) (map[string][]string, error) {\n\t// Prefer map[string]any, but newer cel-go versions may return map[any]any\n\tmapStrType := reflect.TypeFor[map[string]any]()\n\tmapStrRaw, err := data.ConvertToNative(mapStrType)\n\tvar mapStrIface map[string]any\n\tif err != nil {\n\t\t// Try map[any]any and convert keys to strings\n\t\tmapAnyType := reflect.TypeFor[map[any]any]()\n\t\tmapAnyRaw, err2 := data.ConvertToNative(mapAnyType)\n\t\tif err2 != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tmapAnyIface := mapAnyRaw.(map[any]any)\n\t\tmapStrIface = make(map[string]any, len(mapAnyIface))\n\t\tfor k, v := range mapAnyIface {\n\t\t\tks, ok := k.(string)\n\t\t\tif !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"unsupported map key type in header match: %T\", k)\n\t\t\t}\n\t\t\tmapStrIface[ks] = v\n\t\t}\n\t} else {\n\t\tmapStrIface = mapStrRaw.(map[string]any)\n\t}\n\tmapStrListStr := make(map[string][]string, len(mapStrIface))\n\tfor k, v := range mapStrIface {\n\t\tswitch val := v.(type) {\n\t\tcase string:\n\t\t\tmapStrListStr[k] = []string{val}\n\t\tcase types.String:\n\t\t\tmapStrListStr[k] = []string{string(val)}\n\t\tcase []string:\n\t\t\tmapStrListStr[k] = val\n\t\tcase []ref.Val:\n\t\t\tconvVals := make([]string, len(val))\n\t\t\tfor i, elem := range val {\n\t\t\t\tstrVal, ok := elem.(types.String)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, fmt.Errorf(\"unsupported value type in matcher input: %T\", val)\n\t\t\t\t}\n\t\t\t\tconvVals[i] = string(strVal)\n\t\t\t}\n\t\t\tmapStrListStr[k] = convVals\n\t\tcase []any:\n\t\t\tconvVals := make([]string, len(val))\n\t\t\tfor i, elem := range val {\n\t\t\t\tswitch e := elem.(type) {\n\t\t\t\tcase string:\n\t\t\t\t\tconvVals[i] = e\n\t\t\t\tcase types.String:\n\t\t\t\t\tconvVals[i] = string(e)\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, fmt.Errorf(\"unsupported element type in matcher input list: %T\", elem)\n\t\t\t\t}\n\t\t\t}\n\t\t\tmapStrListStr[k] = convVals\n\t\tdefault:\n\t\t\treturn nil, 
fmt.Errorf(\"unsupported value type in matcher input: %T\", val)\n\t\t}\n\t}\n\treturn mapStrListStr, nil\n}\n\n// isCELStringExpr indicates whether the expression is a supported string expression\nfunc isCELStringExpr(e ast.Expr) bool {\n\treturn isCELStringLiteral(e) || isCELCaddyPlaceholderCall(e) || isCELConcatCall(e)\n}\n\n// isCELStringLiteral returns whether the expression is a CEL string literal.\nfunc isCELStringLiteral(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.LiteralKind:\n\t\tconstant := e.AsLiteral()\n\t\tswitch constant.Type() {\n\t\tcase types.StringType:\n\t\t\treturn true\n\t\t}\n\tcase ast.UnspecifiedExprKind, ast.CallKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELCaddyPlaceholderCall returns whether the expression is a caddy placeholder call.\nfunc isCELCaddyPlaceholderCall(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.CallKind:\n\t\tcall := e.AsCall()\n\t\tif call.FunctionName() == CELPlaceholderFuncName {\n\t\t\treturn true\n\t\t}\n\tcase ast.UnspecifiedExprKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.LiteralKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELConcatCall tests whether the expression is a concat function (+) with string, placeholder, or\n// other concat call arguments.\nfunc isCELConcatCall(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.CallKind:\n\t\tcall := e.AsCall()\n\t\tif call.Target().Kind() != ast.UnspecifiedExprKind {\n\t\t\treturn false\n\t\t}\n\t\tif call.FunctionName() != operators.Add {\n\t\t\treturn false\n\t\t}\n\t\tfor _, arg := range call.Args() {\n\t\t\tif !isCELStringExpr(arg) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\tcase ast.UnspecifiedExprKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.LiteralKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// 
appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELStringListLiteral returns whether the expression resolves to a list literal\n// containing only string constants or a placeholder call.\nfunc isCELStringListLiteral(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.ListKind:\n\t\tlist := e.AsList()\n\t\tfor _, elem := range list.Elements() {\n\t\t\tif !isCELStringExpr(elem) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\tcase ast.UnspecifiedExprKind, ast.CallKind, ast.ComprehensionKind, ast.IdentKind, ast.LiteralKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// Variables used for replacing Caddy placeholders in CEL\n// expressions with a proper CEL function call; this is\n// just for syntactic sugar.\nvar (\n\t// The placeholder may not be preceded by a backslash; the expansion\n\t// will include the preceding character if it is not a backslash.\n\tplaceholderRegexp    = regexp.MustCompile(`([^\\\\]|^){([a-zA-Z][\\w.-]+)}`)\n\tplaceholderExpansion = `${1}ph(req, \"${2}\")`\n\n\t// As a second pass, we need to strip the escape character in front of\n\t// the placeholder, if it exists.\n\tescapedPlaceholderRegexp    = regexp.MustCompile(`\\\\{([a-zA-Z][\\w.-]+)}`)\n\tescapedPlaceholderExpansion = `{${1}}`\n\n\tCELTypeJSON = cel.MapType(cel.StringType, cel.DynType)\n)\n\nvar httpRequestObjectType = cel.ObjectType(\"http.Request\")\n\n// The name of the CEL function which accesses Replacer values.\nconst CELPlaceholderFuncName = \"ph\"\n\n// The name of the CEL request variable.\nconst CELRequestVarName = \"req\"\n\nconst MatcherNameCtxKey = \"matcher_name\"\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner       = (*MatchExpression)(nil)\n\t_ RequestMatcherWithError = (*MatchExpression)(nil)\n\t_ caddyfile.Unmarshaler   = (*MatchExpression)(nil)\n\t_ json.Marshaler          = (*MatchExpression)(nil)\n\t_ json.Unmarshaler        = (*MatchExpression)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/celmatcher_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nvar (\n\tclientCert = []byte(`-----BEGIN CERTIFICATE-----\nMIIB9jCCAV+gAwIBAgIBAjANBgkqhkiG9w0BAQsFADAYMRYwFAYDVQQDDA1DYWRk\neSBUZXN0IENBMB4XDTE4MDcyNDIxMzUwNVoXDTI4MDcyMTIxMzUwNVowHTEbMBkG\nA1UEAwwSY2xpZW50LmxvY2FsZG9tYWluMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB\niQKBgQDFDEpzF0ew68teT3xDzcUxVFaTII+jXH1ftHXxxP4BEYBU4q90qzeKFneF\nz83I0nC0WAQ45ZwHfhLMYHFzHPdxr6+jkvKPASf0J2v2HDJuTM1bHBbik5Ls5eq+\nfVZDP8o/VHKSBKxNs8Goc2NTsr5b07QTIpkRStQK+RJALk4x9QIDAQABo0swSTAJ\nBgNVHRMEAjAAMAsGA1UdDwQEAwIHgDAaBgNVHREEEzARgglsb2NhbGhvc3SHBH8A\nAAEwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDQYJKoZIhvcNAQELBQADgYEANSjz2Sk+\neqp31wM9il1n+guTNyxJd+FzVAH+hCZE5K+tCgVDdVFUlDEHHbS/wqb2PSIoouLV\n3Q9fgDkiUod+uIK0IynzIKvw+Cjg+3nx6NQ0IM0zo8c7v398RzB4apbXKZyeeqUH\n9fNwfEi+OoXR6s+upSKobCmLGLGi9Na5s5g=\n-----END CERTIFICATE-----`)\n\n\tmatcherTests = []struct {\n\t\tname              string\n\t\texpression        *MatchExpression\n\t\turlTarget         string\n\t\thttpMethod        string\n\t\thttpHeader        *http.Header\n\t\twantErr           bool\n\t\twantResult        bool\n\t\tclientCertificate []byte\n\t}{\n\t\t{\n\t\t\tname: \"boolean matches succeed for placeholder 
http.request.tls.client.subject\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: \"{http.request.tls.client.subject} == 'CN=client.localdomain'\",\n\t\t\t},\n\t\t\tclientCertificate: clientCert,\n\t\t\turlTarget:         \"https://example.com/foo\",\n\t\t\twantResult:        true,\n\t\t},\n\t\t{\n\t\t\tname: \"header matches (MatchHeader)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header({'Field': 'foo'})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\thttpHeader: &http.Header{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"header matches an escaped placeholder value (MatchHeader)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header({'Field': '\\\\\\{foobar}'})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\thttpHeader: &http.Header{\"Field\": []string{\"{foobar}\"}},\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"header matches a placeholder replaced during the header matcher (MatchHeader)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header({'Field': '\\{http.request.uri.path}'})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\thttpHeader: &http.Header{\"Field\": []string{\"/foo\"}},\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"header error, invalid escape sequence (MatchHeader)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header({'Field': '\\\\{foobar}'})`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"header error, needs to be JSON syntax with field as key (MatchHeader)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header('foo')`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"header_regexp matches (MatchHeaderRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header_regexp('Field', 'fo{2}')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\thttpHeader: &http.Header{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\twantResult: 
true,\n\t\t},\n\t\t{\n\t\t\tname: \"header_regexp matches with name (MatchHeaderRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header_regexp('foo', 'Field', 'fo{2}')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\thttpHeader: &http.Header{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"header_regexp does not match (MatchHeaderRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header_regexp('foo', 'Nope', 'fo{2}')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\thttpHeader: &http.Header{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"header_regexp error (MatchHeaderRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `header_regexp('foo')`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"host matches localhost (MatchHost)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `host('localhost')`,\n\t\t\t},\n\t\t\turlTarget:  \"http://localhost\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"host matches (MatchHost)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `host('*.example.com')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://foo.example.com\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"host does not match (MatchHost)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `host('example.net', '*.example.com')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://foo.example.org\",\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"host error (MatchHost)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `host(80)`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"method does not match (MatchMethod)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `method('PUT')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://foo.example.com\",\n\t\t\thttpMethod: \"GET\",\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"method matches (MatchMethod)\",\n\t\t\texpression: 
&MatchExpression{\n\t\t\t\tExpr: `method('DELETE', 'PUT', 'POST')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://foo.example.com\",\n\t\t\thttpMethod: \"PUT\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"method error not enough arguments (MatchMethod)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `method()`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path matches substring (MatchPath)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path('*substring*')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/substring/bar.txt\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path does not match (MatchPath)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path('/foo')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/bar\",\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"path matches end url fragment (MatchPath)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path('/foo')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/FOO\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path matches end fragment with substring prefix (MatchPath)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path('/foo*')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/FOOOOO\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path matches one of multiple (MatchPath)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path('/foo', '/foo/*', '/bar', '/bar/*', '/baz', '/baz*')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path_regexp with empty regex matches empty path (MatchPathRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path_regexp('')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path_regexp with slash regex matches empty path (MatchPathRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: 
`path_regexp('/')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path_regexp matches end url fragment (MatchPathRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path_regexp('^/foo')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"path_regexp does not match fragment at end (MatchPathRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `path_regexp('bar_at_start', '^/bar')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/bar\",\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"protocol matches (MatchProtocol)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `protocol('HTTPs')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"protocol does not match (MatchProtocol)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `protocol('grpc')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com\",\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"protocol invocation error no args (MatchProtocol)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `protocol()`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"protocol invocation error too many args (MatchProtocol)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `protocol('grpc', 'https')`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"protocol invocation error wrong arg type (MatchProtocol)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `protocol(true)`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query does not match against a specific value (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query({\"debug\": \"1\"})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: false,\n\t\t},\n\t\t{\n\t\t\tname: \"query matches against a specific value (MatchQuery)\",\n\t\t\texpression: 
&MatchExpression{\n\t\t\t\tExpr: `query({\"debug\": \"1\"})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/?debug=1\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query matches against multiple values (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query({\"debug\": [\"0\", \"1\", {http.request.uri.query.debug}+\"1\"]})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/?debug=1\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query matches against a wildcard (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query({\"debug\": [\"*\"]})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/?debug=something\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query matches against a placeholder value (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query({\"debug\": {http.request.uri.query.debug}})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo/?debug=1\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query error bad map key type (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query({1: \"1\"})`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query error typed struct instead of map (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query(Message{field: \"1\"})`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query error bad map value type (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query({\"debug\": 1})`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"query error no args (MatchQuery)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `query()`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"remote_ip error no args (MatchRemoteIP)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `remote_ip()`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"remote_ip single IP match 
(MatchRemoteIP)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `remote_ip('192.0.2.1')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars value (VarsMatcher)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars({'foo': 'bar'})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars matches placeholder, needs escape (VarsMatcher)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars({'\\{http.request.uri.path}': '/foo'})`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars error wrong syntax (VarsMatcher)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars('foo', 'bar')`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars error no args (VarsMatcher)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars()`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars_regexp value (MatchVarsRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars_regexp('foo', 'ba?r')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars_regexp value with name (MatchVarsRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars_regexp('name', 'foo', 'ba?r')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars_regexp matches placeholder, needs escape (MatchVarsRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars_regexp('\\{http.request.uri.path}', '/fo?o')`,\n\t\t\t},\n\t\t\turlTarget:  \"https://example.com/foo\",\n\t\t\twantResult: true,\n\t\t},\n\t\t{\n\t\t\tname: \"vars_regexp error no args (MatchVarsRE)\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: `vars_regexp()`,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n)\n\nfunc TestMatchExpressionMatch(t *testing.T) 
{\n\tfor _, tst := range matcherTests {\n\t\ttc := tst\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\t\t\tdefer cancel()\n\t\t\terr := tc.expression.Provision(caddyCtx)\n\t\t\tif err != nil {\n\t\t\t\tif !tc.wantErr {\n\t\t\t\t\tt.Errorf(\"MatchExpression.Provision() error = %v, wantErr %v\", err, tc.wantErr)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\treq := httptest.NewRequest(tc.httpMethod, tc.urlTarget, nil)\n\t\t\tif tc.httpHeader != nil {\n\t\t\t\treq.Header = *tc.httpHeader\n\t\t\t}\n\t\t\trepl := caddy.NewReplacer()\n\t\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\t\tctx = context.WithValue(ctx, VarsCtxKey, map[string]any{\n\t\t\t\t\"foo\": \"bar\",\n\t\t\t})\n\t\t\treq = req.WithContext(ctx)\n\t\t\taddHTTPVarsToReplacer(repl, req, httptest.NewRecorder())\n\n\t\t\tif tc.clientCertificate != nil {\n\t\t\t\tblock, _ := pem.Decode(clientCert)\n\t\t\t\tif block == nil {\n\t\t\t\t\tt.Fatalf(\"failed to decode PEM certificate\")\n\t\t\t\t}\n\n\t\t\t\tcert, err := x509.ParseCertificate(block.Bytes)\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Fatalf(\"failed to decode PEM certificate: %v\", err)\n\t\t\t\t}\n\n\t\t\t\treq.TLS = &tls.ConnectionState{\n\t\t\t\t\tPeerCertificates: []*x509.Certificate{cert},\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tmatches, err := tc.expression.MatchWithError(req)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"MatchExpression.Match() error = %v\", err)\n\t\t\t}\n\t\t\tif matches != tc.wantResult {\n\t\t\t\tt.Errorf(\"MatchExpression.Match() expected to return '%t', for expression : '%s'\", tc.wantResult, tc.expression.Expr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc BenchmarkMatchExpressionMatch(b *testing.B) {\n\tfor _, tst := range matcherTests {\n\t\ttc := tst\n\t\tif tc.wantErr {\n\t\t\tcontinue\n\t\t}\n\t\tb.Run(tst.name, func(b *testing.B) {\n\t\t\ttc.expression.Provision(caddy.Context{})\n\t\t\treq := httptest.NewRequest(tc.httpMethod, 
tc.urlTarget, nil)\n\t\t\tif tc.httpHeader != nil {\n\t\t\t\treq.Header = *tc.httpHeader\n\t\t\t}\n\t\t\trepl := caddy.NewReplacer()\n\t\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\t\tctx = context.WithValue(ctx, VarsCtxKey, map[string]any{\n\t\t\t\t\"foo\": \"bar\",\n\t\t\t})\n\t\t\treq = req.WithContext(ctx)\n\t\t\taddHTTPVarsToReplacer(repl, req, httptest.NewRecorder())\n\t\t\tif tc.clientCertificate != nil {\n\t\t\t\tblock, _ := pem.Decode(clientCert)\n\t\t\t\tif block == nil {\n\t\t\t\t\tb.Fatalf(\"failed to decode PEM certificate\")\n\t\t\t\t}\n\n\t\t\t\tcert, err := x509.ParseCertificate(block.Bytes)\n\t\t\t\tif err != nil {\n\t\t\t\t\tb.Fatalf(\"failed to decode PEM certificate: %v\", err)\n\t\t\t\t}\n\n\t\t\t\treq.TLS = &tls.ConnectionState{\n\t\t\t\t\tPeerCertificates: []*x509.Certificate{cert},\n\t\t\t\t}\n\t\t\t}\n\t\t\tb.ResetTimer()\n\t\t\tfor b.Loop() {\n\t\t\t\ttc.expression.MatchWithError(req)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestMatchExpressionProvision(t *testing.T) {\n\ttests := []struct {\n\t\tname       string\n\t\texpression *MatchExpression\n\t\twantErr    bool\n\t}{\n\t\t{\n\t\t\tname: \"boolean matches succeed\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: \"{http.request.uri.query} != ''\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"reject expressions with non-boolean results\",\n\t\t\texpression: &MatchExpression{\n\t\t\t\tExpr: \"{http.request.uri.query}\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\t\t\tdefer cancel()\n\t\t\tif err := tt.expression.Provision(ctx); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"MatchExpression.Provision() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/encode/brotli/brotli_precompressed.go",
    "content": "package caddybrotli\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(BrotliPrecompressed{})\n}\n\n// BrotliPrecompressed provides the file extension for files precompressed with brotli encoding.\ntype BrotliPrecompressed struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (BrotliPrecompressed) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.precompressed.br\",\n\t\tNew: func() caddy.Module { return new(BrotliPrecompressed) },\n\t}\n}\n\n// AcceptEncoding returns the name of the encoding as\n// used in the Accept-Encoding request headers.\nfunc (BrotliPrecompressed) AcceptEncoding() string { return \"br\" }\n\n// Suffix returns the filename suffix of precompressed files.\nfunc (BrotliPrecompressed) Suffix() string { return \".br\" }\n\n// Interface guards\nvar _ encode.Precompressed = (*BrotliPrecompressed)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/encode/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage encode\n\nimport (\n\t\"strconv\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"encode\", parseCaddyfile)\n}\n\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tenc := new(Encode)\n\terr := enc.UnmarshalCaddyfile(h.Dispenser)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn enc, nil\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\tencode [<matcher>] <formats...> {\n//\t    gzip           [<level>]\n//\t    zstd\n//\t    minimum_length <length>\n//\t    # response matcher block\n//\t    match {\n//\t        status <code...>\n//\t        header <field> [<value>]\n//\t    }\n//\t    # or response matcher single line syntax\n//\t    match [header <field> [<value>]] | [status <code...>]\n//\t}\n//\n// Specifying the formats on the first line will use those formats' defaults.\nfunc (enc *Encode) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\n\tprefer := []string{}\n\tremainingArgs := d.RemainingArgs()\n\n\tresponseMatchers := make(map[string]caddyhttp.ResponseMatcher)\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"minimum_length\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tminLength, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tenc.MinLength = minLength\n\t\tcase \"match\":\n\t\t\terr := caddyhttp.ParseNamedResponseMatcher(d.NewFromNextSegment(), responseMatchers)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tmatcher := responseMatchers[\"match\"]\n\t\t\tenc.Matcher = &matcher\n\t\tdefault:\n\t\t\tname := d.Val()\n\t\t\tmodID := \"http.encoders.\" + name\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tencoding, ok := unm.(Encoding)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s is not an HTTP encoding; is %T\", modID, unm)\n\t\t\t}\n\t\t\tif enc.EncodingsRaw == nil {\n\t\t\t\tenc.EncodingsRaw = make(caddy.ModuleMap)\n\t\t\t}\n\t\t\tenc.EncodingsRaw[name] = caddyconfig.JSON(encoding, nil)\n\t\t\tprefer = append(prefer, name)\n\t\t}\n\t}\n\n\tif len(prefer) == 0 && len(remainingArgs) == 0 {\n\t\tremainingArgs = []string{\"zstd\", \"gzip\"}\n\t}\n\n\tfor _, arg := range remainingArgs {\n\t\tmod, err := caddy.GetModule(\"http.encoders.\" + arg)\n\t\tif err != nil {\n\t\t\treturn 
d.Errf(\"finding encoder module '%s': %v\", arg, err)\n\t\t}\n\t\tencoding, ok := mod.New().(Encoding)\n\t\tif !ok {\n\t\t\treturn d.Errf(\"module %s is not an HTTP encoding\", arg)\n\t\t}\n\t\tif enc.EncodingsRaw == nil {\n\t\t\tenc.EncodingsRaw = make(caddy.ModuleMap)\n\t\t}\n\t\tenc.EncodingsRaw[arg] = caddyconfig.JSON(encoding, nil)\n\t\tprefer = append(prefer, arg)\n\t}\n\n\t// use the order in which the encoders were defined.\n\tenc.Prefer = prefer\n\n\treturn nil\n}\n\n// Interface guard\nvar _ caddyfile.Unmarshaler = (*Encode)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/encode/encode.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package encode implements an encoder middleware for Caddy. The initial\n// enhancements related to Accept-Encoding, minimum content length, and\n// buffer/writer pools were adapted from https://github.com/xi2/httpgzip\n// then modified heavily to accommodate modular encoders and fix bugs.\n// Code borrowed from that repository is Copyright (c) 2015 The Httpgzip Authors.\npackage encode\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"math\"\n\t\"net/http\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Encode{})\n}\n\n// Encode is a middleware which can encode responses.\ntype Encode struct {\n\t// Selection of compression algorithms to choose from. 
The best one\n\t// will be chosen based on the client's Accept-Encoding header.\n\tEncodingsRaw caddy.ModuleMap `json:\"encodings,omitempty\" caddy:\"namespace=http.encoders\"`\n\n\t// If the client has no strong preference, choose these encodings in order.\n\tPrefer []string `json:\"prefer,omitempty\"`\n\n\t// Only encode responses that are at least this many bytes long.\n\tMinLength int `json:\"minimum_length,omitempty\"`\n\n\t// Only encode responses that match against this ResponseMatcher.\n\t// The default is a collection of text-based Content-Type headers.\n\tMatcher *caddyhttp.ResponseMatcher `json:\"match,omitempty\"`\n\n\twriterPools map[string]*sync.Pool // TODO: these pools do not get reused through config reloads...\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Encode) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.encode\",\n\t\tNew: func() caddy.Module { return new(Encode) },\n\t}\n}\n\n// Provision provisions enc.\nfunc (enc *Encode) Provision(ctx caddy.Context) error {\n\tmods, err := ctx.LoadModule(enc, \"EncodingsRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading encoder modules: %v\", err)\n\t}\n\tfor modName, modIface := range mods.(map[string]any) {\n\t\terr = enc.addEncoding(modIface.(Encoding))\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"adding encoding %s: %v\", modName, err)\n\t\t}\n\t}\n\tif enc.MinLength == 0 {\n\t\tenc.MinLength = defaultMinLength\n\t}\n\n\tif enc.Matcher == nil {\n\t\t// common text-based content types\n\t\t// list based on https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/#compression-between-cloudflare-and-website-visitors\n\t\tenc.Matcher = &caddyhttp.ResponseMatcher{\n\t\t\tHeaders: http.Header{\n\t\t\t\t\"Content-Type\": 
[]string{\n\t\t\t\t\t\"application/atom+xml*\",\n\t\t\t\t\t\"application/eot*\",\n\t\t\t\t\t\"application/font*\",\n\t\t\t\t\t\"application/geo+json*\",\n\t\t\t\t\t\"application/graphql+json*\",\n\t\t\t\t\t\"application/graphql-response+json*\",\n\t\t\t\t\t\"application/javascript*\",\n\t\t\t\t\t\"application/json*\",\n\t\t\t\t\t\"application/ld+json*\",\n\t\t\t\t\t\"application/manifest+json*\",\n\t\t\t\t\t\"application/opentype*\",\n\t\t\t\t\t\"application/otf*\",\n\t\t\t\t\t\"application/rss+xml*\",\n\t\t\t\t\t\"application/truetype*\",\n\t\t\t\t\t\"application/ttf*\",\n\t\t\t\t\t\"application/vnd.api+json*\",\n\t\t\t\t\t\"application/vnd.ms-fontobject*\",\n\t\t\t\t\t\"application/wasm*\",\n\t\t\t\t\t\"application/x-httpd-cgi*\",\n\t\t\t\t\t\"application/x-javascript*\",\n\t\t\t\t\t\"application/x-opentype*\",\n\t\t\t\t\t\"application/x-otf*\",\n\t\t\t\t\t\"application/x-perl*\",\n\t\t\t\t\t\"application/x-protobuf*\",\n\t\t\t\t\t\"application/x-ttf*\",\n\t\t\t\t\t\"application/xhtml+xml*\",\n\t\t\t\t\t\"application/xml*\",\n\t\t\t\t\t\"font/ttf*\",\n\t\t\t\t\t\"font/otf*\",\n\t\t\t\t\t\"image/svg+xml*\",\n\t\t\t\t\t\"image/vnd.microsoft.icon*\",\n\t\t\t\t\t\"image/x-icon*\",\n\t\t\t\t\t\"multipart/bag*\",\n\t\t\t\t\t\"multipart/mixed*\",\n\t\t\t\t\t\"text/*\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Validate ensures that enc's configuration is valid.\nfunc (enc *Encode) Validate() error {\n\tcheck := make(map[string]bool)\n\tfor _, encName := range enc.Prefer {\n\t\tif _, ok := enc.writerPools[encName]; !ok {\n\t\t\treturn fmt.Errorf(\"encoding %s not enabled\", encName)\n\t\t}\n\n\t\tif _, ok := check[encName]; ok {\n\t\t\treturn fmt.Errorf(\"encoding %s is duplicated in prefer\", encName)\n\t\t}\n\t\tcheck[encName] = true\n\t}\n\n\treturn nil\n}\n\nfunc isEncodeAllowed(h http.Header) bool {\n\treturn !strings.Contains(h.Get(\"Cache-Control\"), \"no-transform\")\n}\n\nfunc (enc *Encode) ServeHTTP(w http.ResponseWriter, r *http.Request, next 
caddyhttp.Handler) error {\n\tif isEncodeAllowed(r.Header) {\n\t\tfor _, encName := range AcceptedEncodings(r, enc.Prefer) {\n\t\t\tif _, ok := enc.writerPools[encName]; !ok {\n\t\t\t\tcontinue // encoding not offered\n\t\t\t}\n\t\t\tw = enc.openResponseWriter(encName, w, r.Method == http.MethodConnect)\n\t\t\tdefer w.(*responseWriter).Close()\n\n\t\t\t// to comply with RFC 9110 section 8.8.3(.3), we modify the Etag when encoding\n\t\t\t// by appending a hyphen and the encoder name; the problem is, the client will\n\t\t\t// send back that Etag in a If-None-Match header, but upstream handlers that set\n\t\t\t// the Etag in the first place don't know that we appended to their Etag! so here\n\t\t\t// we have to strip our addition so the upstream handlers can still honor client\n\t\t\t// caches without knowing about our changes...\n\t\t\tif etag := r.Header.Get(\"If-None-Match\"); etag != \"\" && !strings.HasPrefix(etag, \"W/\") {\n\t\t\t\tourSuffix := \"-\" + encName + `\"`\n\t\t\t\tif before, ok := strings.CutSuffix(etag, ourSuffix); ok {\n\t\t\t\t\tetag = before + `\"`\n\t\t\t\t\tr.Header.Set(\"If-None-Match\", etag)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tbreak\n\t\t}\n\t}\n\n\terr := next.ServeHTTP(w, r)\n\t// If there was an error, disable encoding completely\n\t// This prevents corruption when handle_errors processes the response\n\tif err != nil {\n\t\tif ew, ok := w.(*responseWriter); ok {\n\t\t\tew.disabled = true\n\t\t}\n\t}\n\n\treturn err\n}\n\nfunc (enc *Encode) addEncoding(e Encoding) error {\n\tae := e.AcceptEncoding()\n\tif ae == \"\" {\n\t\treturn fmt.Errorf(\"encoder does not specify an Accept-Encoding value\")\n\t}\n\tif _, ok := enc.writerPools[ae]; ok {\n\t\treturn fmt.Errorf(\"encoder already added: %s\", ae)\n\t}\n\tif enc.writerPools == nil {\n\t\tenc.writerPools = make(map[string]*sync.Pool)\n\t}\n\tenc.writerPools[ae] = &sync.Pool{\n\t\tNew: func() any {\n\t\t\treturn e.NewEncoder()\n\t\t},\n\t}\n\treturn nil\n}\n\n// openResponseWriter creates a new 
response writer that may (or may not)\n// encode the response with encodingName. The returned response writer MUST\n// be closed after the handler completes.\nfunc (enc *Encode) openResponseWriter(encodingName string, w http.ResponseWriter, isConnect bool) *responseWriter {\n\tvar rw responseWriter\n\treturn enc.initResponseWriter(&rw, encodingName, w, isConnect)\n}\n\n// initResponseWriter initializes the responseWriter instance\n// allocated in openResponseWriter, enabling mid-stack inlining.\nfunc (enc *Encode) initResponseWriter(rw *responseWriter, encodingName string, wrappedRW http.ResponseWriter, isConnect bool) *responseWriter {\n\tif rww, ok := wrappedRW.(*caddyhttp.ResponseWriterWrapper); ok {\n\t\trw.ResponseWriter = rww\n\t} else {\n\t\trw.ResponseWriter = &caddyhttp.ResponseWriterWrapper{ResponseWriter: wrappedRW}\n\t}\n\trw.encodingName = encodingName\n\trw.config = enc\n\trw.isConnect = isConnect\n\n\treturn rw\n}\n\n// responseWriter writes to an underlying response writer\n// using the encoding represented by encodingName and\n// configured by config.\ntype responseWriter struct {\n\thttp.ResponseWriter\n\tencodingName string\n\tw            Encoder\n\tconfig       *Encode\n\tstatusCode   int\n\twroteHeader  bool\n\tisConnect    bool\n\tdisabled     bool // disable encoding (for error responses)\n}\n\n// WriteHeader stores the status to write when the time comes\n// to actually write the header.\nfunc (rw *responseWriter) WriteHeader(status int) {\n\trw.statusCode = status\n\n\t// See #5849 and RFC 9110 section 15.4.5 (https://www.rfc-editor.org/rfc/rfc9110.html#section-15.4.5) - 304\n\t// Not Modified must have certain headers set as if it was a 200 response, and according to the issue\n\t// we would miss the Vary header in this case when compression was also enabled; note that we set this\n\t// header in the responseWriter.init() method but that is only called if we are writing a response body\n\tif status == http.StatusNotModified && 
!hasVaryValue(rw.Header(), \"Accept-Encoding\") {\n\t\trw.Header().Add(\"Vary\", \"Accept-Encoding\")\n\t}\n\n\t// write status immediately if status is 2xx and the request is CONNECT\n\t// since it means the response is successful.\n\t// see: https://github.com/caddyserver/caddy/issues/6733#issuecomment-2525058845\n\tif rw.isConnect && 200 <= status && status <= 299 {\n\t\trw.ResponseWriter.WriteHeader(status)\n\t\trw.wroteHeader = true\n\t}\n\n\t// write status immediately when status code is informational\n\t// see: https://caddy.community/t/disappear-103-early-hints-response-with-encode-enable-caddy-v2-7-6/23081/5\n\tif 100 <= status && status <= 199 {\n\t\trw.ResponseWriter.WriteHeader(status)\n\t}\n}\n\n// Match determines whether encoding should be done, based on the ResponseMatcher.\nfunc (enc *Encode) Match(rw *responseWriter) bool {\n\treturn enc.Matcher.Match(rw.statusCode, rw.Header())\n}\n\n// FlushError is an alternative Flush returning an error. It delays the actual Flush of the underlying\n// ResponseWriterWrapper until headers have been written.\nfunc (rw *responseWriter) FlushError() error {\n\t// WriteHeader wasn't called and is a CONNECT request, treat it as a success.\n\t// otherwise, wait until header is written.\n\tif rw.isConnect && !rw.wroteHeader && rw.statusCode == 0 {\n\t\trw.WriteHeader(http.StatusOK)\n\t}\n\n\tif !rw.wroteHeader {\n\t\t// flushing the underlying ResponseWriter will write header and status code,\n\t\t// but we need to delay that until we can determine if we must encode and\n\t\t// therefore add the Content-Encoding header; this happens in the first call\n\t\t// to rw.Write (see bug in #4314)\n\t\treturn nil\n\t}\n\t// also flushes the encoder, if any\n\t// see: https://github.com/jjiang-stripe/caddy-slow-gzip\n\tif rw.w != nil {\n\t\terr := rw.w.Flush()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\t//nolint:bodyclose\n\treturn http.NewResponseController(rw.ResponseWriter).Flush()\n}\n\n// Write writes to the response. 
If the response qualifies,\n// it is encoded using the encoder, which is initialized\n// if not done so already.\nfunc (rw *responseWriter) Write(p []byte) (int, error) {\n\t// ignore zero data writes, probably head request\n\tif len(p) == 0 {\n\t\treturn 0, nil\n\t}\n\n\t// WriteHeader wasn't called and is a CONNECT request, treat it as a success.\n\t// otherwise, determine if the response should be compressed.\n\tif rw.isConnect && !rw.wroteHeader && rw.statusCode == 0 {\n\t\trw.WriteHeader(http.StatusOK)\n\t}\n\n\t// sniff content-type and determine content-length\n\tif !rw.wroteHeader && rw.config.MinLength > 0 {\n\t\tvar gtMinLength bool\n\t\tif len(p) > rw.config.MinLength {\n\t\t\tgtMinLength = true\n\t\t} else if cl, err := strconv.Atoi(rw.Header().Get(\"Content-Length\")); err == nil && cl > rw.config.MinLength {\n\t\t\tgtMinLength = true\n\t\t}\n\n\t\tif gtMinLength {\n\t\t\tif rw.Header().Get(\"Content-Type\") == \"\" {\n\t\t\t\trw.Header().Set(\"Content-Type\", http.DetectContentType(p))\n\t\t\t}\n\t\t\trw.init()\n\t\t}\n\t}\n\n\t// before we write to the response, we need to make\n\t// sure the header is written exactly once; we do\n\t// that by checking if a status code has been set,\n\t// and if so, that means we haven't written the\n\t// header OR the default status code will be written\n\t// by the standard library\n\tif !rw.wroteHeader {\n\t\tif rw.statusCode != 0 {\n\t\t\trw.ResponseWriter.WriteHeader(rw.statusCode)\n\t\t}\n\t\trw.wroteHeader = true\n\t}\n\n\tif rw.w != nil {\n\t\treturn rw.w.Write(p)\n\t} else {\n\t\treturn rw.ResponseWriter.Write(p)\n\t}\n}\n\n// used to mask ReadFrom method\ntype writerOnly struct {\n\tio.Writer\n}\n\n// copied from stdlib\nconst sniffLen = 512\n\n// ReadFrom will try to use sendfile to copy from the reader to the response writer.\n// It's only used if the response writer implements io.ReaderFrom and the data can't be compressed.\n// It's based on the stdlib http1.1 response writer implementation.\n// 
https://github.com/golang/go/blob/f4e3ec3dbe3b8e04a058d266adf8e048bab563f2/src/net/http/server.go#L586\nfunc (rw *responseWriter) ReadFrom(r io.Reader) (int64, error) {\n\trf, ok := rw.ResponseWriter.(io.ReaderFrom)\n\t// sendfile can't be used anyway\n\tif !ok {\n\t\t// mask ReadFrom to avoid infinite recursion\n\t\treturn io.Copy(writerOnly{rw}, r)\n\t}\n\n\tvar ns int64\n\t// try to sniff the content type and determine if the response should be compressed\n\tif !rw.wroteHeader && rw.config.MinLength > 0 {\n\t\tvar (\n\t\t\terr error\n\t\t\tbuf [sniffLen]byte\n\t\t)\n\t\t// mask ReadFrom to let Write determine if the response should be compressed\n\t\tns, err = io.CopyBuffer(writerOnly{rw}, io.LimitReader(r, sniffLen), buf[:])\n\t\tif err != nil || ns < sniffLen {\n\t\t\treturn ns, err\n\t\t}\n\t}\n\n\t// the response will be compressed, no sendfile support\n\tif rw.w != nil {\n\t\tnr, err := io.Copy(rw.w, r)\n\t\treturn nr + ns, err\n\t}\n\tnr, err := rf.ReadFrom(r)\n\treturn nr + ns, err\n}\n\n// Close writes any remaining buffered response and\n// deallocates any active resources.\nfunc (rw *responseWriter) Close() error {\n\t// didn't write, probably head request\n\tif !rw.wroteHeader {\n\t\tcl, err := strconv.Atoi(rw.Header().Get(\"Content-Length\"))\n\t\tif err == nil && cl > rw.config.MinLength {\n\t\t\trw.init()\n\t\t}\n\n\t\t// issue #5059, don't write status code if not set explicitly.\n\t\tif rw.statusCode != 0 {\n\t\t\trw.ResponseWriter.WriteHeader(rw.statusCode)\n\t\t}\n\t\trw.wroteHeader = true\n\t}\n\n\tvar err error\n\tif rw.w != nil {\n\t\terr = rw.w.Close()\n\t\trw.w.Reset(nil)\n\t\trw.config.writerPools[rw.encodingName].Put(rw.w)\n\t\trw.w = nil\n\t}\n\treturn err\n}\n\n// Unwrap returns the underlying ResponseWriter.\nfunc (rw *responseWriter) Unwrap() http.ResponseWriter {\n\treturn rw.ResponseWriter\n}\n\n// init should be called before we write a response, if rw.buf has contents.\nfunc (rw *responseWriter) init() {\n\t// Don't initialize 
encoder for error responses\n\t// This prevents response corruption when handle_errors is used\n\tif rw.disabled {\n\t\treturn\n\t}\n\n\thdr := rw.Header()\n\n\tif hdr.Get(\"Content-Encoding\") == \"\" && isEncodeAllowed(hdr) &&\n\t\trw.config.Match(rw) {\n\t\trw.w = rw.config.writerPools[rw.encodingName].Get().(Encoder)\n\t\trw.w.Reset(rw.ResponseWriter)\n\t\thdr.Del(\"Content-Length\") // https://github.com/golang/go/issues/14975\n\t\thdr.Set(\"Content-Encoding\", rw.encodingName)\n\t\tif !hasVaryValue(hdr, \"Accept-Encoding\") {\n\t\t\thdr.Add(\"Vary\", \"Accept-Encoding\")\n\t\t}\n\t\thdr.Del(\"Accept-Ranges\") // we don't know ranges for dynamically-encoded content\n\n\t\t// strong ETags need to be distinct depending on the encoding (\"selected representation\")\n\t\t// see RFC 9110 section 8.8.3.3:\n\t\t// https://www.rfc-editor.org/rfc/rfc9110.html#name-example-entity-tags-varying\n\t\t// I don't know a great way to do this... how about appending? That's a neat trick!\n\t\t// (We have to strip the value we append from If-None-Match headers before\n\t\t// sending subsequent requests back upstream, however, since upstream handlers\n\t\t// don't know about our appending to their Etag since they've already done their work)\n\t\tif etag := hdr.Get(\"Etag\"); etag != \"\" && !strings.HasPrefix(etag, \"W/\") {\n\t\t\tetag = fmt.Sprintf(`%s-%s\"`, strings.TrimSuffix(etag, `\"`), rw.encodingName)\n\t\t\thdr.Set(\"Etag\", etag)\n\t\t}\n\t}\n}\n\nfunc hasVaryValue(hdr http.Header, target string) bool {\n\tfor _, vary := range hdr.Values(\"Vary\") {\n\t\tfor val := range strings.SplitSeq(vary, \",\") {\n\t\t\tif strings.EqualFold(strings.TrimSpace(val), target) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\n// AcceptedEncodings returns the list of encodings that the\n// client supports, in descending order of preference.\n// The client preference via q-factor and the server\n// preference via Prefer setting are taken into account. 
If\n// the Sec-WebSocket-Key header is present then non-identity\n// encodings are not considered. See\n// http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html.\nfunc AcceptedEncodings(r *http.Request, preferredOrder []string) []string {\n\tacceptEncHeader := r.Header.Get(\"Accept-Encoding\")\n\twebsocketKey := r.Header.Get(\"Sec-WebSocket-Key\")\n\tif acceptEncHeader == \"\" {\n\t\treturn []string{}\n\t}\n\n\tprefs := []encodingPreference{}\n\n\tfor accepted := range strings.SplitSeq(acceptEncHeader, \",\") {\n\t\tparts := strings.Split(accepted, \";\")\n\t\tencName := strings.ToLower(strings.TrimSpace(parts[0]))\n\n\t\t// determine q-factor\n\t\tqFactor := 1.0\n\t\tif len(parts) > 1 {\n\t\t\tqFactorStr := strings.ToLower(strings.TrimSpace(parts[1]))\n\t\t\tif strings.HasPrefix(qFactorStr, \"q=\") {\n\t\t\t\tif qFactorFloat, err := strconv.ParseFloat(qFactorStr[2:], 32); err == nil {\n\t\t\t\t\tif qFactorFloat >= 0 && qFactorFloat <= 1 {\n\t\t\t\t\t\tqFactor = qFactorFloat\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// encodings with q-factor of 0 are not accepted;\n\t\t// use a small threshold to account for float precision\n\t\tif qFactor < 0.00001 {\n\t\t\tcontinue\n\t\t}\n\n\t\t// don't encode WebSocket handshakes\n\t\tif websocketKey != \"\" && encName != \"identity\" {\n\t\t\tcontinue\n\t\t}\n\n\t\t// set server preference\n\t\tprefOrder := slices.Index(preferredOrder, encName)\n\t\tif prefOrder > -1 {\n\t\t\tprefOrder = len(preferredOrder) - prefOrder\n\t\t}\n\n\t\tprefs = append(prefs, encodingPreference{\n\t\t\tencoding:    encName,\n\t\t\tq:           qFactor,\n\t\t\tpreferOrder: prefOrder,\n\t\t})\n\t}\n\n\t// sort preferences by descending q-factor first, then by preferOrder\n\tsort.Slice(prefs, func(i, j int) bool {\n\t\tif math.Abs(prefs[i].q-prefs[j].q) < 0.00001 {\n\t\t\treturn prefs[i].preferOrder > prefs[j].preferOrder\n\t\t}\n\t\treturn prefs[i].q > prefs[j].q\n\t})\n\n\tprefEncNames := make([]string, len(prefs))\n\tfor i := range prefs 
{\n\t\tprefEncNames[i] = prefs[i].encoding\n\t}\n\n\treturn prefEncNames\n}\n\n// encodingPreference pairs an encoding with its q-factor.\ntype encodingPreference struct {\n\tencoding    string\n\tq           float64\n\tpreferOrder int\n}\n\n// Encoder is a type which can encode a stream of data.\ntype Encoder interface {\n\tio.WriteCloser\n\tReset(io.Writer)\n\tFlush() error // encoder by default buffers data to maximize compressing rate\n}\n\n// Encoding is a type which can create encoders of its kind\n// and return the name used in the Accept-Encoding header.\ntype Encoding interface {\n\tAcceptEncoding() string\n\tNewEncoder() Encoder\n}\n\n// Precompressed is a type which returns filename suffix of precompressed\n// file and Accept-Encoding header to use when serving this file.\ntype Precompressed interface {\n\tAcceptEncoding() string\n\tSuffix() string\n}\n\n// defaultMinLength is the minimum length at which to compress content.\nconst defaultMinLength = 512\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Encode)(nil)\n\t_ caddy.Validator             = (*Encode)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Encode)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/encode/encode_test.go",
    "content": "package encode\n\nimport (\n\t\"net/http\"\n\t\"slices\"\n\t\"sync\"\n\t\"testing\"\n)\n\nfunc BenchmarkOpenResponseWriter(b *testing.B) {\n\tenc := new(Encode)\n\tfor b.Loop() {\n\t\tenc.openResponseWriter(\"test\", nil, false)\n\t}\n}\n\nfunc TestPreferOrder(t *testing.T) {\n\ttestCases := []struct {\n\t\tname     string\n\t\taccept   string\n\t\tprefer   []string\n\t\texpected []string\n\t}{\n\t\t{\n\t\t\tname:     \"PreferOrder(): 4 accept, 3 prefer\",\n\t\t\taccept:   \"deflate, gzip, br, zstd\",\n\t\t\tprefer:   []string{\"zstd\", \"br\", \"gzip\"},\n\t\t\texpected: []string{\"zstd\", \"br\", \"gzip\", \"deflate\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): 2 accept, 3 prefer\",\n\t\t\taccept:   \"deflate, zstd\",\n\t\t\tprefer:   []string{\"zstd\", \"br\", \"gzip\"},\n\t\t\texpected: []string{\"zstd\", \"deflate\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): 2 accept (1 empty), 3 prefer\",\n\t\t\taccept:   \"gzip,,zstd\",\n\t\t\tprefer:   []string{\"zstd\", \"br\", \"gzip\"},\n\t\t\texpected: []string{\"zstd\", \"gzip\", \"\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): 1 accept, 2 prefer\",\n\t\t\taccept:   \"gzip\",\n\t\t\tprefer:   []string{\"zstd\", \"gzip\"},\n\t\t\texpected: []string{\"gzip\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): 4 accept (1 duplicate), 1 prefer\",\n\t\t\taccept:   \"deflate, gzip, br, br\",\n\t\t\tprefer:   []string{\"br\"},\n\t\t\texpected: []string{\"br\", \"br\", \"deflate\", \"gzip\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): empty accept, 0 prefer\",\n\t\t\taccept:   \"\",\n\t\t\tprefer:   []string{},\n\t\t\texpected: []string{},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): empty accept, 1 prefer\",\n\t\t\taccept:   \"\",\n\t\t\tprefer:   []string{\"gzip\"},\n\t\t\texpected: []string{},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): with q-factor\",\n\t\t\taccept:   \"deflate;q=0.8, gzip;q=0.4, br;q=0.2, zstd\",\n\t\t\tprefer:   []string{\"gzip\"},\n\t\t\texpected: 
[]string{\"zstd\", \"deflate\", \"gzip\", \"br\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): with q-factor, no prefer\",\n\t\t\taccept:   \"deflate;q=0.8, gzip;q=0.4, br;q=0.2, zstd\",\n\t\t\tprefer:   []string{},\n\t\t\texpected: []string{\"zstd\", \"deflate\", \"gzip\", \"br\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): q-factor=0 filtered out\",\n\t\t\taccept:   \"deflate;q=0.1, gzip;q=0.4, br;q=0.5, zstd;q=0\",\n\t\t\tprefer:   []string{\"gzip\"},\n\t\t\texpected: []string{\"br\", \"gzip\", \"deflate\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): q-factor=0 filtered out, no prefer\",\n\t\t\taccept:   \"deflate;q=0.1, gzip;q=0.4, br;q=0.5, zstd;q=0\",\n\t\t\tprefer:   []string{},\n\t\t\texpected: []string{\"br\", \"gzip\", \"deflate\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): with invalid q-factor\",\n\t\t\taccept:   \"br, deflate, gzip;q=2, zstd;q=0.1\",\n\t\t\tprefer:   []string{\"zstd\", \"gzip\"},\n\t\t\texpected: []string{\"gzip\", \"br\", \"deflate\", \"zstd\"},\n\t\t},\n\t\t{\n\t\t\tname:     \"PreferOrder(): with invalid q-factor, no prefer\",\n\t\t\taccept:   \"br, deflate, gzip;q=2, zstd;q=0.1\",\n\t\t\tprefer:   []string{},\n\t\t\texpected: []string{\"br\", \"deflate\", \"gzip\", \"zstd\"},\n\t\t},\n\t}\n\n\tenc := new(Encode)\n\tr, _ := http.NewRequest(\"\", \"\", nil)\n\n\tfor _, test := range testCases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tif test.accept == \"\" {\n\t\t\t\tr.Header.Del(\"Accept-Encoding\")\n\t\t\t} else {\n\t\t\t\tr.Header.Set(\"Accept-Encoding\", test.accept)\n\t\t\t}\n\t\t\tenc.Prefer = test.prefer\n\t\t\tresult := AcceptedEncodings(r, enc.Prefer)\n\t\t\tif !slices.Equal(result, test.expected) {\n\t\t\t\tt.Errorf(\"AcceptedEncodings() actual: %s expected: %s\",\n\t\t\t\t\tresult,\n\t\t\t\t\ttest.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestValidate(t *testing.T) {\n\ttype testCase struct {\n\t\tname    string\n\t\tprefer  []string\n\t\twantErr bool\n\t}\n\n\tvar err error\n\tvar 
testCases []testCase\n\tenc := new(Encode)\n\n\tenc.writerPools = map[string]*sync.Pool{\n\t\t\"zstd\": nil,\n\t\t\"gzip\": nil,\n\t\t\"br\":   nil,\n\t}\n\ttestCases = []testCase{\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd, gzip & br enabled): valid order with all encoder\",\n\t\t\tprefer:  []string{\"zstd\", \"br\", \"gzip\"},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd, gzip & br enabled): valid order with 2 out of 3 encoders\",\n\t\t\tprefer:  []string{\"br\", \"gzip\"},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd, gzip & br enabled): valid order with 1 out of 3 encoders\",\n\t\t\tprefer:  []string{\"gzip\"},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd, gzip & br enabled): 1 duplicated (once) encoder\",\n\t\t\tprefer:  []string{\"gzip\", \"zstd\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd, gzip & br enabled): 1 not enabled encoder in prefer list\",\n\t\t\tprefer:  []string{\"br\", \"zstd\", \"gzip\", \"deflate\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd, gzip & br enabled): no prefer list\",\n\t\t\tprefer:  []string{},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\n\tfor _, test := range testCases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tenc.Prefer = test.prefer\n\t\t\terr = enc.Validate()\n\t\t\tif (err != nil) != test.wantErr {\n\t\t\t\tt.Errorf(\"Validate() error = %v, wantErr = %v\", err, test.wantErr)\n\t\t\t}\n\t\t})\n\t}\n\n\tenc.writerPools = map[string]*sync.Pool{\n\t\t\"zstd\": nil,\n\t\t\"gzip\": nil,\n\t}\n\ttestCases = []testCase{\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 1 not enabled encoder in prefer list\",\n\t\t\tprefer:  []string{\"zstd\", \"br\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 2 not enabled encoder in prefer list\",\n\t\t\tprefer:  []string{\"br\", \"zstd\", 
\"gzip\", \"deflate\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): only not enabled encoder in prefer list\",\n\t\t\tprefer:  []string{\"deflate\", \"br\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 1 duplicated (once) encoder in prefer list\",\n\t\t\tprefer:  []string{\"gzip\", \"zstd\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 1 duplicated (twice) encoder in prefer list\",\n\t\t\tprefer:  []string{\"gzip\", \"zstd\", \"gzip\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 1 duplicated encoder in prefer list\",\n\t\t\tprefer:  []string{\"zstd\", \"zstd\", \"gzip\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 1 duplicated not enabled encoder in prefer list\",\n\t\t\tprefer:  []string{\"br\", \"br\", \"gzip\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): 2 duplicated not enabled encoder in prefer list\",\n\t\t\tprefer:  []string{\"br\", \"deflate\", \"br\", \"deflate\"},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): valid order zstd first\",\n\t\t\tprefer:  []string{\"zstd\", \"gzip\"},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"ValidatePrefer (zstd & gzip enabled): valid order gzip first\",\n\t\t\tprefer:  []string{\"gzip\", \"zstd\"},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\n\tfor _, test := range testCases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tenc.Prefer = test.prefer\n\t\t\terr = enc.Validate()\n\t\t\tif (err != nil) != test.wantErr {\n\t\t\t\tt.Errorf(\"Validate() error = %v, wantErr = %v\", err, test.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsEncodeAllowed(t *testing.T) {\n\ttestCases := []struct {\n\t\tname     string\n\t\theaders  
http.Header\n\t\texpected bool\n\t}{\n\t\t{\n\t\t\tname:     \"Without any headers\",\n\t\t\theaders:  http.Header{},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Without Cache-Control HTTP header\",\n\t\t\theaders: http.Header{\n\t\t\t\t\"Accept-Encoding\": {\"gzip\"},\n\t\t\t},\n\t\t\texpected: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Cache-Control HTTP header ending with no-transform directive\",\n\t\t\theaders: http.Header{\n\t\t\t\t\"Accept-Encoding\": {\"gzip\"},\n\t\t\t\t\"Cache-Control\":   {\"no-cache; no-transform\"},\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t\t{\n\t\t\tname: \"With Cache-Control HTTP header no-transform as Cache-Extension value\",\n\t\t\theaders: http.Header{\n\t\t\t\t\"Accept-Encoding\": {\"gzip\"},\n\t\t\t\t\"Cache-Control\":   {`no-store; no-cache; community=\"no-transform\"`},\n\t\t\t},\n\t\t\texpected: false,\n\t\t},\n\t}\n\n\tfor _, test := range testCases {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tif result := isEncodeAllowed(test.headers); result != test.expected {\n\t\t\t\tt.Errorf(\"The headers given to the isEncodeAllowed should return %t, %t given.\",\n\t\t\t\t\tresult,\n\t\t\t\t\ttest.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/encode/gzip/gzip.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddygzip\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\n\t\"github.com/klauspost/compress/gzip\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Gzip{})\n}\n\n// Gzip can create gzip encoders.\ntype Gzip struct {\n\tLevel int `json:\"level,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Gzip) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.encoders.gzip\",\n\t\tNew: func() caddy.Module { return new(Gzip) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens.\nfunc (g *Gzip) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume option name\n\tif !d.NextArg() {\n\t\treturn nil\n\t}\n\tlevelStr := d.Val()\n\tlevel, err := strconv.Atoi(levelStr)\n\tif err != nil {\n\t\treturn err\n\t}\n\tg.Level = level\n\treturn nil\n}\n\n// Provision provisions g's configuration.\nfunc (g *Gzip) Provision(ctx caddy.Context) error {\n\tif g.Level == 0 {\n\t\tg.Level = defaultGzipLevel\n\t}\n\treturn nil\n}\n\n// Validate validates g's configuration.\nfunc (g Gzip) Validate() error {\n\tif g.Level < gzip.StatelessCompression {\n\t\treturn fmt.Errorf(\"quality too low; must be >= %d\", 
gzip.StatelessCompression)\n\t}\n\tif g.Level > gzip.BestCompression {\n\t\treturn fmt.Errorf(\"quality too high; must be <= %d\", gzip.BestCompression)\n\t}\n\treturn nil\n}\n\n// AcceptEncoding returns the name of the encoding as\n// used in the Accept-Encoding request headers.\nfunc (Gzip) AcceptEncoding() string { return \"gzip\" }\n\n// NewEncoder returns a new gzip writer.\nfunc (g Gzip) NewEncoder() encode.Encoder {\n\twriter, _ := gzip.NewWriterLevel(nil, g.Level)\n\treturn writer\n}\n\n// Informed from http://blog.klauspost.com/gzip-performance-for-go-webservers/\nvar defaultGzipLevel = 5\n\n// Interface guards\nvar (\n\t_ encode.Encoding       = (*Gzip)(nil)\n\t_ caddy.Provisioner     = (*Gzip)(nil)\n\t_ caddy.Validator       = (*Gzip)(nil)\n\t_ caddyfile.Unmarshaler = (*Gzip)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/encode/gzip/gzip_precompressed.go",
    "content": "package caddygzip\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(GzipPrecompressed{})\n}\n\n// GzipPrecompressed provides the file extension for files precompressed with gzip encoding.\ntype GzipPrecompressed struct {\n\tGzip\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (GzipPrecompressed) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.precompressed.gzip\",\n\t\tNew: func() caddy.Module { return new(GzipPrecompressed) },\n\t}\n}\n\n// Suffix returns the filename suffix of precompressed files.\nfunc (GzipPrecompressed) Suffix() string { return \".gz\" }\n\nvar _ encode.Precompressed = (*GzipPrecompressed)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/encode/zstd/zstd.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyzstd\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/klauspost/compress/zstd\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Zstd{})\n}\n\n// Zstd can create Zstandard encoders.\ntype Zstd struct {\n\t// The compression level. 
Accepted values: fastest, better, best, default.\n\tLevel string `json:\"level,omitempty\"`\n\n\t// The parsed compression level; one of the constants from zstd.SpeedFastest to zstd.SpeedBestCompression\n\tlevel zstd.EncoderLevel\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Zstd) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.encoders.zstd\",\n\t\tNew: func() caddy.Module { return new(Zstd) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens.\nfunc (z *Zstd) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume option name\n\tif !d.NextArg() {\n\t\treturn nil\n\t}\n\tlevelStr := d.Val()\n\tif ok, _ := zstd.EncoderLevelFromString(levelStr); !ok {\n\t\treturn d.Errf(\"unexpected compression level, use one of '%s', '%s', '%s', '%s'\",\n\t\t\tzstd.SpeedFastest,\n\t\t\tzstd.SpeedBetterCompression,\n\t\t\tzstd.SpeedBestCompression,\n\t\t\tzstd.SpeedDefault,\n\t\t)\n\t}\n\tz.Level = levelStr\n\treturn nil\n}\n\n// Provision provisions z's configuration.\nfunc (z *Zstd) Provision(ctx caddy.Context) error {\n\tif z.Level == \"\" {\n\t\tz.Level = zstd.SpeedDefault.String()\n\t}\n\tvar ok bool\n\tif ok, z.level = zstd.EncoderLevelFromString(z.Level); !ok {\n\t\treturn fmt.Errorf(\"unexpected compression level, use one of '%s', '%s', '%s', '%s'\",\n\t\t\tzstd.SpeedFastest,\n\t\t\tzstd.SpeedDefault,\n\t\t\tzstd.SpeedBetterCompression,\n\t\t\tzstd.SpeedBestCompression,\n\t\t)\n\t}\n\treturn nil\n}\n\n// AcceptEncoding returns the name of the encoding as\n// used in the Accept-Encoding request headers.\nfunc (Zstd) AcceptEncoding() string { return \"zstd\" }\n\n// NewEncoder returns a new Zstandard writer.\nfunc (z Zstd) NewEncoder() encode.Encoder {\n\t// The default of 8MB for the window is\n\t// too large for many clients, so we limit\n\t// it to 128K to lighten their load.\n\twriter, _ := 
zstd.NewWriter(\n\t\tnil,\n\t\tzstd.WithWindowSize(128<<10),\n\t\tzstd.WithEncoderConcurrency(1),\n\t\tzstd.WithZeroFrames(true),\n\t\tzstd.WithEncoderLevel(z.level),\n\t)\n\treturn writer\n}\n\n// Interface guards\nvar (\n\t_ encode.Encoding       = (*Zstd)(nil)\n\t_ caddyfile.Unmarshaler = (*Zstd)(nil)\n\t_ caddy.Provisioner     = (*Zstd)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/encode/zstd/zstd_precompressed.go",
    "content": "package caddyzstd\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(ZstdPrecompressed{})\n}\n\n// ZstdPrecompressed provides the file extension for files precompressed with zstandard encoding.\ntype ZstdPrecompressed struct {\n\tZstd\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (ZstdPrecompressed) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.precompressed.zstd\",\n\t\tNew: func() caddy.Module { return new(ZstdPrecompressed) },\n\t}\n}\n\n// Suffix returns the filename suffix of precompressed files.\nfunc (ZstdPrecompressed) Suffix() string { return \".zst\" }\n\nvar _ encode.Precompressed = (*ZstdPrecompressed)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/errors.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\tweakrand \"math/rand/v2\"\n\t\"path\"\n\t\"runtime\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// Error is a convenient way for a Handler to populate the\n// essential fields of a HandlerError. If err is itself a\n// HandlerError, then any essential fields that are not\n// set will be populated.\nfunc Error(statusCode int, err error) HandlerError {\n\tconst idLen = 9\n\tvar he HandlerError\n\tif errors.As(err, &he) {\n\t\tif he.ID == \"\" {\n\t\t\the.ID = randString(idLen, true)\n\t\t}\n\t\tif he.Trace == \"\" {\n\t\t\the.Trace = trace()\n\t\t}\n\t\tif he.StatusCode == 0 {\n\t\t\the.StatusCode = statusCode\n\t\t}\n\t\treturn he\n\t}\n\treturn HandlerError{\n\t\tID:         randString(idLen, true),\n\t\tStatusCode: statusCode,\n\t\tErr:        err,\n\t\tTrace:      trace(),\n\t}\n}\n\n// HandlerError is a serializable representation of\n// an error from within an HTTP handler.\ntype HandlerError struct {\n\tErr        error // the original error value and message\n\tStatusCode int   // the HTTP status code to associate with this error\n\n\tID    string // generated; for identifying this error in logs\n\tTrace string // produced from call stack\n}\n\nfunc (e HandlerError) Error() string {\n\tvar s string\n\tif e.ID != \"\" {\n\t\ts += 
fmt.Sprintf(\"{id=%s}\", e.ID)\n\t}\n\tif e.Trace != \"\" {\n\t\ts += \" \" + e.Trace\n\t}\n\tif e.StatusCode != 0 {\n\t\ts += fmt.Sprintf(\": HTTP %d\", e.StatusCode)\n\t}\n\tif e.Err != nil {\n\t\ts += \": \" + e.Err.Error()\n\t}\n\treturn strings.TrimSpace(s)\n}\n\n// Unwrap returns the underlying error value. See the `errors` package for info.\nfunc (e HandlerError) Unwrap() error { return e.Err }\n\n// randString returns a string of n random characters.\n// It is not even remotely secure OR a proper distribution.\n// But it's good enough for some things. It excludes certain\n// confusing characters like I, l, 1, 0, O, etc. If sameCase\n// is true, then uppercase letters are excluded.\nfunc randString(n int, sameCase bool) string {\n\tif n <= 0 {\n\t\treturn \"\"\n\t}\n\tdict := []byte(\"abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRTUVWXY23456789\")\n\tif sameCase {\n\t\tdict = []byte(\"abcdefghijkmnpqrstuvwxyz0123456789\")\n\t}\n\tb := make([]byte, n)\n\tfor i := range b {\n\t\t//nolint:gosec\n\t\tb[i] = dict[weakrand.IntN(len(dict))]\n\t}\n\treturn string(b)\n}\n\nfunc trace() string {\n\tif pc, file, line, ok := runtime.Caller(2); ok {\n\t\tfilename := path.Base(file)\n\t\tpkgAndFuncName := path.Base(runtime.FuncForPC(pc).Name())\n\t\treturn fmt.Sprintf(\"%s (%s:%d)\", pkgAndFuncName, filename, line)\n\t}\n\treturn \"\"\n}\n\n// ErrorCtxKey is the context key to use when storing\n// an error (for use with context.Context).\nconst ErrorCtxKey = caddy.CtxKey(\"handler_chain_error\")\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/browse.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t_ \"embed\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"strings\"\n\t\"sync\"\n\t\"text/tabwriter\"\n\t\"text/template\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/templates\"\n)\n\n// BrowseTemplate is the default template document to use for\n// file listings. By default, its default value is an embedded\n// document. 
You can override this value at program start, or\n// if you are running Caddy via config, you can specify a\n// custom template_file in the browse configuration.\n//\n//go:embed browse.html\nvar BrowseTemplate string\n\n// Browse configures directory browsing.\ntype Browse struct {\n\t// Filename of the template to use instead of the embedded browse template.\n\tTemplateFile string `json:\"template_file,omitempty\"`\n\n\t// Determines whether or not targets of symlinks should be revealed.\n\tRevealSymlinks bool `json:\"reveal_symlinks,omitempty\"`\n\n\t// Override the default sort.\n\t// It includes the following options:\n\t//   - sort_by: name(default), namedirfirst, size, time\n\t//   - order: asc(default), desc\n\t// eg.:\n\t//   - `sort time desc` will sort by time in descending order\n\t//   - `sort size` will sort by size in ascending order\n\t// The first option must be `sort_by` and the second option must be `order` (if exists).\n\tSortOptions []string `json:\"sort,omitempty\"`\n\n\t// FileLimit limits the number of up to n DirEntry values in directory order.\n\tFileLimit int `json:\"file_limit,omitempty\"`\n}\n\nconst (\n\tdefaultDirEntryLimit = 10000\n)\n\nfunc (fsrv *FileServer) serveBrowse(fileSystem fs.FS, root, dirPath string, w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"browse enabled; listing directory contents\"); c != nil {\n\t\tc.Write(zap.String(\"path\", dirPath), zap.String(\"root\", root))\n\t}\n\n\t// Navigation on the client-side gets messed up if the\n\t// URL doesn't end in a trailing slash because hrefs to\n\t// \"b/c\" at path \"/a\" end up going to \"/b/c\" instead\n\t// of \"/a/b/c\" - so we have to redirect in this case\n\t// so that the path is \"/a/\" and the client constructs\n\t// relative hrefs \"b/c\" to be \"/a/b/c\".\n\t//\n\t// Only redirect if the last element of the path (the filename) was not\n\t// rewritten; if the admin wanted to rewrite to 
the canonical path, they\n\t// would have, and we have to be very careful not to introduce unwanted\n\t// redirects and especially redirect loops! (Redirecting using the\n\t// original URI is necessary because that's the URI the browser knows,\n\t// we don't want to redirect from internally-rewritten URIs.)\n\t// See https://github.com/caddyserver/caddy/issues/4205.\n\t// We also redirect if the path is empty, because this implies the path\n\t// prefix was fully stripped away by a `handle_path` handler for example.\n\t// See https://github.com/caddyserver/caddy/issues/4466.\n\torigReq := r.Context().Value(caddyhttp.OriginalRequestCtxKey).(http.Request)\n\tif r.URL.Path == \"\" || path.Base(origReq.URL.Path) == path.Base(r.URL.Path) {\n\t\tif !strings.HasSuffix(origReq.URL.Path, \"/\") {\n\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"redirecting to trailing slash to preserve hrefs\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"request_path\", r.URL.Path))\n\t\t\t}\n\t\t\treturn redirect(w, r, origReq.URL.Path+\"/\")\n\t\t}\n\t}\n\n\tdir, err := fsrv.openFile(fileSystem, dirPath, w)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer dir.Close()\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// TODO: not entirely sure if path.Clean() is necessary here but seems like a safe plan (i.e. 
/%2e%2e%2f) - someone could verify this\n\tlisting, err := fsrv.loadDirectoryContents(r.Context(), fileSystem, dir.(fs.ReadDirFile), root, path.Clean(r.URL.EscapedPath()), repl)\n\tswitch {\n\tcase errors.Is(err, fs.ErrPermission):\n\t\treturn caddyhttp.Error(http.StatusForbidden, err)\n\tcase errors.Is(err, fs.ErrNotExist):\n\t\treturn fsrv.notFound(w, r, next)\n\tcase err != nil:\n\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t}\n\n\tw.Header().Add(\"Vary\", \"Accept, Accept-Encoding\")\n\n\t// speed up browser/client experience and caching by supporting If-Modified-Since\n\tif ifModSinceStr := r.Header.Get(\"If-Modified-Since\"); ifModSinceStr != \"\" {\n\t\t// basically a copy of stdlib file server's handling of If-Modified-Since\n\t\tifModSince, err := http.ParseTime(ifModSinceStr)\n\t\tif err == nil && listing.lastModified.Truncate(time.Second).Compare(ifModSince) <= 0 {\n\t\t\tw.WriteHeader(http.StatusNotModified)\n\t\t\treturn nil\n\t\t}\n\t}\n\n\tfsrv.browseApplyQueryParams(w, r, listing)\n\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\tacceptHeader := strings.ToLower(strings.Join(r.Header[\"Accept\"], \",\"))\n\tw.Header().Set(\"Last-Modified\", listing.lastModified.Format(http.TimeFormat))\n\n\tswitch {\n\tcase strings.Contains(acceptHeader, \"application/json\"):\n\t\tif err := json.NewEncoder(buf).Encode(listing.Items); err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\t\tw.Header().Set(\"Content-Type\", \"application/json; charset=utf-8\")\n\n\tcase strings.Contains(acceptHeader, \"text/plain\"):\n\t\twriter := tabwriter.NewWriter(buf, 0, 8, 1, '\\t', tabwriter.AlignRight)\n\n\t\t// Header on top\n\t\tif _, err := fmt.Fprintln(writer, \"Name\\tSize\\tModified\"); err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\n\t\t// Lines to separate the header\n\t\tif _, err := fmt.Fprintln(writer, \"----\\t----\\t--------\"); 
err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\n\t\t// Actual files\n\t\tfor _, item := range listing.Items {\n\t\t\t//nolint:gosec // not sure how this could be XSS unless you lose control of the file system (like aren't sanitizing) and client ignores Content-Type of text/plain\n\t\t\tif _, err := fmt.Fprintf(writer, \"%s\\t%s\\t%s\\n\",\n\t\t\t\titem.Name, item.HumanSize(), item.HumanModTime(\"January 2, 2006 at 15:04:05\"),\n\t\t\t); err != nil {\n\t\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t\t}\n\t\t}\n\n\t\tif err := writer.Flush(); err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\n\t\tw.Header().Set(\"Content-Type\", \"text/plain; charset=utf-8\")\n\n\tdefault:\n\t\tvar fs http.FileSystem\n\t\tif fsrv.Root != \"\" {\n\t\t\tfs = http.Dir(repl.ReplaceAll(fsrv.Root, \".\"))\n\t\t}\n\n\t\ttplCtx := &templateContext{\n\t\t\tTemplateContext: templates.TemplateContext{\n\t\t\t\tRoot:       fs,\n\t\t\t\tReq:        r,\n\t\t\t\tRespHeader: templates.WrappedHeader{Header: w.Header()},\n\t\t\t},\n\t\t\tbrowseTemplateContext: listing,\n\t\t}\n\n\t\ttpl, err := fsrv.makeBrowseTemplate(tplCtx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"parsing browse template: %v\", err)\n\t\t}\n\t\tif err := tpl.Execute(buf, tplCtx); err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\t\tw.Header().Set(\"Content-Type\", \"text/html; charset=utf-8\")\n\t}\n\n\t_, _ = buf.WriteTo(w)\n\n\treturn nil\n}\n\nfunc (fsrv *FileServer) loadDirectoryContents(ctx context.Context, fileSystem fs.FS, dir fs.ReadDirFile, root, urlPath string, repl *caddy.Replacer) (*browseTemplateContext, error) {\n\t// modTime for the directory itself\n\tstat, err := dir.Stat()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdirLimit := defaultDirEntryLimit\n\tif fsrv.Browse.FileLimit != 0 {\n\t\tdirLimit = fsrv.Browse.FileLimit\n\t}\n\tfiles, err := 
dir.ReadDir(dirLimit)\n\tif err != nil && err != io.EOF {\n\t\treturn nil, err\n\t}\n\n\t// user can presumably browse \"up\" to parent folder if path is longer than \"/\"\n\tcanGoUp := len(urlPath) > 1\n\n\treturn fsrv.directoryListing(ctx, fileSystem, stat.ModTime(), files, canGoUp, root, urlPath, repl), nil\n}\n\n// browseApplyQueryParams applies query parameters to the listing.\n// It mutates the listing and may set cookies.\nfunc (fsrv *FileServer) browseApplyQueryParams(w http.ResponseWriter, r *http.Request, listing *browseTemplateContext) {\n\tvar orderParam, sortParam string\n\n\t// Caddyfile sort options have lower priority than query params,\n\t// so apply them first; query params may override them below.\n\tfor idx, item := range fsrv.Browse.SortOptions {\n\t\t// only two options are used: `sort` and `order`\n\t\tif idx >= 2 {\n\t\t\tbreak\n\t\t}\n\t\tswitch item {\n\t\tcase sortByName, sortByNameDirFirst, sortBySize, sortByTime:\n\t\t\tsortParam = item\n\t\tcase sortOrderAsc, sortOrderDesc:\n\t\t\torderParam = item\n\t\t}\n\t}\n\n\tlayoutParam := r.URL.Query().Get(\"layout\")\n\tlimitParam := r.URL.Query().Get(\"limit\")\n\toffsetParam := r.URL.Query().Get(\"offset\")\n\tsortParamTmp := r.URL.Query().Get(\"sort\")\n\tif sortParamTmp != \"\" {\n\t\tsortParam = sortParamTmp\n\t}\n\torderParamTmp := r.URL.Query().Get(\"order\")\n\tif orderParamTmp != \"\" {\n\t\torderParam = orderParamTmp\n\t}\n\n\tswitch layoutParam {\n\tcase \"list\", \"grid\", \"\":\n\t\tlisting.Layout = layoutParam\n\tdefault:\n\t\tlisting.Layout = \"list\"\n\t}\n\n\t// figure out what to sort by\n\tswitch sortParam {\n\tcase \"\":\n\t\tsortParam = sortByNameDirFirst\n\t\tif sortCookie, sortErr := r.Cookie(\"sort\"); sortErr == nil {\n\t\t\tsortParam = sortCookie.Value\n\t\t}\n\tcase sortByName, sortByNameDirFirst, sortBySize, sortByTime:\n\t\thttp.SetCookie(w, &http.Cookie{Name: \"sort\", Value: sortParam, Secure: r.TLS != nil})\n\t}\n\n\t// then figure out the order\n\tswitch orderParam {\n\tcase \"\":\n\t\torderParam = 
sortOrderAsc\n\t\tif orderCookie, orderErr := r.Cookie(\"order\"); orderErr == nil {\n\t\t\torderParam = orderCookie.Value\n\t\t}\n\tcase sortOrderAsc, sortOrderDesc:\n\t\thttp.SetCookie(w, &http.Cookie{Name: \"order\", Value: orderParam, Secure: r.TLS != nil})\n\t}\n\n\t// finally, apply the sorting and limiting\n\tlisting.applySortAndLimit(sortParam, orderParam, limitParam, offsetParam)\n}\n\n// makeBrowseTemplate creates the template to be used for directory listings.\nfunc (fsrv *FileServer) makeBrowseTemplate(tplCtx *templateContext) (*template.Template, error) {\n\tvar tpl *template.Template\n\tvar err error\n\n\tif fsrv.Browse.TemplateFile != \"\" {\n\t\ttpl = tplCtx.NewTemplate(path.Base(fsrv.Browse.TemplateFile))\n\t\ttpl, err = tpl.ParseFiles(fsrv.Browse.TemplateFile)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"parsing browse template file: %v\", err)\n\t\t}\n\t} else {\n\t\ttpl = tplCtx.NewTemplate(\"default_listing\")\n\t\ttpl, err = tpl.Parse(BrowseTemplate)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"parsing default browse template: %v\", err)\n\t\t}\n\t}\n\n\treturn tpl, nil\n}\n\n// isSymlinkTargetDir returns true if f's symbolic link target\n// is a directory.\nfunc (fsrv *FileServer) isSymlinkTargetDir(fileSystem fs.FS, f fs.FileInfo, root, urlPath string) bool {\n\tif !isSymlink(f) {\n\t\treturn false\n\t}\n\ttarget := caddyhttp.SanitizedPathJoin(root, path.Join(urlPath, f.Name()))\n\ttargetInfo, err := fs.Stat(fileSystem, target)\n\tif err != nil {\n\t\treturn false\n\t}\n\treturn targetInfo.IsDir()\n}\n\n// isSymlink returns true if f is a symbolic link.\nfunc isSymlink(f fs.FileInfo) bool {\n\treturn f.Mode()&os.ModeSymlink != 0\n}\n\n// templateContext powers the context used when evaluating the browse template.\n// It combines browse-specific features with the standard templates handler\n// features.\ntype templateContext struct {\n\ttemplates.TemplateContext\n\t*browseTemplateContext\n}\n\n// bufPool is used to increase the 
efficiency of file listings.\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/browse.html",
    "content": "{{ $nonce := uuidv4 -}}\n{{ $nonceAttribute := print \"nonce=\" (quote $nonce) -}}\n{{ $csp := printf \"default-src 'none'; img-src 'self'; object-src 'none'; base-uri 'none'; script-src 'nonce-%s'; style-src 'nonce-%s'; frame-ancestors 'self'; form-action 'self';\" $nonce $nonce -}}\n{{/* To disable the Content-Security-Policy, set this to false */}}{{ $enableCsp := true -}}\n{{ if $enableCsp -}}\n  {{- .RespHeader.Set \"Content-Security-Policy\" $csp -}}\n{{ end -}}\n{{- define \"icon\"}}\n\t{{- if .IsDir}}\n\t\t{{- if .IsSymlink}}\n\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-folder-filled\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t<path d=\"M9 3a1 1 0 0 1 .608 .206l.1 .087l2.706 2.707h6.586a3 3 0 0 1 2.995 2.824l.005 .176v8a3 3 0 0 1 -2.824 2.995l-.176 .005h-14a3 3 0 0 1 -2.995 -2.824l-.005 -.176v-11a3 3 0 0 1 2.824 -2.995l.176 -.005h4z\" stroke-width=\"0\" fill=\"currentColor\"/>\n\t\t\t<path fill=\"#000\" d=\"M2.795 17.306c0-2.374 1.792-4.314 4.078-4.538v-1.104a.38.38 0 0 1 .651-.272l2.45 2.492a.132.132 0 0 1 0 .188l-2.45 2.492a.381.381 0 0 1-.651-.272V15.24c-1.889.297-3.436 1.39-3.817 3.26a2.809 2.809 0 0 1-.261-1.193Z\" stroke-width=\".127478\"/>\n\t\t</svg>\n\t\t{{- else}}\n\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-folder-filled\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t<path d=\"M9 3a1 1 0 0 1 .608 .206l.1 .087l2.706 2.707h6.586a3 3 0 0 1 2.995 2.824l.005 .176v8a3 3 0 0 1 -2.824 2.995l-.176 .005h-14a3 3 0 0 1 -2.995 -2.824l-.005 -.176v-11a3 3 0 0 1 2.824 -2.995l.176 -.005h4z\" 
stroke-width=\"0\" fill=\"currentColor\"/>\n\t\t</svg>\n\t\t{{- end}}\n\t{{- else if or (eq .Name \"LICENSE\") (eq .Name \"README\")}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-license\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M15 21h-9a3 3 0 0 1 -3 -3v-1h10v2a2 2 0 0 0 4 0v-14a2 2 0 1 1 2 2h-2m2 -4h-11a3 3 0 0 0 -3 3v11\"/>\n\t\t<path d=\"M9 7l4 0\"/>\n\t\t<path d=\"M9 11l4 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".jpg\" \".jpeg\" \".png\" \".gif\" \".webp\" \".tiff\" \".bmp\" \".heif\" \".heic\" \".svg\" \".avif\"}}\n\t\t{{- if eq .Tpl.Layout \"grid\"}}\n\t\t<img loading=\"lazy\" src=\"{{.Name | pathEscape}}\">\n\t\t{{- else}}\n\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-photo\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t<path d=\"M15 8h.01\"/>\n\t\t\t<path d=\"M3 6a3 3 0 0 1 3 -3h12a3 3 0 0 1 3 3v12a3 3 0 0 1 -3 3h-12a3 3 0 0 1 -3 -3v-12z\"/>\n\t\t\t<path d=\"M3 16l5 -5c.928 -.893 2.072 -.893 3 0l5 5\"/>\n\t\t\t<path d=\"M14 14l1 -1c.928 -.893 2.072 -.893 3 0l3 3\"/>\n\t\t</svg>\n\t\t{{- end}}\n\t{{- else if .HasExt \".mp4\" \".mov\" \".m4v\" \".mpeg\" \".mpg\" \".avi\" \".ogg\" \".webm\" \".mkv\" \".vob\" \".gifv\" \".3gp\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-movie\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M4 4m0 2a2 2 0 0 1 2 -2h12a2 2 0 0 1 2 2v12a2 2 0 0 1 -2 
2h-12a2 2 0 0 1 -2 -2z\"/>\n\t\t<path d=\"M8 4l0 16\"/>\n\t\t<path d=\"M16 4l0 16\"/>\n\t\t<path d=\"M4 8l4 0\"/>\n\t\t<path d=\"M4 16l4 0\"/>\n\t\t<path d=\"M4 12l16 0\"/>\n\t\t<path d=\"M16 8l4 0\"/>\n\t\t<path d=\"M16 16l4 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".mp3\" \".m4a\" \".aac\" \".ogg\" \".flac\" \".wav\" \".wma\" \".midi\" \".cda\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-music\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M6 17m-3 0a3 3 0 1 0 6 0a3 3 0 1 0 -6 0\"/>\n\t\t<path d=\"M16 17m-3 0a3 3 0 1 0 6 0a3 3 0 1 0 -6 0\"/>\n\t\t<path d=\"M9 17l0 -13l10 0l0 13\"/>\n\t\t<path d=\"M9 8l10 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".pdf\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-pdf\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v4\"/>\n\t\t<path d=\"M5 18h1.5a1.5 1.5 0 0 0 0 -3h-1.5v6\"/>\n\t\t<path d=\"M17 18h2\"/>\n\t\t<path d=\"M20 15h-3v6\"/>\n\t\t<path d=\"M11 15v6h1a2 2 0 0 0 2 -2v-2a2 2 0 0 0 -2 -2h-1z\"/>\n\t</svg>\n\t{{- else if .HasExt \".csv\" \".tsv\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-csv\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v4\"/>\n\t\t<path d=\"M7 16.5a1.5 1.5 0 0 0 -3 
0v3a1.5 1.5 0 0 0 3 0\"/>\n\t\t<path d=\"M10 20.25c0 .414 .336 .75 .75 .75h1.25a1 1 0 0 0 1 -1v-1a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-1a1 1 0 0 1 1 -1h1.25a.75 .75 0 0 1 .75 .75\"/>\n\t\t<path d=\"M16 15l2 6l2 -6\"/>\n\t</svg>\n\t{{- else if .HasExt \".txt\" \".doc\" \".docx\" \".odt\" \".fodt\" \".rtf\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-text\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M17 21h-10a2 2 0 0 1 -2 -2v-14a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2z\"/>\n\t\t<path d=\"M9 9l1 0\"/>\n\t\t<path d=\"M9 13l6 0\"/>\n\t\t<path d=\"M9 17l6 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".xls\" \".xlsx\" \".ods\" \".fods\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-spreadsheet\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M17 21h-10a2 2 0 0 1 -2 -2v-14a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2z\"/>\n\t\t<path d=\"M8 11h8v7h-8z\"/>\n\t\t<path d=\"M8 15h8\"/>\n\t\t<path d=\"M11 11v7\"/>\n\t</svg>\n\t{{- else if .HasExt \".ppt\" \".pptx\" \".odp\" \".fodp\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-presentation-analytics\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M9 12v-4\"/>\n\t\t<path d=\"M15 12v-2\"/>\n\t\t<path d=\"M12 12v-1\"/>\n\t\t<path d=\"M3 4h18\"/>\n\t\t<path d=\"M4 4v10a2 2 
0 0 0 2 2h12a2 2 0 0 0 2 -2v-10\"/>\n\t\t<path d=\"M12 16v4\"/>\n\t\t<path d=\"M9 20h6\"/>\n\t</svg>\n\t{{- else if .HasExt \".zip\" \".gz\" \".xz\" \".tar\" \".7z\" \".rar\" \".xz\" \".zst\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-zip\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M6 20.735a2 2 0 0 1 -1 -1.735v-14a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2h-1\"/>\n\t\t<path d=\"M11 17a2 2 0 0 1 2 2v2a1 1 0 0 1 -1 1h-2a1 1 0 0 1 -1 -1v-2a2 2 0 0 1 2 -2z\"/>\n\t\t<path d=\"M11 5l-1 0\"/>\n\t\t<path d=\"M13 7l-1 0\"/>\n\t\t<path d=\"M11 9l-1 0\"/>\n\t\t<path d=\"M13 11l-1 0\"/>\n\t\t<path d=\"M11 13l-1 0\"/>\n\t\t<path d=\"M13 15l-1 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".deb\" \".dpkg\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-brand-debian\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M12 17c-2.397 -.943 -4 -3.153 -4 -5.635c0 -2.19 1.039 -3.14 1.604 -3.595c2.646 -2.133 6.396 -.27 6.396 3.23c0 2.5 -2.905 2.121 -3.5 1.5c-.595 -.621 -1 -1.5 -.5 -2.5\"/>\n\t\t<path d=\"M12 12m-9 0a9 9 0 1 0 18 0a9 9 0 1 0 -18 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".rpm\" \".exe\" \".flatpak\" \".appimage\" \".jar\" \".msi\" \".apk\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-package\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M12 3l8 4.5l0 9l-8 4.5l-8 -4.5l0 -9l8 -4.5\"/>\n\t\t<path d=\"M12 
12l8 -4.5\"/>\n\t\t<path d=\"M12 12l0 9\"/>\n\t\t<path d=\"M12 12l-8 -4.5\"/>\n\t\t<path d=\"M16 5.25l-8 4.5\"/>\n\t</svg>\n\t{{- else if .HasExt \".ps1\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-brand-powershell\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M4.887 20h11.868c.893 0 1.664 -.665 1.847 -1.592l2.358 -12c.212 -1.081 -.442 -2.14 -1.462 -2.366a1.784 1.784 0 0 0 -.385 -.042h-11.868c-.893 0 -1.664 .665 -1.847 1.592l-2.358 12c-.212 1.081 .442 2.14 1.462 2.366c.127 .028 .256 .042 .385 .042z\"/>\n\t\t<path d=\"M9 8l4 4l-6 4\"/>\n\t\t<path d=\"M12 16h3\"/>\n\t</svg>\n\t{{- else if .HasExt \".py\" \".pyc\" \".pyo\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-brand-python\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M12 9h-7a2 2 0 0 0 -2 2v4a2 2 0 0 0 2 2h3\"/>\n\t\t<path d=\"M12 15h7a2 2 0 0 0 2 -2v-4a2 2 0 0 0 -2 -2h-3\"/>\n\t\t<path d=\"M8 9v-4a2 2 0 0 1 2 -2h4a2 2 0 0 1 2 2v5a2 2 0 0 1 -2 2h-4a2 2 0 0 0 -2 2v5a2 2 0 0 0 2 2h4a2 2 0 0 0 2 -2v-4\"/>\n\t\t<path d=\"M11 6l0 .01\"/>\n\t\t<path d=\"M13 18l0 .01\"/>\n\t</svg>\n\t{{- else if .HasExt \".bash\" \".sh\" \".com\" \".bat\" \".dll\" \".so\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-script\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M17 20h-11a3 3 0 0 1 0 -6h11a3 3 0 0 0 0 6h1a3 3 0 0 0 3 -3v-11a2 2 0 0 0 -2 -2h-10a2 
2 0 0 0 -2 2v8\"/>\n\t</svg>\n\t{{- else if .HasExt \".dmg\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-brand-finder\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M3 4m0 1a1 1 0 0 1 1 -1h16a1 1 0 0 1 1 1v14a1 1 0 0 1 -1 1h-16a1 1 0 0 1 -1 -1z\"/>\n\t\t<path d=\"M7 8v1\"/>\n\t\t<path d=\"M17 8v1\"/>\n\t\t<path d=\"M12.5 4c-.654 1.486 -1.26 3.443 -1.5 9h2.5c-.19 2.867 .094 5.024 .5 7\"/>\n\t\t<path d=\"M7 15.5c3.667 2 6.333 2 10 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".iso\" \".img\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-disc\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M12 12m-9 0a9 9 0 1 0 18 0a9 9 0 1 0 -18 0\"/>\n\t\t<path d=\"M12 12m-1 0a1 1 0 1 0 2 0a1 1 0 1 0 -2 0\"/>\n\t\t<path d=\"M7 12a5 5 0 0 1 5 -5\"/>\n\t\t<path d=\"M12 17a5 5 0 0 0 5 -5\"/>\n\t</svg>\n\t{{- else if .HasExt \".md\" \".mdown\" \".markdown\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-markdown\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M3 5m0 2a2 2 0 0 1 2 -2h14a2 2 0 0 1 2 2v10a2 2 0 0 1 -2 2h-14a2 2 0 0 1 -2 -2z\"/>\n\t\t<path d=\"M7 15v-6l2 2l2 -2v6\"/>\n\t\t<path d=\"M14 13l2 2l2 -2m-2 2v-6\"/>\n\t</svg>\n\t{{- else if .HasExt \".ttf\" \".otf\" \".woff\" \".woff2\" \".eof\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-typography\" width=\"24\" 
height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M17 21h-10a2 2 0 0 1 -2 -2v-14a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2z\"/>\n\t\t<path d=\"M11 18h2\"/>\n\t\t<path d=\"M12 18v-7\"/>\n\t\t<path d=\"M9 12v-1h6v1\"/>\n\t</svg>\n\t{{- else if .HasExt \".go\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-brand-golang\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M15.695 14.305c1.061 1.06 2.953 .888 4.226 -.384c1.272 -1.273 1.444 -3.165 .384 -4.226c-1.061 -1.06 -2.953 -.888 -4.226 .384c-1.272 1.273 -1.444 3.165 -.384 4.226z\"/>\n\t\t<path d=\"M12.68 9.233c-1.084 -.497 -2.545 -.191 -3.591 .846c-1.284 1.273 -1.457 3.165 -.388 4.226c1.07 1.06 2.978 .888 4.261 -.384a3.669 3.669 0 0 0 1.038 -1.921h-2.427\"/>\n\t\t<path d=\"M5.5 15h-1.5\"/>\n\t\t<path d=\"M6 9h-2\"/>\n\t\t<path d=\"M5 12h-3\"/>\n\t</svg>\n\t{{- else if .HasExt \".html\" \".htm\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-html\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v4\"/>\n\t\t<path d=\"M2 21v-6\"/>\n\t\t<path d=\"M5 15v6\"/>\n\t\t<path d=\"M2 18h3\"/>\n\t\t<path d=\"M20 15v6h2\"/>\n\t\t<path d=\"M13 21v-6l2 3l2 -3v6\"/>\n\t\t<path d=\"M7.5 15h3\"/>\n\t\t<path d=\"M9 15v6\"/>\n\t</svg>\n\t{{- else if .HasExt \".js\"}}\n\t<svg 
xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-js\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M3 15h3v4.5a1.5 1.5 0 0 1 -3 0\"/>\n\t\t<path d=\"M9 20.25c0 .414 .336 .75 .75 .75h1.25a1 1 0 0 0 1 -1v-1a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-1a1 1 0 0 1 1 -1h1.25a.75 .75 0 0 1 .75 .75\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2h-1\"/>\n\t</svg>\n\t{{- else if .HasExt \".css\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-css\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v4\"/>\n\t\t<path d=\"M8 16.5a1.5 1.5 0 0 0 -3 0v3a1.5 1.5 0 0 0 3 0\"/>\n\t\t<path d=\"M11 20.25c0 .414 .336 .75 .75 .75h1.25a1 1 0 0 0 1 -1v-1a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-1a1 1 0 0 1 1 -1h1.25a.75 .75 0 0 1 .75 .75\"/>\n\t\t<path d=\"M17 20.25c0 .414 .336 .75 .75 .75h1.25a1 1 0 0 0 1 -1v-1a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-1a1 1 0 0 1 1 -1h1.25a.75 .75 0 0 1 .75 .75\"/>\n\t</svg>\n\t{{- else if .HasExt \".json\" \".json5\" \".jsonc\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-json\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M20 16v-8l3 8v-8\"/>\n\t\t<path d=\"M15 8a2 2 0 0 1 2 2v4a2 2 0 1 1 -4 0v-4a2 2 0 0 1 2 -2z\"/>\n\t\t<path d=\"M1 8h3v6.5a1.5 1.5 0 0 1 -3 
0v-.5\"/>\n\t\t<path d=\"M7 15a1 1 0 0 0 1 1h1a1 1 0 0 0 1 -1v-2a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-2a1 1 0 0 1 1 -1h1a1 1 0 0 1 1 1\"/>\n\t</svg>\n\t{{- else if .HasExt \".ts\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-ts\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2h-1\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M9 20.25c0 .414 .336 .75 .75 .75h1.25a1 1 0 0 0 1 -1v-1a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-1a1 1 0 0 1 1 -1h1.25a.75 .75 0 0 1 .75 .75\"/>\n\t\t<path d=\"M3.5 15h3\"/>\n\t\t<path d=\"M5 15v6\"/>\n\t</svg>\n\t{{- else if .HasExt \".sql\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-type-sql\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t<path d=\"M5 20.25c0 .414 .336 .75 .75 .75h1.25a1 1 0 0 0 1 -1v-1a1 1 0 0 0 -1 -1h-1a1 1 0 0 1 -1 -1v-1a1 1 0 0 1 1 -1h1.25a.75 .75 0 0 1 .75 .75\"/>\n\t\t<path d=\"M5 12v-7a2 2 0 0 1 2 -2h7l5 5v4\"/>\n\t\t<path d=\"M18 15v6h2\"/>\n\t\t<path d=\"M13 15a2 2 0 0 1 2 2v2a2 2 0 1 1 -4 0v-2a2 2 0 0 1 2 -2z\"/>\n\t\t<path d=\"M14 20l1.5 1.5\"/>\n\t</svg>\n\t{{- else if .HasExt \".db\" \".sqlite\" \".bak\" \".mdb\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-database\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path 
stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M12 6m-8 0a8 3 0 1 0 16 0a8 3 0 1 0 -16 0\"/>\n\t\t<path d=\"M4 6v6a8 3 0 0 0 16 0v-6\"/>\n\t\t<path d=\"M4 12v6a8 3 0 0 0 16 0v-6\"/>\n\t</svg>\n\t{{- else if .HasExt \".eml\" \".email\" \".mailbox\" \".mbox\" \".msg\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-mail\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M3 7a2 2 0 0 1 2 -2h14a2 2 0 0 1 2 2v10a2 2 0 0 1 -2 2h-14a2 2 0 0 1 -2 -2v-10z\"/>\n\t\t<path d=\"M3 7l9 6l9 -6\"/>\n\t</svg>\n\t{{- else if .HasExt \".crt\" \".pem\" \".x509\" \".cer\" \".ca-bundle\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-certificate\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M15 15m-3 0a3 3 0 1 0 6 0a3 3 0 1 0 -6 0\"/>\n\t\t<path d=\"M13 17.5v4.5l2 -1.5l2 1.5v-4.5\"/>\n\t\t<path d=\"M10 19h-5a2 2 0 0 1 -2 -2v-10c0 -1.1 .9 -2 2 -2h14a2 2 0 0 1 2 2v10a2 2 0 0 1 -1 1.73\"/>\n\t\t<path d=\"M6 9l12 0\"/>\n\t\t<path d=\"M6 12l3 0\"/>\n\t\t<path d=\"M6 15l2 0\"/>\n\t</svg>\n\t{{- else if .HasExt \".key\" \".keystore\" \".jks\" \".p12\" \".pfx\" \".pub\"}}\n\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-key\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t<path d=\"M16.555 3.843l3.602 3.602a2.877 2.877 0 0 1 0 4.069l-2.643 2.643a2.877 2.877 0 0 1 -4.069 0l-.301 -.301l-6.558 6.558a2 2 0 0 1 -1.239 .578l-.175 
.008h-1.172a1 1 0 0 1 -.993 -.883l-.007 -.117v-1.172a2 2 0 0 1 .467 -1.284l.119 -.13l.414 -.414h2v-2h2v-2l2.144 -2.144l-.301 -.301a2.877 2.877 0 0 1 0 -4.069l2.643 -2.643a2.877 2.877 0 0 1 4.069 0z\"/>\n\t\t<path d=\"M15 9h.01\"/>\n\t</svg>\n\t{{- else}}\n\t\t{{- if .IsSymlink}}\n\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file-symlink\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t<path d=\"M4 21v-4a3 3 0 0 1 3 -3h5\"/>\n\t\t\t<path d=\"M9 17l3 -3l-3 -3\"/>\n\t\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t\t<path d=\"M5 11v-6a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2h-9.5\"/>\n\t\t</svg>\n\t\t{{- else}}\n\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-file\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t<path d=\"M14 3v4a1 1 0 0 0 1 1h4\"/>\n\t\t\t<path d=\"M17 21h-10a2 2 0 0 1 -2 -2v-14a2 2 0 0 1 2 -2h7l5 5v11a2 2 0 0 1 -2 2z\"/>\n\t\t</svg>\n\t\t{{- end}}\n\t{{- end}}\n{{- end}}\n<!DOCTYPE html>\n<html>\n\t<head>\n\t\t<title>{{html .Name}}</title>\n\t\t<link rel=\"canonical\" href=\"{{.Path}}/\"  />\n\t\t<meta charset=\"utf-8\">\n\t\t<meta name=\"color-scheme\" content=\"light dark\">\n\t\t<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n<style {{ $nonceAttribute }}>\n* { padding: 0; margin: 0; box-sizing: border-box; }\n\nbody {\n\tfont-family: Inter, system-ui, sans-serif;\n\tfont-size: 16px;\n\ttext-rendering: optimizespeed;\n\tbackground-color: #f3f6f7;\n\tmin-height: 100vh;\n}\n\nimg,\nsvg {\n\tvertical-align: middle;\n\tz-index: 1;\n}\n\nimg {\n\tmax-width: 100%;\n\tmax-height: 
100%;\n\tborder-radius: 5px;\n}\n\ntd img {\n\tmax-width: 1.5em;\n\tmax-height: 2em;\n\tobject-fit: cover;\n}\n\nbody,\na,\nsvg,\n.layout.current,\n.layout.current svg,\n.go-up {\n\tcolor: #333;\n\ttext-decoration: none;\n}\n\n#layout-list, #layout-grid {\n\tcursor: pointer;\n}\n\n.wrapper {\n\tmax-width: 1200px;\n\tmargin-left: auto;\n\tmargin-right: auto;\n}\n\nheader,\n.meta {\n\tpadding-left: 5%;\n\tpadding-right: 5%;\n}\n\ntd a {\n\tcolor: #006ed3;\n\ttext-decoration: none;\n}\n\ntd a:hover {\n\tcolor: #0095e4;\n}\n\ntd a:visited {\n\tcolor: #800080;\n}\n\ntd a:visited:hover {\n\tcolor: #b900b9;\n}\n\nth:first-child,\ntd:first-child {\n\twidth: 5%;\n}\n\nth:last-child,\ntd:last-child {\n\twidth: 5%;\n}\n\n.size,\n.timestamp {\n\tfont-size: 14px;\n}\n\n.grid .size {\n\tfont-size: 12px;\n\tmargin-top: .5em;\n\tcolor: #496a84;\n}\n\nheader {\n\tpadding-top: 15px;\n\tpadding-bottom: 15px;\n\tbox-shadow: 0px 0px 20px 0px rgb(0 0 0 / 10%);\n}\n\n.breadcrumbs {\n\ttext-transform: uppercase;\n\tfont-size: 10px;\n\tletter-spacing: 1px;\n\tcolor: #939393;\n\tmargin-bottom: 5px;\n\tpadding-left: 3px;\n}\n\nh1 {\n\tfont-size: 20px;\n\tfont-family: Poppins, system-ui, sans-serif;\n\tfont-weight: normal;\n\twhite-space: nowrap;\n\toverflow-x: hidden;\n\ttext-overflow: ellipsis;\n\tcolor: #c5c5c5;\n}\n\nh1 a,\nth a {\n\tcolor: #000;\n}\n\nh1 a {\n\tpadding: 0 3px;\n\tmargin: 0 1px;\n}\n\nh1 a:hover {\n\tbackground: #ffffc4;\n}\n\nh1 a:first-child {\n\tmargin: 0;\n}\n\nheader,\nmain {\n\tbackground-color: white;\n}\n\nmain {\n\tmargin: 3em auto 0;\n\tborder-radius: 5px;\n\tbox-shadow: 0 2px 5px 1px rgb(0 0 0 / 5%);\n}\n\n.meta {\n\tdisplay: flex;\n\tgap: 1em;\n\tfont-size: 14px;\n\tborder-bottom: 1px solid #e5e9ea;\n\tpadding-top: 1em;\n\tpadding-bottom: 1em;\n}\n\n#summary {\n\tdisplay: flex;\n\tgap: 1em;\n\talign-items: center;\n\tmargin-right: auto;\n}\n\n.filter-container {\n\tposition: relative;\n\tdisplay: inline-block;\n\tmargin-left: 1em;\n}\n\n#search-icon 
{\n\tcolor: #777;\n\tposition: absolute;\n\theight: 1em;\n\ttop: .6em;\n\tleft: .5em;\n}\n\n#filter {\n\tpadding: .5em 1em .5em 2.5em;\n\tborder: none;\n\tborder: 1px solid #CCC;\n\tborder-radius: 5px;\n\tfont-family: inherit;\n\tposition: relative;\n\tz-index: 2;\n\tbackground: none;\n}\n\n.layout,\n.layout svg {\n\tcolor: #9a9a9a;\n}\n\ntable {\n\twidth: 100%;\n\tborder-collapse: collapse;\n}\n\ntbody tr,\ntbody tr a,\n.entry a {\n\ttransition: all .15s;\n}\n\ntbody tr:hover,\n.grid .entry a:hover {\n\tbackground-color: #f4f9fd;\n}\n\nth,\ntd {\n\ttext-align: left;\n}\n\nth {\n\tposition: sticky;\n\ttop: 0;\n\tbackground: white;\n\twhite-space: nowrap;\n\tz-index: 2;\n\ttext-transform: uppercase;\n\tfont-size: 14px;\n\tletter-spacing: 1px;\n\tpadding: .75em 0;\n}\n\ntd {\n\twhite-space: nowrap;\n}\n\ntd:nth-child(2) {\n\twidth: 75%;\n}\n\ntd:nth-child(2) a {\n\tpadding: 1em 0;\n\tdisplay: block;\n}\n\ntd:nth-child(3),\nth:nth-child(3) {\n\tpadding: 0 20px 0 20px;\n\tmin-width: 150px;\n}\n\ntd .go-up {\n\ttext-transform: uppercase;\n\tfont-size: 12px;\n\tfont-weight: bold;\n}\n\n.name,\n.go-up {\n\tword-break: break-all;\n\toverflow-wrap: break-word;\n\twhite-space: pre-wrap;\n}\n\n.listing .icon-tabler {\n\tcolor: #454545;\n}\n\n.listing .icon-tabler-folder-filled {\n\tcolor: #ffb900 !important;\n}\n\n.sizebar {\n\tposition: relative;\n\tpadding: 0.25rem 0.5rem;\n\tdisplay: flex;\n}\n\n.sizebar-bar {\n\tbackground-color: #dbeeff;\n\tposition: absolute;\n\ttop: 0;\n\tright: 0;\n\tbottom: 0;\n\tleft: 0;\n\tz-index: 0;\n\theight: 100%;\n\tpointer-events: none;\n}\n\n.sizebar-text {\n\tposition: relative;\n\tz-index: 1;\n\toverflow: hidden;\n\ttext-overflow: ellipsis;\n\twhite-space: nowrap;\n}\n\n.grid {\n\tdisplay: grid;\n\tgrid-template-columns: repeat(auto-fill, minmax(16em, 1fr));\n\tgap: 2px;\n}\n\n.grid .entry {\n\tposition: relative;\n\twidth: 100%;\n}\n\n.grid .entry a {\n\tdisplay: flex;\n\tflex-direction: column;\n\talign-items: center;\n\tjustify-content: 
center;\n\tpadding: 1.5em;\n\theight: 100%;\n}\n\n.grid .entry svg {\n\twidth: 75px;\n\theight: 75px;\n}\n\n.grid .entry img {\n\tmax-height: 200px;\n\tobject-fit: cover;\n}\n\n.grid .entry .name {\n\tmargin-top: 1em;\n}\n\nfooter {\n\tpadding: 40px 20px;\n\tfont-size: 12px;\n\ttext-align: center;\n}\n\n.caddy-logo {\n\tdisplay: inline-block;\n\theight: 2.5em;\n\tmargin: 0 auto;\n}\n\n@media (max-width: 600px) {\n\t.hideable {\n\t\tdisplay: none;\n\t}\n\n\ttd:nth-child(2) {\n\t\twidth: auto;\n\t}\n\n\tth:nth-child(3),\n\ttd:nth-child(3) {\n\t\tpadding-right: 5%;\n\t\ttext-align: right;\n\t}\n\n\th1 {\n\t\tcolor: #000;\n\t}\n\n\th1 a {\n\t\tmargin: 0;\n\t}\n\n\t#filter {\n\t\tmax-width: 100px;\n\t}\n\n\t.grid .entry {\n\t\tmax-width: initial;\n\t}\n}\n\n\n@media (prefers-color-scheme: dark) {\n\thtml {\n\t\tbackground: black; /* overscroll */\n\t}\n\n\tbody {\n\t\tbackground: linear-gradient(180deg, rgb(34 50 66) 0%, rgb(26 31 38) 100%);\n\t\tbackground-attachment: fixed;\n\t}\n\n\tbody,\n\ta,\n\tsvg,\n\t.layout.current,\n\t.layout.current svg,\n\t.go-up {\n\t\tcolor: #ccc;\n\t}\n\n\th1 a,\n\tth a {\n\t\tcolor: white;\n\t}\n\n\th1 {\n\t\tcolor: white;\n\t}\n\n\th1 a:hover {\n\t\tbackground: hsl(213deg 100% 73% / 20%);\n\t}\n\n\theader,\n\tmain,\n\t.grid .entry {\n\t\tbackground-color: #101720;\n\t}\n\n\ttbody tr:hover,\n\t.grid .entry a:hover {\n\t\tbackground-color: #162030;\n\t\tcolor: #fff;\n\t}\n\n\tth {\n\t\tbackground-color: #18212c;\n\t}\n\n\ttd a,\n\t.listing .icon-tabler {\n\t\tcolor: #abc8e3;\n\t}\n\n\ttd a:hover,\n\ttd a:hover .icon-tabler {\n\t\tcolor: white;\n\t}\n\n\ttd a:visited {\n\t\tcolor: #cd53cd;\n\t}\n\n\ttd a:visited:hover {\n\t\tcolor: #f676f6;\n\t}\n\n\t#search-icon {\n\t\tcolor: #7798c4;\n\t}\n\n\t#filter {\n\t\tcolor: #ffffff;\n\t\tborder: 1px solid #29435c;\n\t}\n\n\t.meta {\n\t\tborder-bottom: 1px solid #222e3b;\n\t}\n\n\t.sizebar-bar {\n\t\tbackground-color: #1f3549;\n\t}\n\n\t.grid .entry a {\n\t\tbackground-color: 
#080b0f;\n\t}\n\n\t#Wordmark path,\n\t#R path {\n\t\tfill: #ccc !important;\n\t}\n\t#R circle {\n\t\tstroke: #ccc !important;\n\t}\n}\n\n</style>\n{{- if eq .Layout \"grid\"}}\n<style {{ $nonceAttribute }}>.wrapper { max-width: none; } main { margin-top: 1px; }</style>\n{{- end}}\n</head>\n<body>\n\t<header>\n\t\t<div class=\"wrapper\">\n\t\t\t<div class=\"breadcrumbs\">Folder Path</div>\n\t\t\t\t<h1>\n\t\t\t\t\t{{range $i, $crumb := .Breadcrumbs}}<a href=\"{{html $crumb.Link}}\">{{html $crumb.Text}}</a>{{if ne $i 0}}/{{end}}{{end}}\n\t\t\t\t</h1>\n\t\t\t</div>\n\t\t</header>\n\t\t<div class=\"wrapper\">\n\t\t\t<main>\n\t\t\t\t<div class=\"meta\">\n\t\t\t\t\t<div id=\"summary\">\n\t\t\t\t\t\t<span class=\"meta-item\">\n\t\t\t\t\t\t\t<b>{{.NumDirs}}</b> director{{if eq 1 .NumDirs}}y{{else}}ies{{end}}\n\t\t\t\t\t\t</span>\n\t\t\t\t\t\t<span class=\"meta-item\">\n\t\t\t\t\t\t\t<b>{{.NumFiles}}</b> file{{if ne 1 .NumFiles}}s{{end}}\n\t\t\t\t\t\t</span>\n\t\t\t\t\t\t<span class=\"meta-item\">\n\t\t\t\t\t\t\t<b>{{.HumanTotalFileSize}}</b> total\n\t\t\t\t\t\t</span>\n\t\t\t\t\t\t{{- if ne 0 .Limit}}\n\t\t\t\t\t\t<span class=\"meta-item\">\n\t\t\t\t\t\t\t(of which only <b>{{.Limit}}</b> are displayed)\n\t\t\t\t\t\t</span>\n\t\t\t\t\t\t{{- end}}\n\t\t\t\t\t</div>\n\t\t\t\t\t<a id=\"layout-list\" class='layout{{if eq $.Layout \"list\" \"\"}} current{{end}}'>\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-list\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t<path d=\"M4 4m0 2a2 2 0 0 1 2 -2h12a2 2 0 0 1 2 2v2a2 2 0 0 1 -2 2h-12a2 2 0 0 1 -2 -2z\"/>\n\t\t\t\t\t\t\t<path d=\"M4 14m0 2a2 2 0 0 1 2 -2h12a2 2 0 0 1 2 2v2a2 2 0 0 1 -2 2h-12a2 2 0 0 1 -2 -2z\"/>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t\tList\n\t\t\t\t\t</a>\n\t\t\t\t\t<a 
id=\"layout-grid\" class='layout{{if eq $.Layout \"grid\"}} current{{end}}'>\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t<path d=\"M4 4m0 1a1 1 0 0 1 1 -1h4a1 1 0 0 1 1 1v4a1 1 0 0 1 -1 1h-4a1 1 0 0 1 -1 -1z\"/>\n\t\t\t\t\t\t\t<path d=\"M14 4m0 1a1 1 0 0 1 1 -1h4a1 1 0 0 1 1 1v4a1 1 0 0 1 -1 1h-4a1 1 0 0 1 -1 -1z\"/>\n\t\t\t\t\t\t\t<path d=\"M4 14m0 1a1 1 0 0 1 1 -1h4a1 1 0 0 1 1 1v4a1 1 0 0 1 -1 1h-4a1 1 0 0 1 -1 -1z\"/>\n\t\t\t\t\t\t\t<path d=\"M14 14m0 1a1 1 0 0 1 1 -1h4a1 1 0 0 1 1 1v4a1 1 0 0 1 -1 1h-4a1 1 0 0 1 -1 -1z\"/>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t\tGrid\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- if and (eq .Layout \"grid\") (eq .Sort \"name\") (ne .Order \"asc\")}}\n\t\t\t\t\t<a href=\"?sort=name&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<text x=\"2\" y=\"10\" font-size=\"9\" fill=\"currentColor\">Z</text>\n\t\t\t\t\t\t\t<text x=\"2\" y=\"20\" font-size=\"9\" fill=\"currentColor\">A</text>\n\t\t\t\t\t\t\t<path d=\"M13 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M12 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- else if and (eq .Layout \"grid\") (eq .Sort \"name\") (ne .Order \"desc\")}}\n\t\t\t\t\t<a href=\"?sort=name&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon 
icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<text x=\"2\" y=\"10\" font-size=\"9\" fill=\"currentColor\">A</text>\n\t\t\t\t\t\t\t<text x=\"2\" y=\"20\" font-size=\"9\" fill=\"currentColor\">Z</text>\n\t\t\t\t\t\t\t<path d=\"M13 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M12 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- else if and (eq .Layout \"grid\")}}\n\t\t\t\t\t<a href=\"?sort=name&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<text x=\"2\" y=\"20\" font-size=\"9\" fill=\"currentColor\">A</text>\n\t\t\t\t\t\t\t<text x=\"2\" y=\"10\" font-size=\"9\" fill=\"currentColor\">Z</text>\n\t\t\t\t\t\t\t<path d=\"M13 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M12 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- end}}\n\t\t\t\t\t{{- if and (eq .Layout \"grid\") (eq .Sort \"size\") (ne .Order \"asc\")}}\n\t\t\t\t\t<a href=\"?sort=size&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"4\" width=\"4\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"10\" width=\"8\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"16\" 
width=\"12\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<path d=\"M18 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M17 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- else if and (eq .Layout \"grid\") (eq .Sort \"size\") (ne .Order \"desc\")}}\n\t\t\t\t\t<a href=\"?sort=size&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"4\" width=\"12\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"10\" width=\"8\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"16\" width=\"4\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<path d=\"M18 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M17 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- else if and (eq .Layout \"grid\")}}\n\t\t\t\t\t<a href=\"?sort=size&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"4\" width=\"4\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"10\" width=\"8\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<rect x=\"2\" y=\"16\" width=\"12\" height=\"3\" rx=\"0.4\" ry=\"0.4\"></rect>\n\t\t\t\t\t\t\t<path d=\"M18 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M17 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- 
end}}\n\t\t\t\t\t{{- if and (eq .Layout \"grid\") (eq .Sort \"time\") (ne .Order \"asc\")}}\n\t\t\t\t\t<a href=\"?sort=time&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<circle cx=\"9\" cy=\"11\" r=\"8\"></circle>\n\t\t\t\t\t\t\t<line x1=\"9\" y1=\"12\" x2=\"9\" y2=\"7\" stroke-linecap=\"round\"></line>\n\t\t\t\t\t\t\t<line x1=\"9\" y1=\"12\" x2=\"12\" y2=\"12\" stroke-linecap=\"round\"></line>\n\t\t\t\t\t\t\t<path d=\"M20 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M19 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- else if and (eq .Layout \"grid\") (eq .Sort \"time\") (ne .Order \"desc\")}}\n\t\t\t\t\t<a href=\"?sort=time&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<circle cx=\"9\" cy=\"11\" r=\"8\"></circle>\n\t\t\t\t\t\t\t<line x1=\"9\" y1=\"12\" x2=\"9\" y2=\"7\" stroke-linecap=\"round\"></line>\n\t\t\t\t\t\t\t<line x1=\"9\" y1=\"12\" x2=\"12\" y2=\"12\" stroke-linecap=\"round\"></line>\n\t\t\t\t\t\t\t<path d=\"M20 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M19 5l1 -2l1 2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- else if and (eq .Layout \"grid\")}}\n\t\t\t\t\t<a href=\"?sort=time&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}&layout=grid\">\n\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" 
class=\"icon icon-tabler icon-tabler-layout-grid\" width=\"16\" height=\"16\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t<circle cx=\"9\" cy=\"11\" r=\"8\"></circle>\n\t\t\t\t\t\t\t<line x1=\"9\" y1=\"12\" x2=\"9\" y2=\"7\" stroke-linecap=\"round\"></line>\n\t\t\t\t\t\t\t<line x1=\"9\" y1=\"12\" x2=\"12\" y2=\"12\" stroke-linecap=\"round\"></line>\n\t\t\t\t\t\t\t<path d=\"M20 4v12\"></path>\n\t\t\t\t\t\t\t<path d=\"M19 16l1 2l1 -2\"></path>\n\t\t\t\t\t\t</svg>\n\t\t\t\t\t</a>\n\t\t\t\t\t{{- end}}\n\t\t\t\t</div>\n\t\t\t\t<div class='listing{{if eq .Layout \"grid\"}} grid{{end}}'>\n\t\t\t\t{{- if eq .Layout \"grid\"}}\n\t\t\t\t{{- range .Items}}\n\t\t\t\t<div class=\"entry\">\n\t\t\t\t\t<a href=\"{{html .URL}}\" title='{{html (.HumanModTime \"January 2, 2006 at 15:04:05\")}}'>\n\t\t\t\t\t\t{{template \"icon\" .}}\n\t\t\t\t\t\t<div class=\"name\">{{html .Name}}</div>\n\t\t\t\t\t\t<div class=\"size\">{{.HumanSize}}</div>\n\t\t\t\t\t</a>\n\t\t\t\t</div>\n\t\t\t\t{{- end}}\n\t\t\t\t{{- else}}\n\t\t\t\t<table aria-describedby=\"summary\">\n\t\t\t\t\t<thead>\n\t\t\t\t\t<tr>\n\t\t\t\t\t\t<th></th>\n\t\t\t\t\t\t<th>\n\t\t\t\t\t\t\t{{- if and (eq .Sort \"namedirfirst\") (ne .Order \"desc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=namedirfirst&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\" class=\"icon\">\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-up\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M18 14l-6 -6l-6 6h12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else if and (eq .Sort \"namedirfirst\") (ne .Order 
\"asc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=namedirfirst&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\" class=\"icon\">\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-down\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M6 10l6 6l6 -6h-12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else}}\n\t\t\t\t\t\t\t<a href=\"?sort=namedirfirst&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\" class=\"icon sort\">\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-up\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M18 14l-6 -6l-6 6h12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- end}}\n\n\t\t\t\t\t\t\t{{- if and (eq .Sort \"name\") (ne .Order \"desc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=name&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tName\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-up\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M18 14l-6 -6l-6 6h12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else if and (eq .Sort \"name\") (ne .Order 
\"asc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=name&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tName\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-down\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M6 10l6 6l6 -6h-12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else}}\n\t\t\t\t\t\t\t<a href=\"?sort=name&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tName\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- end}}\n\n\t\t\t\t\t\t\t<div class=\"filter-container\">\n\t\t\t\t\t\t\t\t<svg id=\"search-icon\" xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-search\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M10 10m-7 0a7 7 0 1 0 14 0a7 7 0 1 0 -14 0\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M21 21l-6 -6\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t\t<input type=\"search\" placeholder=\"Search\" id=\"filter\">\n\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t</th>\n\t\t\t\t\t\t<th>\n\t\t\t\t\t\t\t{{- if and (eq .Sort \"size\") (ne .Order \"desc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=size&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tSize\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-up\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" 
stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M18 14l-6 -6l-6 6h12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else if and (eq .Sort \"size\") (ne .Order \"asc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=size&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tSize\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-down\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M6 10l6 6l6 -6h-12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else}}\n\t\t\t\t\t\t\t<a href=\"?sort=size&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tSize\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- end}}\n\t\t\t\t\t\t</th>\n\t\t\t\t\t\t<th class=\"hideable\">\n\t\t\t\t\t\t\t{{- if and (eq .Sort \"time\") (ne .Order \"desc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=time&order=desc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tModified\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-up\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M18 14l-6 -6l-6 6h12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else if and (eq .Sort \"time\") (ne .Order \"asc\")}}\n\t\t\t\t\t\t\t<a href=\"?sort=time&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 
.Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tModified\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-caret-down\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M6 10l6 6l6 -6h-12\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- else}}\n\t\t\t\t\t\t\t<a href=\"?sort=time&order=asc{{if ne 0 .Limit}}&limit={{.Limit}}{{end}}{{if ne 0 .Offset}}&offset={{.Offset}}{{end}}\">\n\t\t\t\t\t\t\t\tModified\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t\t{{- end}}\n\t\t\t\t\t\t</th>\n\t\t\t\t\t\t<th class=\"hideable\"></th>\n\t\t\t\t\t</tr>\n\t\t\t\t\t</thead>\n\t\t\t\t\t<tbody>\n\t\t\t\t\t{{- if .CanGoUp}}\n\t\t\t\t\t<tr>\n\t\t\t\t\t\t<td></td>\n\t\t\t\t\t\t<td>\n\t\t\t\t\t\t\t<a href=\"..\">\n\t\t\t\t\t\t\t\t<svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-corner-left-up\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/>\n\t\t\t\t\t\t\t\t\t<path d=\"M18 18h-6a3 3 0 0 1 -3 -3v-10l-4 4m8 0l-4 -4\"/>\n\t\t\t\t\t\t\t\t</svg>\n\t\t\t\t\t\t\t\t<span class=\"go-up\">Up</span>\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t</td>\n\t\t\t\t\t\t<td></td>\n\t\t\t\t\t\t<td class=\"hideable\"></td>\n\t\t\t\t\t\t<td class=\"hideable\"></td>\n\t\t\t\t\t</tr>\n\t\t\t\t\t{{- end}}\n\t\t\t\t\t{{- range .Items}}\n\t\t\t\t\t<tr class=\"file\">\n\t\t\t\t\t\t<td></td>\n\t\t\t\t\t\t<td>\n\t\t\t\t\t\t\t<a href=\"{{html .URL}}\">\n\t\t\t\t\t\t\t\t{{template \"icon\" .}}\n\t\t\t\t\t\t\t\t{{- if not .SymlinkPath}}\n\t\t\t\t\t\t\t\t<span class=\"name\">{{html .Name}}</span>\n\t\t\t\t\t\t\t\t{{- else}}\n\t\t\t\t\t\t\t\t<span 
class=\"name\">{{html .Name}} <svg xmlns=\"http://www.w3.org/2000/svg\" class=\"icon icon-tabler icon-tabler-arrow-narrow-right\" width=\"24\" height=\"24\" viewBox=\"0 0 24 24\" stroke-width=\"2\" stroke=\"currentColor\" fill=\"none\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t\t\t\t\t<path stroke=\"none\" d=\"M0 0h24v24H0z\" fill=\"none\"/><path d=\"M5 12l14 0\" />\n\t\t\t\t\t\t\t\t\t<path d=\"M15 16l4 -4\" />\n\t\t\t\t\t\t\t\t\t<path d=\"M15 8l4 4\" />\n\t\t\t\t\t\t\t\t</svg> {{html .SymlinkPath}}</span>\n\t\t\t\t\t\t\t\t{{- end}}\n\t\t\t\t\t\t\t</a>\n\t\t\t\t\t\t</td>\n\t\t\t\t\t\t{{- if .IsDir}}\n\t\t\t\t\t\t<td>&mdash;</td>\n\t\t\t\t\t\t{{- else}}\n\t\t\t\t\t\t<td class=\"size\" data-size=\"{{.Size}}\">\n\t\t\t\t\t\t\t<div class=\"sizebar\">\n\t\t\t\t\t\t\t\t<div class=\"sizebar-bar\"></div>\n\t\t\t\t\t\t\t\t<div class=\"sizebar-text\">\n\t\t\t\t\t\t\t\t\t{{if .IsSymlink}}↱&nbsp;{{end}}{{.HumanSize}}\n\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t</td>\n\t\t\t\t\t\t{{- end}}\n\t\t\t\t\t\t<td class=\"timestamp hideable\">\n\t\t\t\t\t\t\t<time datetime=\"{{.HumanModTime \"2006-01-02T15:04:05Z\"}}\">{{.HumanModTime \"01/02/2006 03:04:05 PM -07:00\"}}</time>\n\t\t\t\t\t\t</td>\n\t\t\t\t\t\t<td class=\"hideable\"></td>\n\t\t\t\t\t</tr>\n\t\t\t\t\t{{- end}}\n\t\t\t\t\t</tbody>\n\t\t\t\t</table>\n\t\t\t\t{{- end}}\n\t\t\t</div>\n\t\t\t</main>\n\t\t</div>\n\t\t<footer>\n\t\t\tServed with\n\t\t\t<a rel=\"noopener noreferrer\" href=\"https://caddyserver.com\">\n\t\t\t\t<svg class=\"caddy-logo\" viewBox=\"0 0 379 114\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xml:space=\"preserve\" xmlns:serif=\"http://www.serif.com/\" fill-rule=\"evenodd\" clip-rule=\"evenodd\" stroke-linecap=\"round\" stroke-linejoin=\"round\">\n\t\t\t\t\t<g transform=\"matrix(1,0,0,1,-1982.99,-530.985)\">\n\t\t\t\t\t\t<g transform=\"matrix(1.16548,0,0,1.10195,1823.12,393.466)\">\n\t\t\t\t\t\t\t<g 
transform=\"matrix(1,0,0,1,0.233052,1.17986)\">\n\t\t\t\t\t\t\t\t<g id=\"Icon\" transform=\"matrix(0.858013,0,0,0.907485,-3224.99,-1435.83)\">\n\t\t\t\t\t\t\t\t\t<g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(-0.191794,-0.715786,0.715786,-0.191794,4329.14,4673.64)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M3901.56,610.734C3893.53,610.261 3886.06,608.1 3879.2,604.877C3872.24,601.608 3866.04,597.093 3860.8,591.633C3858.71,589.457 3856.76,587.149 3854.97,584.709C3853.2,582.281 3851.57,579.733 3850.13,577.066C3845.89,569.224 3843.21,560.381 3842.89,550.868C3842.57,543.321 3843.64,536.055 3845.94,529.307C3848.37,522.203 3852.08,515.696 3856.83,510.049L3855.79,509.095C3850.39,514.54 3846.02,520.981 3842.9,528.125C3839.84,535.125 3838.03,542.781 3837.68,550.868C3837.34,561.391 3839.51,571.425 3843.79,580.306C3845.27,583.38 3847.03,586.304 3849.01,589.049C3851.01,591.806 3853.24,594.39 3855.69,596.742C3861.75,602.568 3869,607.19 3877.03,610.1C3884.66,612.867 3892.96,614.059 3901.56,613.552L3901.56,610.734Z\" fill=\"rgb(0,144,221)\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(-0.191794,-0.715786,0.715786,-0.191794,4329.14,4673.64)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M3875.69,496.573C3879.62,494.538 3883.8,492.897 3888.2,491.786C3892.49,490.704 3896.96,490.124 3901.56,490.032C3903.82,490.13 3906.03,490.332 3908.21,490.688C3917.13,492.147 3925.19,495.814 3932.31,500.683C3936.13,503.294 3939.59,506.335 3942.81,509.619C3947.09,513.98 3950.89,518.816 3953.85,524.232C3958.2,532.197 3960.96,541.186 3961.32,550.868C3961.61,558.748 3960.46,566.345 3957.88,573.322C3956.09,578.169 3953.7,582.753 3950.66,586.838C3947.22,591.461 3942.96,595.427 3938.27,598.769C3933.66,602.055 3928.53,604.619 3923.09,606.478C3922.37,606.721 3921.6,606.805 3920.93,607.167C3920.42,607.448 3920.14,607.854 3919.69,608.224L3920.37,610.389C3920.98,610.432 3921.47,610.573 3922.07,610.474C3922.86,610.344 3923.55,609.883 3924.28,609.566C3931.99,606.216 3938.82,601.355 
3944.57,595.428C3947.02,592.903 3949.25,590.174 3951.31,587.319C3953.59,584.168 3955.66,580.853 3957.43,577.348C3961.47,569.34 3964.01,560.422 3964.36,550.868C3964.74,540.511 3962.66,530.628 3958.48,521.868C3955.57,515.775 3951.72,510.163 3946.95,505.478C3943.37,501.962 3939.26,498.99 3934.84,496.562C3926.88,492.192 3917.87,489.76 3908.37,489.229C3906.12,489.104 3903.86,489.054 3901.56,489.154C3896.87,489.06 3892.3,489.519 3887.89,490.397C3883.3,491.309 3878.89,492.683 3874.71,494.525L3875.69,496.573Z\" fill=\"rgb(0,144,221)\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t<g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(-3.37109,-0.514565,0.514565,-3.37109,4078.07,1806.88)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M22,12C22,10.903 21.097,10 20,10C19.421,10 18.897,10.251 18.53,10.649C18.202,11.006 18,11.481 18,12C18,13.097 18.903,14 20,14C21.097,14 22,13.097 22,12Z\" fill=\"none\" fill-rule=\"nonzero\" stroke=\"rgb(0,144,221)\" stroke-width=\"1.05px\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(-5.33921,-5.26159,-3.12106,-6.96393,4073.87,1861.55)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M10.315,5.333C10.315,5.333 9.748,5.921 9.03,6.673C7.768,7.995 6.054,9.805 6.054,9.805L6.237,9.86C6.237,9.86 8.045,8.077 9.36,6.771C10.107,6.028 10.689,5.444 10.689,5.444L10.315,5.333Z\" fill=\"rgb(0,144,221)\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t<g id=\"Padlock\" transform=\"matrix(3.11426,0,0,3.11426,3938.31,1737.25)\">\n\t\t\t\t\t\t\t\t\t\t<g>\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M9.876,21L18.162,21C18.625,21 19,20.625 19,20.162L19,11.838C19,11.375 18.625,11 18.162,11L5.838,11C5.375,11 5,11.375 5,11.838L5,16.758\" fill=\"none\" stroke=\"rgb(34,182,56)\" stroke-width=\"1.89px\" stroke-linecap=\"butt\" stroke-linejoin=\"miter\"/>\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M8,11L8,7C8,4.806 9.806,3 12,3C14.194,3 16,4.806 16,7L16,11\" fill=\"none\" fill-rule=\"nonzero\" stroke=\"rgb(34,182,56)\" 
stroke-width=\"1.89px\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t<g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(5.30977,0.697415,-0.697415,5.30977,3852.72,1727.97)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M22,12C22,11.659 21.913,11.337 21.76,11.055C21.421,10.429 20.756,10 20,10C18.903,10 18,10.903 18,12C18,13.097 18.903,14 20,14C21.097,14 22,13.097 22,12Z\" fill=\"none\" fill-rule=\"nonzero\" stroke=\"rgb(0,144,221)\" stroke-width=\"0.98px\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(4.93114,2.49604,1.11018,5.44847,3921.41,1726.72)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M8.902,6.77C8.902,6.77 7.235,8.253 6.027,9.366C5.343,9.996 4.819,10.502 4.819,10.502L5.52,11.164C5.52,11.164 6.021,10.637 6.646,9.951C7.749,8.739 9.219,7.068 9.219,7.068L8.902,6.77Z\" fill=\"rgb(0,144,221)\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t<g id=\"Text\">\n\t\t\t\t\t\t\t\t\t<g id=\"Wordmark\" transform=\"matrix(1.32271,0,0,2.60848,-899.259,-791.691)\">\n\t\t\t\t\t\t\t\t\t\t<g id=\"y\" transform=\"matrix(0.50291,0,0,0.281607,905.533,304.987)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M192.152,286.875L202.629,268.64C187.804,270.106 183.397,265.779 180.143,263.391C176.888,261.004 174.362,257.99 172.563,254.347C170.765,250.705 169.866,246.691 169.866,242.305L169.866,208.107L183.21,208.107L183.21,242.213C183.21,245.188 183.896,247.822 185.268,250.116C186.64,252.41 188.465,254.197 190.743,255.475C193.022,256.754 195.501,257.393 198.182,257.393C200.894,257.393 203.393,256.75 205.68,255.463C207.966,254.177 209.799,252.391 211.178,250.105C212.558,247.818 213.248,245.188 213.248,242.213L213.248,208.107L226.545,208.107L226.545,242.305C226.545,246.707 225.378,258.46 218.079,268.64C215.735,271.909 207.835,286.875 207.835,286.875L192.152,286.875Z\" fill=\"rgb(47,47,47)\" fill-rule=\"nonzero\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t<g id=\"add\" 
transform=\"matrix(0.525075,0,0,0.281607,801.871,304.987)\">\n\t\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(116.242,0,0,116.242,161.846,267.39)\">\n\t\t\t\t\t\t\t\t\t\t\t\t<path d=\"M0.276,0.012C0.227,0.012 0.186,0 0.15,-0.024C0.115,-0.048 0.088,-0.08 0.069,-0.12C0.05,-0.161 0.04,-0.205 0.04,-0.254C0.04,-0.305 0.051,-0.35 0.072,-0.39C0.094,-0.431 0.125,-0.463 0.165,-0.487C0.205,-0.51 0.254,-0.522 0.31,-0.522C0.366,-0.522 0.413,-0.51 0.452,-0.486C0.491,-0.463 0.521,-0.431 0.542,-0.39C0.562,-0.35 0.573,-0.305 0.573,-0.256L0.573,-0L0.458,-0L0.458,-0.095L0.456,-0.095C0.446,-0.076 0.433,-0.058 0.417,-0.042C0.401,-0.026 0.381,-0.013 0.358,-0.003C0.335,0.007 0.307,0.012 0.276,0.012ZM0.307,-0.086C0.337,-0.086 0.363,-0.093 0.386,-0.108C0.408,-0.123 0.426,-0.144 0.438,-0.17C0.45,-0.195 0.456,-0.224 0.456,-0.256C0.456,-0.288 0.45,-0.317 0.438,-0.342C0.426,-0.367 0.409,-0.387 0.387,-0.402C0.365,-0.417 0.338,-0.424 0.308,-0.424C0.276,-0.424 0.249,-0.417 0.226,-0.402C0.204,-0.387 0.186,-0.366 0.174,-0.341C0.162,-0.315 0.156,-0.287 0.156,-0.255C0.156,-0.224 0.162,-0.195 0.174,-0.169C0.186,-0.144 0.203,-0.123 0.226,-0.108C0.248,-0.093 0.275,-0.086 0.307,-0.086Z\" fill=\"rgb(47,47,47)\" fill-rule=\"nonzero\"/>\n\t\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(116.242,0,0,116.242,226.592,267.39)\">\n\t\t\t\t\t\t\t\t\t\t\t\t<path d=\"M0.306,0.012C0.265,0.012 0.229,0.006 0.196,-0.008C0.163,-0.021 0.135,-0.039 0.112,-0.064C0.089,-0.088 0.071,-0.117 0.059,-0.151C0.046,-0.185 0.04,-0.222 0.04,-0.263C0.04,-0.315 0.051,-0.36 0.072,-0.399C0.093,-0.437 0.122,-0.468 0.159,-0.489C0.196,-0.511 0.239,-0.522 0.287,-0.522C0.311,-0.522 0.333,-0.518 0.355,-0.511C0.377,-0.504 0.396,-0.493 0.413,-0.48C0.431,-0.466 0.445,-0.451 0.455,-0.433L0.456,-0.433L0.456,-0.73L0.571,-0.73L0.571,-0.261C0.571,-0.205 0.56,-0.156 0.537,-0.115C0.515,-0.074 0.484,-0.043 0.444,-0.021C0.405,0.001 0.358,0.012 0.306,0.012ZM0.306,-0.086C0.335,-0.086 0.361,-0.093 0.384,-0.107C0.406,-0.122 0.423,-0.141 
0.436,-0.167C0.448,-0.192 0.455,-0.221 0.455,-0.255C0.455,-0.288 0.448,-0.317 0.436,-0.343C0.423,-0.368 0.406,-0.388 0.383,-0.402C0.361,-0.417 0.335,-0.424 0.305,-0.424C0.276,-0.424 0.251,-0.417 0.228,-0.402C0.206,-0.387 0.188,-0.368 0.175,-0.342C0.163,-0.317 0.156,-0.288 0.156,-0.255C0.156,-0.222 0.163,-0.193 0.175,-0.167C0.188,-0.142 0.206,-0.122 0.229,-0.108C0.251,-0.093 0.277,-0.086 0.306,-0.086Z\" fill=\"rgb(47,47,47)\" fill-rule=\"nonzero\"/>\n\t\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(116.242,0,0,116.242,290.293,267.39)\">\n\t\t\t\t\t\t\t\t\t\t\t\t<path d=\"M0.306,0.012C0.265,0.012 0.229,0.006 0.196,-0.008C0.163,-0.021 0.135,-0.039 0.112,-0.064C0.089,-0.088 0.071,-0.117 0.059,-0.151C0.046,-0.185 0.04,-0.222 0.04,-0.263C0.04,-0.315 0.051,-0.36 0.072,-0.399C0.093,-0.437 0.122,-0.468 0.159,-0.489C0.196,-0.511 0.239,-0.522 0.287,-0.522C0.311,-0.522 0.333,-0.518 0.355,-0.511C0.377,-0.504 0.396,-0.493 0.413,-0.48C0.431,-0.466 0.445,-0.451 0.455,-0.433L0.456,-0.433L0.456,-0.73L0.571,-0.73L0.571,-0.261C0.571,-0.205 0.56,-0.156 0.537,-0.115C0.515,-0.074 0.484,-0.043 0.444,-0.021C0.405,0.001 0.358,0.012 0.306,0.012ZM0.306,-0.086C0.335,-0.086 0.361,-0.093 0.384,-0.107C0.406,-0.122 0.423,-0.141 0.436,-0.167C0.448,-0.192 0.455,-0.221 0.455,-0.255C0.455,-0.288 0.448,-0.317 0.436,-0.343C0.423,-0.368 0.406,-0.388 0.383,-0.402C0.361,-0.417 0.335,-0.424 0.305,-0.424C0.276,-0.424 0.251,-0.417 0.228,-0.402C0.206,-0.387 0.188,-0.368 0.175,-0.342C0.163,-0.317 0.156,-0.288 0.156,-0.255C0.156,-0.222 0.163,-0.193 0.175,-0.167C0.188,-0.142 0.206,-0.122 0.229,-0.108C0.251,-0.093 0.277,-0.086 0.306,-0.086Z\" fill=\"rgb(47,47,47)\" fill-rule=\"nonzero\"/>\n\t\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t<g id=\"c\" transform=\"matrix(-0.0716462,0.31304,-0.583685,-0.0384251,1489.76,-444.051)\">\n\t\t\t\t\t\t\t\t\t\t\t<path d=\"M2668.11,700.4C2666.79,703.699 2666.12,707.216 2666.12,710.766C2666.12,726.268 2678.71,738.854 
2694.21,738.854C2709.71,738.854 2722.3,726.268 2722.3,710.766C2722.3,704.111 2719.93,697.672 2715.63,692.597L2707.63,699.378C2710.33,702.559 2711.57,706.602 2711.81,710.766C2712.2,717.38 2706.61,724.52 2697.27,726.637C2683.9,728.581 2676.61,720.482 2676.61,710.766C2676.61,708.541 2677.03,706.336 2677.85,704.269L2668.11,700.4Z\" fill=\"rgb(46,46,46)\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t<g id=\"R\" transform=\"matrix(0.426446,0,0,0.451034,-1192.44,-722.167)\">\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(1,0,0,1,-0.10786,0.450801)\">\n\t\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(12.1247,0,0,12.1247,3862.61,1929.9)\">\n\t\t\t\t\t\t\t\t\t\t\t\t<path d=\"M0.073,-0L0.073,-0.7L0.383,-0.7C0.428,-0.7 0.469,-0.69 0.506,-0.67C0.543,-0.651 0.572,-0.623 0.594,-0.588C0.616,-0.553 0.627,-0.512 0.627,-0.465C0.627,-0.418 0.615,-0.377 0.592,-0.342C0.569,-0.306 0.539,-0.279 0.501,-0.259L0.57,-0.128C0.574,-0.12 0.579,-0.115 0.584,-0.111C0.59,-0.107 0.596,-0.106 0.605,-0.106L0.664,-0.106L0.664,-0L0.587,-0C0.56,-0 0.535,-0.007 0.514,-0.02C0.493,-0.034 0.476,-0.052 0.463,-0.075L0.381,-0.232C0.375,-0.231 0.368,-0.231 0.361,-0.231C0.354,-0.231 0.347,-0.231 0.34,-0.231L0.192,-0.231L0.192,-0L0.073,-0ZM0.192,-0.336L0.368,-0.336C0.394,-0.336 0.417,-0.341 0.438,-0.351C0.459,-0.361 0.476,-0.376 0.489,-0.396C0.501,-0.415 0.507,-0.438 0.507,-0.465C0.507,-0.492 0.501,-0.516 0.488,-0.535C0.475,-0.554 0.459,-0.569 0.438,-0.579C0.417,-0.59 0.394,-0.595 0.369,-0.595L0.192,-0.595L0.192,-0.336Z\" fill=\"rgb(46,46,46)\" fill-rule=\"nonzero\"/>\n\t\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t\t<g transform=\"matrix(1,0,0,1,0.278569,0.101881)\">\n\t\t\t\t\t\t\t\t\t\t\t<circle cx=\"3866.43\" cy=\"1926.14\" r=\"8.923\" fill=\"none\" stroke=\"rgb(46,46,46)\" stroke-width=\"2px\" stroke-linecap=\"butt\" 
stroke-linejoin=\"miter\"/>\n\t\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t\t</g>\n\t\t\t\t\t\t</g>\n\t\t\t\t\t</g>\n\t\t\t\t</svg>\n\t\t\t</a>\n\t\t</footer>\n\n\t\t<script {{ $nonceAttribute }}>\n\t\t\t// @license magnet:?xt=urn:btih:8e4f440f4c65981c5bf93c76d35135ba5064d8b7&dn=apache-2.0.txt Apache-2.0\n\t\t\tconst filterEl = document.getElementById('filter');\n\t\t\tfilterEl?.focus({ preventScroll: true });\n\n\t\t\tfunction initPage() {\n\t\t\t\t// populate and evaluate filter\n\t\t\t\tif (!filterEl?.value) {\n\t\t\t\t\tconst filterParam = new URL(window.location.href).searchParams.get('filter');\n\t\t\t\t\tif (filterParam) {\n\t\t\t\t\t\tfilterEl.value = filterParam;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfilter();\n\n\t\t\t\t// fill in size bars\n\t\t\t\tlet largest = 0;\n\t\t\t\tdocument.querySelectorAll('.size').forEach(el => {\n\t\t\t\t\tlargest = Math.max(largest, Number(el.dataset.size));\n\t\t\t\t});\n\t\t\t\tdocument.querySelectorAll('.size').forEach(el => {\n\t\t\t\t\tconst size = Number(el.dataset.size);\n\t\t\t\t\tconst sizebar = el.querySelector('.sizebar-bar');\n\t\t\t\t\tif (sizebar) {\n\t\t\t\t\t\tsizebar.style.width = `${size/largest * 100}%`;\n\t\t\t\t\t}\n\t\t\t\t});\n\t\t\t}\n\n\t\t\tfunction filter() {\n\t\t\t\tif (!filterEl) return;\n\t\t\t\tconst q = filterEl.value.trim().toLowerCase();\n\t\t\t\tdocument.querySelectorAll('tr.file').forEach(function(el) {\n\t\t\t\t\tif (!q) {\n\t\t\t\t\t\tel.style.display = '';\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tconst nameEl = el.querySelector('.name');\n\t\t\t\t\tconst nameVal = nameEl.textContent.trim().toLowerCase();\n\t\t\t\t\tif (nameVal.indexOf(q) !== -1) {\n\t\t\t\t\t\tel.style.display = '';\n\t\t\t\t\t} else {\n\t\t\t\t\t\tel.style.display = 'none';\n\t\t\t\t\t}\n\t\t\t\t});\n\t\t\t}\n\n\t\t\tconst filterElem = document.getElementById(\"filter\");\n\t\t\tif (filterElem) {\n\t\t\t\tfilterElem.addEventListener(\"keyup\", 
filter);\n\t\t\t}\n\n\t\t\tdocument.getElementById(\"layout-list\").addEventListener(\"click\", function() {\n\t\t\t\tqueryParam('layout', '');\n\t\t\t});\n\t\t\tdocument.getElementById(\"layout-grid\").addEventListener(\"click\", function() {\n\t\t\t\tqueryParam('layout', 'grid');\n\t\t\t});\n\n\t\t\twindow.addEventListener(\"load\", initPage);\n\n\t\t\tfunction queryParam(k, v) {\n\t\t\t\tconst qs = new URLSearchParams(window.location.search);\n\t\t\t\tif (!v) {\n\t\t\t\t\tqs.delete(k);\n\t\t\t\t} else {\n\t\t\t\t\tqs.set(k, v);\n\t\t\t\t}\n\t\t\t\tconst qsStr = qs.toString();\n\t\t\t\tif (qsStr) {\n\t\t\t\t\twindow.location.search = qsStr;\n\t\t\t\t} else {\n\t\t\t\t\twindow.location = window.location.pathname;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfunction localizeDatetime(e, index, ar) {\n\t\t\t\tif (e.textContent === undefined) {\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tvar d = new Date(e.getAttribute('datetime'));\n\t\t\t\tif (isNaN(d)) {\n\t\t\t\t\td = new Date(e.textContent);\n\t\t\t\t\tif (isNaN(d)) {\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\te.textContent = d.toLocaleString();\n\t\t\t}\n\t\t\tvar timeList = Array.prototype.slice.call(document.getElementsByTagName(\"time\"));\n\t\t\ttimeList.forEach(localizeDatetime);\n\t\t\t// @license-end\n\t\t</script>\n\t</body>\n</html>\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/browsetplcontext.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"context\"\n\t\"io/fs\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/dustin/go-humanize\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc (fsrv *FileServer) directoryListing(ctx context.Context, fileSystem fs.FS, parentModTime time.Time, entries []fs.DirEntry, canGoUp bool, root, urlPath string, repl *caddy.Replacer) *browseTemplateContext {\n\tfilesToHide := fsrv.transformHidePaths(repl)\n\n\tname, _ := url.PathUnescape(urlPath)\n\n\ttplCtx := &browseTemplateContext{\n\t\tName:         path.Base(name),\n\t\tPath:         urlPath,\n\t\tCanGoUp:      canGoUp,\n\t\tlastModified: parentModTime,\n\t}\n\n\tfor _, entry := range entries {\n\t\tif err := ctx.Err(); err != nil {\n\t\t\tbreak\n\t\t}\n\n\t\tname := entry.Name()\n\n\t\tif fileHidden(name, filesToHide) {\n\t\t\tcontinue\n\t\t}\n\n\t\tinfo, err := entry.Info()\n\t\tif err != nil {\n\t\t\tif c := fsrv.logger.Check(zapcore.ErrorLevel, \"could not get info about directory entry\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"name\", entry.Name()), zap.String(\"root\", root))\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// keep track of the most 
recently modified item in the listing\n\t\tmodTime := info.ModTime()\n\t\tif tplCtx.lastModified.IsZero() || modTime.After(tplCtx.lastModified) {\n\t\t\ttplCtx.lastModified = modTime\n\t\t}\n\n\t\tisDir := entry.IsDir() || fsrv.isSymlinkTargetDir(fileSystem, info, root, urlPath)\n\n\t\t// add the slash after the escape of path to avoid escaping the slash as well\n\t\tif isDir {\n\t\t\tname += \"/\"\n\t\t\ttplCtx.NumDirs++\n\t\t} else {\n\t\t\ttplCtx.NumFiles++\n\t\t}\n\n\t\tsize := info.Size()\n\n\t\tif !isDir {\n\t\t\t// increase the total by the symlink's size, not the target's size,\n\t\t\t// by incrementing before we follow the symlink\n\t\t\ttplCtx.TotalFileSize += size\n\t\t}\n\n\t\tfileIsSymlink := isSymlink(info)\n\t\tsymlinkPath := \"\"\n\t\tif fileIsSymlink {\n\t\t\tpath := caddyhttp.SanitizedPathJoin(root, path.Join(urlPath, info.Name()))\n\t\t\tfileInfo, err := fs.Stat(fileSystem, path)\n\t\t\tif err == nil {\n\t\t\t\tsize = fileInfo.Size()\n\t\t\t}\n\n\t\t\tif fsrv.Browse.RevealSymlinks {\n\t\t\t\tsymLinkTarget, err := filepath.EvalSymlinks(path)\n\t\t\t\tif err == nil {\n\t\t\t\t\tsymlinkPath = symLinkTarget\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// An error most likely means the symlink target doesn't exist,\n\t\t\t// which isn't entirely unusual and shouldn't fail the listing.\n\t\t\t// In this case, just use the size of the symlink itself, which\n\t\t\t// was already set above.\n\t\t}\n\n\t\tif !isDir {\n\t\t\t// increase the total including the symlink target's size\n\t\t\ttplCtx.TotalFileSizeFollowingSymlinks += size\n\t\t}\n\n\t\tu := url.URL{Path: \"./\" + name} // prepend with \"./\" to fix paths with ':' in the name\n\n\t\ttplCtx.Items = append(tplCtx.Items, fileInfo{\n\t\t\tIsDir:       isDir,\n\t\t\tIsSymlink:   fileIsSymlink,\n\t\t\tName:        name,\n\t\t\tSize:        size,\n\t\t\tURL:         u.String(),\n\t\t\tModTime:     modTime.UTC(),\n\t\t\tMode:        info.Mode(),\n\t\t\tTpl:         tplCtx, // a reference up to the template context is 
useful\n\t\t\tSymlinkPath: symlinkPath,\n\t\t})\n\t}\n\n\t// this time is used for the Last-Modified header and comparing If-Modified-Since from client\n\t// both are expected to be in UTC, so we convert to UTC here\n\t// see: https://github.com/caddyserver/caddy/issues/6828\n\ttplCtx.lastModified = tplCtx.lastModified.UTC()\n\treturn tplCtx\n}\n\n// browseTemplateContext provides the template context for directory listings.\ntype browseTemplateContext struct {\n\t// The name of the directory (the last element of the path).\n\tName string `json:\"name\"`\n\n\t// The full path of the request.\n\tPath string `json:\"path\"`\n\n\t// Whether the parent directory is browsable.\n\tCanGoUp bool `json:\"can_go_up\"`\n\n\t// The items (files and folders) in the path.\n\tItems []fileInfo `json:\"items,omitempty\"`\n\n\t// If ≠0 then Items starting from that many elements.\n\tOffset int `json:\"offset,omitempty\"`\n\n\t// If ≠0 then Items have been limited to that many elements.\n\tLimit int `json:\"limit,omitempty\"`\n\n\t// The number of directories in the listing.\n\tNumDirs int `json:\"num_dirs\"`\n\n\t// The number of files (items that aren't directories) in the listing.\n\tNumFiles int `json:\"num_files\"`\n\n\t// The total size of all files in the listing. Only includes the\n\t// size of the files themselves, not the size of symlink targets\n\t// (i.e. 
the calculation of this value does not follow symlinks).\n\tTotalFileSize int64 `json:\"total_file_size\"`\n\n\t// The total size of all files in the listing, including the\n\t// size of the files targeted by symlinks.\n\tTotalFileSizeFollowingSymlinks int64 `json:\"total_file_size_following_symlinks\"`\n\n\t// Sort column used\n\tSort string `json:\"sort,omitempty\"`\n\n\t// Sorting order\n\tOrder string `json:\"order,omitempty\"`\n\n\t// Display format (list or grid)\n\tLayout string `json:\"layout,omitempty\"`\n\n\t// The most recent file modification date in the listing.\n\t// Used for HTTP header purposes.\n\tlastModified time.Time\n}\n\n// Breadcrumbs returns l.Path where every element maps\n// the link to the text to display.\nfunc (l browseTemplateContext) Breadcrumbs() []crumb {\n\tif len(l.Path) == 0 {\n\t\treturn []crumb{}\n\t}\n\n\t// skip trailing slash\n\tlpath := l.Path\n\tif lpath[len(lpath)-1] == '/' {\n\t\tlpath = lpath[:len(lpath)-1]\n\t}\n\tparts := strings.Split(lpath, \"/\")\n\tresult := make([]crumb, len(parts))\n\tfor i, p := range parts {\n\t\tif i == 0 && p == \"\" {\n\t\t\tp = \"/\"\n\t\t}\n\t\t// the directory name could include an encoded slash in its path,\n\t\t// so the item name should be unescaped in the loop rather than unescaping the\n\t\t// entire path outside the loop.\n\t\tp, _ = url.PathUnescape(p)\n\t\tlnk := strings.Repeat(\"../\", len(parts)-i-1)\n\t\tresult[i] = crumb{Link: lnk, Text: p}\n\t}\n\n\treturn result\n}\n\nfunc (l *browseTemplateContext) applySortAndLimit(sortParam, orderParam, limitParam string, offsetParam string) {\n\tl.Sort = sortParam\n\tl.Order = orderParam\n\n\tif l.Order == \"desc\" {\n\t\tswitch l.Sort {\n\t\tcase sortByName:\n\t\t\tsort.Sort(sort.Reverse(byName(*l)))\n\t\tcase sortByNameDirFirst:\n\t\t\tsort.Sort(sort.Reverse(byNameDirFirst(*l)))\n\t\tcase sortBySize:\n\t\t\tsort.Sort(sort.Reverse(bySize(*l)))\n\t\tcase sortByTime:\n\t\t\tsort.Sort(sort.Reverse(byTime(*l)))\n\t\t}\n\t} else 
{\n\t\tswitch l.Sort {\n\t\tcase sortByName:\n\t\t\tsort.Sort(byName(*l))\n\t\tcase sortByNameDirFirst:\n\t\t\tsort.Sort(byNameDirFirst(*l))\n\t\tcase sortBySize:\n\t\t\tsort.Sort(bySize(*l))\n\t\tcase sortByTime:\n\t\t\tsort.Sort(byTime(*l))\n\t\t}\n\t}\n\n\tif offsetParam != \"\" {\n\t\toffset, _ := strconv.Atoi(offsetParam)\n\t\tif offset > 0 && offset <= len(l.Items) {\n\t\t\tl.Items = l.Items[offset:]\n\t\t\tl.Offset = offset\n\t\t}\n\t}\n\n\tif limitParam != \"\" {\n\t\tlimit, _ := strconv.Atoi(limitParam)\n\n\t\tif limit > 0 && limit <= len(l.Items) {\n\t\t\tl.Items = l.Items[:limit]\n\t\t\tl.Limit = limit\n\t\t}\n\t}\n}\n\n// crumb represents part of a breadcrumb menu,\n// pairing a link with the text to display.\ntype crumb struct {\n\tLink, Text string\n}\n\n// fileInfo contains serializable information\n// about a file or directory.\ntype fileInfo struct {\n\tName        string      `json:\"name\"`\n\tSize        int64       `json:\"size\"`\n\tURL         string      `json:\"url\"`\n\tModTime     time.Time   `json:\"mod_time\"`\n\tMode        os.FileMode `json:\"mode\"`\n\tIsDir       bool        `json:\"is_dir\"`\n\tIsSymlink   bool        `json:\"is_symlink\"`\n\tSymlinkPath string      `json:\"symlink_path,omitempty\"`\n\n\t// a pointer to the template context is useful inside nested templates\n\tTpl *browseTemplateContext `json:\"-\"`\n}\n\n// HasExt returns true if the filename has any of the given suffixes, case-insensitive.\nfunc (fi fileInfo) HasExt(exts ...string) bool {\n\treturn slices.ContainsFunc(exts, func(ext string) bool {\n\t\treturn strings.HasSuffix(strings.ToLower(fi.Name), strings.ToLower(ext))\n\t})\n}\n\n// HumanSize returns the size of the file as a\n// human-readable string in IEC format (i.e.\n// power of 2 or base 1024).\nfunc (fi fileInfo) HumanSize() string {\n\treturn humanize.IBytes(uint64(fi.Size))\n}\n\n// HumanTotalFileSize returns the total size of all files\n// in the listing as a human-readable string in IEC 
format\n// (i.e. power of 2 or base 1024).\nfunc (btc browseTemplateContext) HumanTotalFileSize() string {\n\treturn humanize.IBytes(uint64(btc.TotalFileSize))\n}\n\n// HumanTotalFileSizeFollowingSymlinks is the same as HumanTotalFileSize\n// except the returned value reflects the size of symlink targets.\nfunc (btc browseTemplateContext) HumanTotalFileSizeFollowingSymlinks() string {\n\treturn humanize.IBytes(uint64(btc.TotalFileSizeFollowingSymlinks))\n}\n\n// HumanModTime returns the modified time of the file\n// as a human-readable string given by format.\nfunc (fi fileInfo) HumanModTime(format string) string {\n\treturn fi.ModTime.Format(format)\n}\n\ntype (\n\tbyName         browseTemplateContext\n\tbyNameDirFirst browseTemplateContext\n\tbySize         browseTemplateContext\n\tbyTime         browseTemplateContext\n)\n\nfunc (l byName) Len() int      { return len(l.Items) }\nfunc (l byName) Swap(i, j int) { l.Items[i], l.Items[j] = l.Items[j], l.Items[i] }\n\nfunc (l byName) Less(i, j int) bool {\n\treturn strings.ToLower(l.Items[i].Name) < strings.ToLower(l.Items[j].Name)\n}\n\nfunc (l byNameDirFirst) Len() int      { return len(l.Items) }\nfunc (l byNameDirFirst) Swap(i, j int) { l.Items[i], l.Items[j] = l.Items[j], l.Items[i] }\n\nfunc (l byNameDirFirst) Less(i, j int) bool {\n\t// sort by name if both are dir or file\n\tif l.Items[i].IsDir == l.Items[j].IsDir {\n\t\treturn strings.ToLower(l.Items[i].Name) < strings.ToLower(l.Items[j].Name)\n\t}\n\t// sort dir ahead of file\n\treturn l.Items[i].IsDir\n}\n\nfunc (l bySize) Len() int      { return len(l.Items) }\nfunc (l bySize) Swap(i, j int) { l.Items[i], l.Items[j] = l.Items[j], l.Items[i] }\n\nfunc (l bySize) Less(i, j int) bool {\n\tconst directoryOffset = -1 << 31 // = math.MinInt32\n\n\tiSize, jSize := l.Items[i].Size, l.Items[j].Size\n\n\t// directory sizes depend on the file system; to\n\t// provide a consistent experience, put them up front\n\t// and sort them by name\n\tif l.Items[i].IsDir 
{\n\t\tiSize = directoryOffset\n\t}\n\tif l.Items[j].IsDir {\n\t\tjSize = directoryOffset\n\t}\n\tif l.Items[i].IsDir && l.Items[j].IsDir {\n\t\treturn strings.ToLower(l.Items[i].Name) < strings.ToLower(l.Items[j].Name)\n\t}\n\n\treturn iSize < jSize\n}\n\nfunc (l byTime) Len() int           { return len(l.Items) }\nfunc (l byTime) Swap(i, j int)      { l.Items[i], l.Items[j] = l.Items[j], l.Items[i] }\nfunc (l byTime) Less(i, j int) bool { return l.Items[i].ModTime.Before(l.Items[j].ModTime) }\n\nconst (\n\tsortByName         = \"name\"\n\tsortByNameDirFirst = \"namedirfirst\"\n\tsortBySize         = \"size\"\n\tsortByTime         = \"time\"\n\n\tsortOrderAsc  = \"asc\"\n\tsortOrderDesc = \"desc\"\n)\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/browsetplcontext_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"testing\"\n)\n\nfunc TestBreadcrumbs(t *testing.T) {\n\ttestdata := []struct {\n\t\tpath     string\n\t\texpected []crumb\n\t}{\n\t\t{\"\", []crumb{}},\n\t\t{\"/\", []crumb{{Text: \"/\"}}},\n\t\t{\"/foo/\", []crumb{\n\t\t\t{Link: \"../\", Text: \"/\"},\n\t\t\t{Link: \"\", Text: \"foo\"},\n\t\t}},\n\t\t{\"/foo/bar/\", []crumb{\n\t\t\t{Link: \"../../\", Text: \"/\"},\n\t\t\t{Link: \"../\", Text: \"foo\"},\n\t\t\t{Link: \"\", Text: \"bar\"},\n\t\t}},\n\t\t{\"/foo bar/\", []crumb{\n\t\t\t{Link: \"../\", Text: \"/\"},\n\t\t\t{Link: \"\", Text: \"foo bar\"},\n\t\t}},\n\t\t{\"/foo bar/baz/\", []crumb{\n\t\t\t{Link: \"../../\", Text: \"/\"},\n\t\t\t{Link: \"../\", Text: \"foo bar\"},\n\t\t\t{Link: \"\", Text: \"baz\"},\n\t\t}},\n\t\t{\"/100%25 test coverage/is a lie/\", []crumb{\n\t\t\t{Link: \"../../\", Text: \"/\"},\n\t\t\t{Link: \"../\", Text: \"100% test coverage\"},\n\t\t\t{Link: \"\", Text: \"is a lie\"},\n\t\t}},\n\t\t{\"/AC%2FDC/\", []crumb{\n\t\t\t{Link: \"../\", Text: \"/\"},\n\t\t\t{Link: \"\", Text: \"AC/DC\"},\n\t\t}},\n\t\t{\"/foo/%2e%2e%2f/bar\", []crumb{\n\t\t\t{Link: \"../../../\", Text: \"/\"},\n\t\t\t{Link: \"../../\", Text: \"foo\"},\n\t\t\t{Link: \"../\", Text: \"../\"},\n\t\t\t{Link: \"\", Text: \"bar\"},\n\t\t}},\n\t\t{\"/foo/../bar\", []crumb{\n\t\t\t{Link: \"../../../\", Text: 
\"/\"},\n\t\t\t{Link: \"../../\", Text: \"foo\"},\n\t\t\t{Link: \"../\", Text: \"..\"},\n\t\t\t{Link: \"\", Text: \"bar\"},\n\t\t}},\n\t\t{\"foo/bar/baz\", []crumb{\n\t\t\t{Link: \"../../\", Text: \"foo\"},\n\t\t\t{Link: \"../\", Text: \"bar\"},\n\t\t\t{Link: \"\", Text: \"baz\"},\n\t\t}},\n\t\t{\"/qux/quux/corge/\", []crumb{\n\t\t\t{Link: \"../../../\", Text: \"/\"},\n\t\t\t{Link: \"../../\", Text: \"qux\"},\n\t\t\t{Link: \"../\", Text: \"quux\"},\n\t\t\t{Link: \"\", Text: \"corge\"},\n\t\t}},\n\t\t{\"/مجلد/\", []crumb{\n\t\t\t{Link: \"../\", Text: \"/\"},\n\t\t\t{Link: \"\", Text: \"مجلد\"},\n\t\t}},\n\t\t{\"/مجلد-1/مجلد-2\", []crumb{\n\t\t\t{Link: \"../../\", Text: \"/\"},\n\t\t\t{Link: \"../\", Text: \"مجلد-1\"},\n\t\t\t{Link: \"\", Text: \"مجلد-2\"},\n\t\t}},\n\t\t{\"/مجلد%2F1\", []crumb{\n\t\t\t{Link: \"../\", Text: \"/\"},\n\t\t\t{Link: \"\", Text: \"مجلد/1\"},\n\t\t}},\n\t}\n\n\tfor testNum, d := range testdata {\n\t\tl := browseTemplateContext{Path: d.path}\n\t\tactual := l.Breadcrumbs()\n\t\tif len(actual) != len(d.expected) {\n\t\t\tt.Errorf(\"Test %d: Got %d components but expected %d; got: %+v\", testNum, len(actual), len(d.expected), actual)\n\t\t\tcontinue\n\t\t}\n\t\tfor i, c := range actual {\n\t\t\tif c != d.expected[i] {\n\t\t\t\tt.Errorf(\"Test %d crumb %d: got %#v but expected %#v at index %d\", testNum, i, c, d.expected[i], i)\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/rewrite\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"file_server\", parseCaddyfile)\n\thttpcaddyfile.RegisterDirective(\"try_files\", parseTryFiles)\n}\n\n// parseCaddyfile parses the file_server directive.\n// See UnmarshalCaddyfile for the syntax.\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tfsrv := new(FileServer)\n\terr := fsrv.UnmarshalCaddyfile(h.Dispenser)\n\tif err != nil {\n\t\treturn fsrv, err\n\t}\n\terr = fsrv.FinalizeUnmarshalCaddyfile(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn fsrv, err\n}\n\n// UnmarshalCaddyfile parses the file_server directive. 
It enables\n// the static file server and configures it with this syntax:\n//\n//\tfile_server [<matcher>] [browse] {\n//\t    fs            <filesystem>\n//\t    root          <path>\n//\t    hide          <files...>\n//\t    index         <files...>\n//\t    browse        [<template_file>]\n//\t    precompressed <formats...>\n//\t    status        <status>\n//\t    disable_canonical_uris\n//\t}\n//\n// The FinalizeUnmarshalCaddyfile method should be called after this\n// to finalize setup of hidden Caddyfiles.\nfunc (fsrv *FileServer) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\n\targs := d.RemainingArgs()\n\tswitch len(args) {\n\tcase 0:\n\tcase 1:\n\t\tif args[0] != \"browse\" {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tfsrv.Browse = new(Browse)\n\tdefault:\n\t\treturn d.ArgErr()\n\t}\n\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"fs\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif fsrv.FileSystem != \"\" {\n\t\t\t\treturn d.Err(\"file system already specified\")\n\t\t\t}\n\t\t\tfsrv.FileSystem = d.Val()\n\n\t\tcase \"hide\":\n\t\t\tfsrv.Hide = d.RemainingArgs()\n\t\t\tif len(fsrv.Hide) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"index\":\n\t\t\tfsrv.IndexNames = d.RemainingArgs()\n\t\t\tif len(fsrv.IndexNames) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"root\":\n\t\t\tif !d.Args(&fsrv.Root) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"browse\":\n\t\t\tif fsrv.Browse != nil {\n\t\t\t\treturn d.Err(\"browsing is already configured\")\n\t\t\t}\n\t\t\tfsrv.Browse = new(Browse)\n\t\t\td.Args(&fsrv.Browse.TemplateFile)\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tswitch d.Val() {\n\t\t\t\tcase \"reveal_symlinks\":\n\t\t\t\t\tif fsrv.Browse.RevealSymlinks {\n\t\t\t\t\t\treturn d.Err(\"Symlinks path reveal is already enabled\")\n\t\t\t\t\t}\n\t\t\t\t\tfsrv.Browse.RevealSymlinks = true\n\t\t\t\tcase 
\"sort\":\n\t\t\t\t\tfor d.NextArg() {\n\t\t\t\t\t\tdVal := d.Val()\n\t\t\t\t\t\tswitch dVal {\n\t\t\t\t\t\tcase sortByName, sortByNameDirFirst, sortBySize, sortByTime, sortOrderAsc, sortOrderDesc:\n\t\t\t\t\t\t\tfsrv.Browse.SortOptions = append(fsrv.Browse.SortOptions, dVal)\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\treturn d.Errf(\"unknown sort option '%s'\", dVal)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\tcase \"file_limit\":\n\t\t\t\t\tfileLimit := d.RemainingArgs()\n\t\t\t\t\tif len(fileLimit) != 1 {\n\t\t\t\t\t\treturn d.Err(\"file_limit should have an integer value\")\n\t\t\t\t\t}\n\t\t\t\t\tval, _ := strconv.Atoi(fileLimit[0])\n\t\t\t\t\tif fsrv.Browse.FileLimit != 0 {\n\t\t\t\t\t\treturn d.Err(\"file_limit is already enabled\")\n\t\t\t\t\t}\n\t\t\t\t\tfsrv.Browse.FileLimit = val\n\t\t\t\tdefault:\n\t\t\t\t\treturn d.Errf(\"unknown subdirective '%s'\", d.Val())\n\t\t\t\t}\n\t\t\t}\n\n\t\tcase \"precompressed\":\n\t\t\tfsrv.PrecompressedOrder = d.RemainingArgs()\n\t\t\tif len(fsrv.PrecompressedOrder) == 0 {\n\t\t\t\tfsrv.PrecompressedOrder = []string{\"br\", \"zstd\", \"gzip\"}\n\t\t\t}\n\n\t\t\tfor _, format := range fsrv.PrecompressedOrder {\n\t\t\t\tmodID := \"http.precompressed.\" + format\n\t\t\t\tmod, err := caddy.GetModule(modID)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"getting module named '%s': %v\", modID, err)\n\t\t\t\t}\n\t\t\t\tinst := mod.New()\n\t\t\t\tprecompress, ok := inst.(encode.Precompressed)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn d.Errf(\"module %s is not a precompressor; is %T\", modID, inst)\n\t\t\t\t}\n\t\t\t\tif fsrv.PrecompressedRaw == nil {\n\t\t\t\t\tfsrv.PrecompressedRaw = make(caddy.ModuleMap)\n\t\t\t\t}\n\t\t\t\tfsrv.PrecompressedRaw[format] = caddyconfig.JSON(precompress, nil)\n\t\t\t}\n\n\t\tcase \"status\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfsrv.StatusCode = caddyhttp.WeakString(d.Val())\n\n\t\tcase \"disable_canonical_uris\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn 
d.ArgErr()\n\t\t\t}\n\t\t\tfalseBool := false\n\t\t\tfsrv.CanonicalURIs = &falseBool\n\n\t\tcase \"pass_thru\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfsrv.PassThru = true\n\n\t\tcase \"etag_file_extensions\":\n\t\t\tetagFileExtensions := d.RemainingArgs()\n\t\t\tif len(etagFileExtensions) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfsrv.EtagFileExtensions = etagFileExtensions\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unknown subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// FinalizeUnmarshalCaddyfile finalizes the Caddyfile parsing which\n// requires having an httpcaddyfile.Helper to function, to setup hidden Caddyfiles.\nfunc (fsrv *FileServer) FinalizeUnmarshalCaddyfile(h httpcaddyfile.Helper) error {\n\t// Hide the Caddyfile (and any imported Caddyfiles).\n\t// This needs to be done in here instead of UnmarshalCaddyfile\n\t// because UnmarshalCaddyfile only has access to the dispenser\n\t// and not the helper, and only the helper has access to the\n\t// Caddyfiles function.\n\tif configFiles := h.Caddyfiles(); len(configFiles) > 0 {\n\t\tfor _, file := range configFiles {\n\t\t\tfile = filepath.Clean(file)\n\t\t\tif !fileHidden(file, fsrv.Hide) {\n\t\t\t\t// if there's no path separator, the file server module will hide all\n\t\t\t\t// files by that name, rather than a specific one; but we want to hide\n\t\t\t\t// only this specific file, so ensure there's always a path separator\n\t\t\t\tif !strings.Contains(file, separator) {\n\t\t\t\t\tfile = \".\" + separator + file\n\t\t\t\t}\n\t\t\t\tfsrv.Hide = append(fsrv.Hide, file)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// parseTryFiles parses the try_files directive. 
It combines a file matcher\n// with a rewrite directive, so this is not a standard handler directive.\n// A try_files directive has this syntax (notice no matcher tokens accepted):\n//\n//\ttry_files <files...> {\n//\t\tpolicy first_exist|smallest_size|largest_size|most_recently_modified\n//\t}\n//\n// and is basically shorthand for:\n//\n//\t@try_files file {\n//\t\ttry_files <files...>\n//\t\tpolicy first_exist|smallest_size|largest_size|most_recently_modified\n//\t}\n//\trewrite @try_files {http.matchers.file.relative}\n//\n// This directive rewrites request paths only, preserving any other part\n// of the URI, unless the part is explicitly given in the file list. For\n// example, if any of the files in the list have a query string:\n//\n//\ttry_files {path} index.php?{query}&p={path}\n//\n// then the query string will not be treated as part of the file name; and\n// if that file matches, the given query string will replace any query string\n// that already exists on the request URI.\nfunc parseTryFiles(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\tif !h.Next() {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\ttryFiles := h.RemainingArgs()\n\tif len(tryFiles) == 0 {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\t// parse out the optional try policy\n\tvar tryPolicy string\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"policy\":\n\t\t\tif tryPolicy != \"\" {\n\t\t\t\treturn nil, h.Err(\"try policy already configured\")\n\t\t\t}\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\ttryPolicy = h.Val()\n\n\t\t\tswitch tryPolicy {\n\t\t\tcase tryPolicyFirstExist, tryPolicyFirstExistFallback, tryPolicyLargestSize, tryPolicySmallestSize, tryPolicyMostRecentlyMod:\n\t\t\tdefault:\n\t\t\t\treturn nil, h.Errf(\"unrecognized try policy: %s\", tryPolicy)\n\t\t\t}\n\t\t}\n\t}\n\n\t// makeRoute returns a route that tries the files listed in try\n\t// and then rewrites to the matched file; userQueryString is\n\t// appended to the rewrite 
rule.\n\tmakeRoute := func(try []string, userQueryString string) []httpcaddyfile.ConfigValue {\n\t\thandler := rewrite.Rewrite{\n\t\t\tURI: \"{http.matchers.file.relative}\" + userQueryString,\n\t\t}\n\t\tmatcherSet := caddy.ModuleMap{\n\t\t\t\"file\": h.JSON(MatchFile{TryFiles: try, TryPolicy: tryPolicy}),\n\t\t}\n\t\treturn h.NewRoute(matcherSet, handler)\n\t}\n\n\tvar result []httpcaddyfile.ConfigValue\n\n\t// if there are query strings in the list, we have to split into\n\t// a separate route for each item with a query string, because\n\t// the rewrite is different for that item\n\ttry := make([]string, 0, len(tryFiles))\n\tfor _, item := range tryFiles {\n\t\tif idx := strings.Index(item, \"?\"); idx >= 0 {\n\t\t\tif len(try) > 0 {\n\t\t\t\tresult = append(result, makeRoute(try, \"\")...)\n\t\t\t\ttry = []string{}\n\t\t\t}\n\t\t\tresult = append(result, makeRoute([]string{item[:idx]}, item[idx:])...)\n\t\t\tcontinue\n\t\t}\n\t\t// accumulate consecutive non-query-string parameters\n\t\ttry = append(try, item)\n\t}\n\tif len(try) > 0 {\n\t\tresult = append(result, makeRoute(try, \"\")...)\n\t}\n\n\t// ensure that multiple routes (possible if rewrite targets\n\t// have query strings, for example) are grouped together\n\t// so only the first matching rewrite is performed (#2891)\n\th.GroupRoutes(result)\n\n\treturn result, nil\n}\n\nvar _ caddyfile.Unmarshaler = (*FileServer)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/command.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"os\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/spf13/cobra\"\n\t\"go.uber.org/zap\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n\tcaddytpl \"github.com/caddyserver/caddy/v2/modules/caddyhttp/templates\"\n)\n\nfunc init() {\n\tcaddycmd.RegisterCommand(caddycmd.Command{\n\t\tName:  \"file-server\",\n\t\tUsage: \"[--domain <example.com>] [--root <path>] [--listen <addr>] [--browse] [--reveal-symlinks] [--access-log] [--precompressed]\",\n\t\tShort: \"Spins up a production-ready file server\",\n\t\tLong: `\nA simple but production-ready file server. Useful for quick deployments,\ndemos, and development.\n\nThe listener's socket address can be customized with the --listen flag.\n\nIf a domain name is specified with --domain, the default listener address\nwill be changed to the HTTPS port and the server will use HTTPS. If using\na public domain, ensure A/AAAA records are properly configured before\nusing this option.\n\nBy default, Zstandard and Gzip compression are enabled. 
Use --no-compress\nto disable compression.\n\nIf --browse is enabled, requests for folders without an index file will\nrespond with a file listing.`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"domain\", \"d\", \"\", \"Domain name at which to serve the files\")\n\t\t\tcmd.Flags().StringP(\"root\", \"r\", \"\", \"The path to the root of the site\")\n\t\t\tcmd.Flags().StringP(\"listen\", \"l\", \"\", \"The address to which to bind the listener\")\n\t\t\tcmd.Flags().BoolP(\"browse\", \"b\", false, \"Enable directory browsing\")\n\t\t\tcmd.Flags().BoolP(\"reveal-symlinks\", \"\", false, \"Show symlink paths when browse is enabled.\")\n\t\t\tcmd.Flags().BoolP(\"templates\", \"t\", false, \"Enable template rendering\")\n\t\t\tcmd.Flags().BoolP(\"access-log\", \"a\", false, \"Enable the access log\")\n\t\t\tcmd.Flags().BoolP(\"debug\", \"v\", false, \"Enable verbose debug logs\")\n\t\t\tcmd.Flags().IntP(\"file-limit\", \"f\", defaultDirEntryLimit, \"Max directories to read\")\n\t\t\tcmd.Flags().BoolP(\"no-compress\", \"\", false, \"Disable Zstandard and Gzip compression\")\n\t\t\tcmd.Flags().StringSliceP(\"precompressed\", \"p\", []string{}, \"Specify precompression file extensions. 
Compression preference implied from flag order.\")\n\t\t\tcmd.RunE = caddycmd.WrapCommandFuncForCobra(cmdFileServer)\n\t\t\tcmd.AddCommand(&cobra.Command{\n\t\t\t\tUse:     \"export-template\",\n\t\t\t\tShort:   \"Exports the default file browser template\",\n\t\t\t\tExample: \"caddy file-server export-template > browse.html\",\n\t\t\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\t\t\t_, err := io.WriteString(os.Stdout, BrowseTemplate)\n\t\t\t\t\treturn err\n\t\t\t\t},\n\t\t\t})\n\t\t},\n\t})\n}\n\nfunc cmdFileServer(fs caddycmd.Flags) (int, error) {\n\tcaddy.TrapSignals()\n\n\tdomain := fs.String(\"domain\")\n\troot := fs.String(\"root\")\n\tlisten := fs.String(\"listen\")\n\tbrowse := fs.Bool(\"browse\")\n\ttemplates := fs.Bool(\"templates\")\n\taccessLog := fs.Bool(\"access-log\")\n\tfileLimit := fs.Int(\"file-limit\")\n\tdebug := fs.Bool(\"debug\")\n\trevealSymlinks := fs.Bool(\"reveal-symlinks\")\n\tcompress := !fs.Bool(\"no-compress\")\n\tprecompressed, err := fs.GetStringSlice(\"precompressed\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid precompressed flag: %v\", err)\n\t}\n\tvar handlers []json.RawMessage\n\n\tif compress {\n\t\tzstd, err := caddy.GetModule(\"http.encoders.zstd\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\n\t\tgzip, err := caddy.GetModule(\"http.encoders.gzip\")\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\n\t\thandlers = append(handlers, caddyconfig.JSONModuleObject(encode.Encode{\n\t\t\tEncodingsRaw: caddy.ModuleMap{\n\t\t\t\t\"zstd\": caddyconfig.JSON(zstd.New(), nil),\n\t\t\t\t\"gzip\": caddyconfig.JSON(gzip.New(), nil),\n\t\t\t},\n\t\t\tPrefer: []string{\"zstd\", \"gzip\"},\n\t\t}, \"handler\", \"encode\", nil))\n\t}\n\n\tif templates {\n\t\thandler := caddytpl.Templates{FileRoot: root}\n\t\thandlers = append(handlers, caddyconfig.JSONModuleObject(handler, \"handler\", \"templates\", nil))\n\t}\n\n\thandler := 
FileServer{Root: root}\n\n\tif len(precompressed) != 0 {\n\t\t// logic mirrors modules/caddyhttp/fileserver/caddyfile.go case \"precompressed\"\n\t\tvar order []string\n\t\tfor _, compression := range precompressed {\n\t\t\tmodID := \"http.precompressed.\" + compression\n\t\t\tmod, err := caddy.GetModule(modID)\n\t\t\tif err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"getting module named '%s': %v\", modID, err)\n\t\t\t}\n\t\t\tinst := mod.New()\n\t\t\tprecompress, ok := inst.(encode.Precompressed)\n\t\t\tif !ok {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"module %s is not a precompressor; is %T\", modID, inst)\n\t\t\t}\n\t\t\tif handler.PrecompressedRaw == nil {\n\t\t\t\thandler.PrecompressedRaw = make(caddy.ModuleMap)\n\t\t\t}\n\t\t\thandler.PrecompressedRaw[compression] = caddyconfig.JSON(precompress, nil)\n\t\t\torder = append(order, compression)\n\t\t}\n\t\thandler.PrecompressedOrder = order\n\t}\n\n\tif browse {\n\t\thandler.Browse = &Browse{RevealSymlinks: revealSymlinks, FileLimit: fileLimit}\n\t}\n\n\thandlers = append(handlers, caddyconfig.JSONModuleObject(handler, \"handler\", \"file_server\", nil))\n\n\troute := caddyhttp.Route{HandlersRaw: handlers}\n\n\tif domain != \"\" {\n\t\troute.MatcherSetsRaw = []caddy.ModuleMap{\n\t\t\t{\n\t\t\t\t\"host\": caddyconfig.JSON(caddyhttp.MatchHost{domain}, nil),\n\t\t\t},\n\t\t}\n\t}\n\n\tserver := &caddyhttp.Server{\n\t\tReadHeaderTimeout: caddy.Duration(10 * time.Second),\n\t\tIdleTimeout:       caddy.Duration(30 * time.Second),\n\t\tMaxHeaderBytes:    1024 * 10,\n\t\tRoutes:            caddyhttp.RouteList{route},\n\t}\n\tif listen == \"\" {\n\t\tif domain == \"\" {\n\t\t\tlisten = \":80\"\n\t\t} else {\n\t\t\tlisten = \":\" + strconv.Itoa(certmagic.HTTPSPort)\n\t\t}\n\t}\n\tserver.Listen = []string{listen}\n\tif accessLog {\n\t\tserver.Logs = &caddyhttp.ServerLogConfig{}\n\t}\n\n\thttpApp := caddyhttp.App{\n\t\tServers: map[string]*caddyhttp.Server{\"static\": 
server},\n\t}\n\n\tvar false bool\n\tcfg := &caddy.Config{\n\t\tAdmin: &caddy.AdminConfig{\n\t\t\tDisabled: true,\n\t\t\tConfig: &caddy.ConfigSettings{\n\t\t\t\tPersist: &false,\n\t\t\t},\n\t\t},\n\t\tAppsRaw: caddy.ModuleMap{\n\t\t\t\"http\": caddyconfig.JSON(httpApp, nil),\n\t\t},\n\t}\n\n\tif debug {\n\t\tcfg.Logging = &caddy.Logging{\n\t\t\tLogs: map[string]*caddy.CustomLog{\n\t\t\t\t\"default\": {\n\t\t\t\t\tBaseLog: caddy.BaseLog{Level: zap.DebugLevel.CapitalString()},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t}\n\n\terr = caddy.Run(cfg)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tlog.Printf(\"Caddy serving static files on %s\", listen)\n\n\tselect {}\n}\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/matcher.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"fmt\"\n\t\"io/fs\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/google/cel-go/cel\"\n\t\"github.com/google/cel-go/common\"\n\t\"github.com/google/cel-go/common/ast\"\n\t\"github.com/google/cel-go/common/operators\"\n\t\"github.com/google/cel-go/common/types\"\n\t\"github.com/google/cel-go/common/types/ref\"\n\t\"github.com/google/cel-go/parser\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(MatchFile{})\n}\n\n// MatchFile is an HTTP request matcher that can match\n// requests based upon file existence.\n//\n// Upon matching, four new placeholders will be made\n// available:\n//\n// - `{http.matchers.file.relative}` The root-relative\n// path of the file. 
This is often useful when rewriting\n// requests.\n// - `{http.matchers.file.absolute}` The absolute path\n// of the matched file.\n// - `{http.matchers.file.type}` Set to \"directory\" if\n// the matched file is a directory, \"file\" otherwise.\n// - `{http.matchers.file.remainder}` Set to the remainder\n// of the path if the path was split by `split_path`.\n//\n// Even though file matching may depend on the OS path\n// separator, the placeholder values always use /.\ntype MatchFile struct {\n\t// The file system implementation to use. By default, the\n\t// local disk file system will be used.\n\tFileSystem string `json:\"fs,omitempty\"`\n\n\t// The root directory, used for creating absolute\n\t// file paths, and required when working with\n\t// relative paths; if not specified, `{http.vars.root}`\n\t// will be used, if set; otherwise, the current\n\t// directory is assumed. Accepts placeholders.\n\tRoot string `json:\"root,omitempty\"`\n\n\t// The list of files to try. Each path here is\n\t// considered relative to Root. If nil, the request\n\t// URL's path will be assumed. Files and\n\t// directories are treated distinctly, so to match\n\t// a directory, the filepath MUST end in a forward\n\t// slash `/`. To match a regular file, there must\n\t// be no trailing slash. Accepts placeholders. If\n\t// the policy is \"first_exist\", then an error may\n\t// be triggered as a fallback by configuring \"=\"\n\t// followed by a status code number,\n\t// for example \"=404\".\n\tTryFiles []string `json:\"try_files,omitempty\"`\n\n\t// How to choose a file in TryFiles. Can be:\n\t//\n\t// - first_exist\n\t// - first_exist_fallback\n\t// - smallest_size\n\t// - largest_size\n\t// - most_recently_modified\n\t//\n\t// Default is first_exist.\n\tTryPolicy string `json:\"try_policy,omitempty\"`\n\n\t// A list of delimiters to use to split the path in two\n\t// when trying files. If empty, no splitting will\n\t// occur, and the path will be tried as-is. 
For each\n\t// split value, the left-hand side of the split,\n\t// including the split value, will be the path tried.\n\t// For example, the path `/remote.php/dav/` using the\n\t// split value `.php` would try the file `/remote.php`.\n\t// Each delimiter must appear at the end of a URI path\n\t// component in order to be used as a split delimiter.\n\tSplitPath []string `json:\"split_path,omitempty\"`\n\n\tfsmap caddy.FileSystems\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchFile) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.file\",\n\t\tNew: func() caddy.Module { return new(MatchFile) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the matcher from Caddyfile tokens. Syntax:\n//\n//\tfile <files...> {\n//\t    root      <path>\n//\t    try_files <files...>\n//\t    try_policy first_exist|smallest_size|largest_size|most_recently_modified\n//\t}\nfunc (m *MatchFile) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tm.TryFiles = append(m.TryFiles, d.RemainingArgs()...)\n\t\tfor d.NextBlock(0) {\n\t\t\tswitch d.Val() {\n\t\t\tcase \"root\":\n\t\t\t\tif !d.NextArg() {\n\t\t\t\t\treturn d.ArgErr()\n\t\t\t\t}\n\t\t\t\tm.Root = d.Val()\n\t\t\tcase \"try_files\":\n\t\t\t\tm.TryFiles = append(m.TryFiles, d.RemainingArgs()...)\n\t\t\t\tif len(m.TryFiles) == 0 {\n\t\t\t\t\treturn d.ArgErr()\n\t\t\t\t}\n\t\t\tcase \"try_policy\":\n\t\t\t\tif !d.NextArg() {\n\t\t\t\t\treturn d.ArgErr()\n\t\t\t\t}\n\t\t\t\tm.TryPolicy = d.Val()\n\t\t\tcase \"split_path\":\n\t\t\t\tm.SplitPath = d.RemainingArgs()\n\t\t\t\tif len(m.SplitPath) == 0 {\n\t\t\t\t\treturn d.ArgErr()\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn d.Errf(\"unrecognized subdirective: %s\", d.Val())\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression 
file()\n//\texpression file({http.request.uri.path}, '/index.php')\n//\texpression file({'root': '/srv', 'try_files': [{http.request.uri.path}, '/index.php'], 'try_policy': 'first_exist', 'split_path': ['.php']})\nfunc (MatchFile) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\trequestType := cel.ObjectType(\"http.Request\")\n\n\tmatcherFactory := func(data ref.Val) (caddyhttp.RequestMatcherWithError, error) {\n\t\tvalues, err := caddyhttp.CELValueToMapStrList(data)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tvar root string\n\t\tif len(values[\"root\"]) > 0 {\n\t\t\troot = values[\"root\"][0]\n\t\t}\n\n\t\tvar fsName string\n\t\tif len(values[\"fs\"]) > 0 {\n\t\t\tfsName = values[\"fs\"][0]\n\t\t}\n\n\t\tvar try_policy string\n\t\tif len(values[\"try_policy\"]) > 0 {\n\t\t\ttry_policy = values[\"try_policy\"][0]\n\t\t}\n\n\t\tm := MatchFile{\n\t\t\tRoot:       root,\n\t\t\tTryFiles:   values[\"try_files\"],\n\t\t\tTryPolicy:  try_policy,\n\t\t\tSplitPath:  values[\"split_path\"],\n\t\t\tFileSystem: fsName,\n\t\t}\n\n\t\terr = m.Provision(ctx)\n\t\treturn m, err\n\t}\n\n\tenvOptions := []cel.EnvOption{\n\t\tcel.Macros(parser.NewGlobalVarArgMacro(\"file\", celFileMatcherMacroExpander())),\n\t\tcel.Function(\"file\", cel.Overload(\"file_request_map\", []*cel.Type{requestType, caddyhttp.CELTypeJSON}, cel.BoolType)),\n\t\tcel.Function(\"file_request_map\",\n\t\t\tcel.Overload(\"file_request_map\", []*cel.Type{requestType, caddyhttp.CELTypeJSON}, cel.BoolType),\n\t\t\tcel.SingletonBinaryBinding(caddyhttp.CELMatcherRuntimeFunction(\"file_request_map\", matcherFactory))),\n\t}\n\n\tprogramOptions := []cel.ProgramOption{\n\t\tcel.CustomDecorator(caddyhttp.CELMatcherDecorator(\"file_request_map\", matcherFactory)),\n\t}\n\n\treturn caddyhttp.NewMatcherCELLibrary(envOptions, programOptions), nil\n}\n\nfunc celFileMatcherMacroExpander() parser.MacroExpander {\n\treturn func(eh parser.ExprHelper, target ast.Expr, args []ast.Expr) (ast.Expr, *common.Error) 
{\n\t\tif len(args) == 0 {\n\t\t\treturn eh.NewCall(\"file\",\n\t\t\t\teh.NewIdent(caddyhttp.CELRequestVarName),\n\t\t\t\teh.NewMap(),\n\t\t\t), nil\n\t\t}\n\t\tif len(args) == 1 {\n\t\t\targ := args[0]\n\t\t\tif isCELStringLiteral(arg) || isCELCaddyPlaceholderCall(arg) {\n\t\t\t\treturn eh.NewCall(\"file\",\n\t\t\t\t\teh.NewIdent(caddyhttp.CELRequestVarName),\n\t\t\t\t\teh.NewMap(eh.NewMapEntry(\n\t\t\t\t\t\teh.NewLiteral(types.String(\"try_files\")),\n\t\t\t\t\t\teh.NewList(arg),\n\t\t\t\t\t\tfalse,\n\t\t\t\t\t)),\n\t\t\t\t), nil\n\t\t\t}\n\t\t\tif isCELTryFilesLiteral(arg) {\n\t\t\t\treturn eh.NewCall(\"file\", eh.NewIdent(caddyhttp.CELRequestVarName), arg), nil\n\t\t\t}\n\t\t\treturn nil, &common.Error{\n\t\t\t\tLocation: eh.OffsetLocation(arg.ID()),\n\t\t\t\tMessage:  \"matcher requires either a map or string literal argument\",\n\t\t\t}\n\t\t}\n\n\t\tfor _, arg := range args {\n\t\t\tif !isCELStringLiteral(arg) && !isCELCaddyPlaceholderCall(arg) {\n\t\t\t\treturn nil, &common.Error{\n\t\t\t\t\tLocation: eh.OffsetLocation(arg.ID()),\n\t\t\t\t\tMessage:  \"matcher only supports repeated string literal arguments\",\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn eh.NewCall(\"file\",\n\t\t\teh.NewIdent(caddyhttp.CELRequestVarName),\n\t\t\teh.NewMap(eh.NewMapEntry(\n\t\t\t\teh.NewLiteral(types.String(\"try_files\")),\n\t\t\t\teh.NewList(args...),\n\t\t\t\tfalse,\n\t\t\t)),\n\t\t), nil\n\t}\n}\n\n// Provision sets up m's defaults.\nfunc (m *MatchFile) Provision(ctx caddy.Context) error {\n\tm.logger = ctx.Logger()\n\n\tm.fsmap = ctx.FileSystems()\n\n\tif m.Root == \"\" {\n\t\tm.Root = \"{http.vars.root}\"\n\t}\n\n\tif m.FileSystem == \"\" {\n\t\tm.FileSystem = \"{http.vars.fs}\"\n\t}\n\n\t// if list of files to try was omitted entirely, assume URL path\n\t// (use placeholder instead of r.URL.Path; see issue #4146)\n\tif m.TryFiles == nil {\n\t\tm.TryFiles = []string{\"{http.request.uri.path}\"}\n\t}\n\treturn nil\n}\n\n// Validate ensures m has a valid configuration.\nfunc 
(m MatchFile) Validate() error {\n\tswitch m.TryPolicy {\n\tcase \"\",\n\t\ttryPolicyFirstExist,\n\t\ttryPolicyFirstExistFallback,\n\t\ttryPolicyLargestSize,\n\t\ttryPolicySmallestSize,\n\t\ttryPolicyMostRecentlyMod:\n\tdefault:\n\t\treturn fmt.Errorf(\"unknown try policy %s\", m.TryPolicy)\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m. Returns true\n// if a file was matched. If so, four placeholders\n// will be available:\n//   - http.matchers.file.relative: Path to file relative to site root\n//   - http.matchers.file.absolute: Path to file including site root\n//   - http.matchers.file.type: file or directory\n//   - http.matchers.file.remainder: Portion remaining after splitting file path (if configured)\nfunc (m MatchFile) Match(r *http.Request) bool {\n\tmatch, err := m.selectFile(r)\n\tif err != nil {\n\t\t// nolint:staticcheck\n\t\tcaddyhttp.SetVar(r.Context(), caddyhttp.MatcherErrorVarKey, err)\n\t}\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchFile) MatchWithError(r *http.Request) (bool, error) {\n\treturn m.selectFile(r)\n}\n\n// selectFile chooses a file according to m.TryPolicy by appending\n// the paths in m.TryFiles to m.Root, with placeholder replacements.\nfunc (m MatchFile) selectFile(r *http.Request) (bool, error) {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\troot := filepath.Clean(repl.ReplaceAll(m.Root, \".\"))\n\n\tfsName := repl.ReplaceAll(m.FileSystem, \"\")\n\n\tfileSystem, ok := m.fsmap.Get(fsName)\n\tif !ok {\n\t\tif c := m.logger.Check(zapcore.ErrorLevel, \"use of unregistered filesystem\"); c != nil {\n\t\t\tc.Write(zap.String(\"fs\", fsName))\n\t\t}\n\t\treturn false, nil\n\t}\n\ttype matchCandidate struct {\n\t\tfullpath, relative, splitRemainder string\n\t}\n\n\t// makeCandidates evaluates placeholders in file and expands any glob expressions\n\t// to build a list of file candidates. 
Special glob characters are escaped in\n\t// placeholder replacements so globs cannot be expanded from placeholders, and\n\t// globs are not evaluated on Windows because of its path separator character:\n\t// escaping is not supported so we can't safely glob on Windows, or we can't\n\t// support placeholders on Windows (pick one). (Actually, evaluating untrusted\n\t// globs is not the end of the world since the file server will still hide any\n\t// hidden files, it just might lead to unexpected behavior.)\n\tmakeCandidates := func(file string) []matchCandidate {\n\t\t// first, evaluate placeholders in the file pattern\n\t\texpandedFile, err := repl.ReplaceFunc(file, func(variable string, val any) (any, error) {\n\t\t\tif runtime.GOOS == \"windows\" {\n\t\t\t\treturn val, nil\n\t\t\t}\n\t\t\tswitch v := val.(type) {\n\t\t\tcase string:\n\t\t\t\treturn globSafeRepl.Replace(v), nil\n\t\t\tcase fmt.Stringer:\n\t\t\t\treturn globSafeRepl.Replace(v.String()), nil\n\t\t\t}\n\t\t\treturn val, nil\n\t\t})\n\t\tif err != nil {\n\t\t\tif c := m.logger.Check(zapcore.ErrorLevel, \"evaluating placeholders\"); c != nil {\n\t\t\t\tc.Write(zap.Error(err))\n\t\t\t}\n\n\t\t\texpandedFile = file // \"oh well,\" I guess?\n\t\t}\n\n\t\t// clean the path and split, if configured -- we must split before\n\t\t// globbing so that the file system doesn't include the remainder\n\t\t// (\"afterSplit\") in the filename; be sure to restore trailing slash\n\t\tbeforeSplit, afterSplit := m.firstSplit(path.Clean(expandedFile))\n\t\tif strings.HasSuffix(file, \"/\") {\n\t\t\tbeforeSplit += \"/\"\n\t\t}\n\n\t\t// create the full path to the file by prepending the site root\n\t\tfullPattern := caddyhttp.SanitizedPathJoin(root, beforeSplit)\n\n\t\t// expand glob expressions, but not on Windows because Glob() doesn't\n\t\t// support escaping on Windows due to path separator)\n\t\tvar globResults []string\n\t\tif runtime.GOOS == \"windows\" {\n\t\t\tglobResults = []string{fullPattern} // precious 
Windows\n\t\t} else {\n\t\t\tglobResults, err = fs.Glob(fileSystem, fullPattern)\n\t\t\tif err != nil {\n\t\t\t\tif c := m.logger.Check(zapcore.ErrorLevel, \"expanding glob\"); c != nil {\n\t\t\t\t\tc.Write(zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// for each glob result, combine all the forms of the path\n\t\tcandidates := make([]matchCandidate, 0, len(globResults))\n\t\tfor _, result := range globResults {\n\t\t\tcandidates = append(candidates, matchCandidate{\n\t\t\t\tfullpath:       result,\n\t\t\t\trelative:       strings.TrimPrefix(result, root),\n\t\t\t\tsplitRemainder: afterSplit,\n\t\t\t})\n\t\t}\n\n\t\treturn candidates\n\t}\n\n\t// setPlaceholders creates the placeholders for the matched file\n\tsetPlaceholders := func(candidate matchCandidate, isDir bool) {\n\t\trepl.Set(\"http.matchers.file.relative\", filepath.ToSlash(candidate.relative))\n\t\trepl.Set(\"http.matchers.file.absolute\", filepath.ToSlash(candidate.fullpath))\n\t\trepl.Set(\"http.matchers.file.remainder\", filepath.ToSlash(candidate.splitRemainder))\n\n\t\tfileType := \"file\"\n\t\tif isDir {\n\t\t\tfileType = \"directory\"\n\t\t}\n\t\trepl.Set(\"http.matchers.file.type\", fileType)\n\t}\n\n\t// match file according to the configured policy\n\tswitch m.TryPolicy {\n\tcase \"\", tryPolicyFirstExist, tryPolicyFirstExistFallback:\n\t\tmaxI := -1\n\t\tif m.TryPolicy == tryPolicyFirstExistFallback {\n\t\t\tmaxI = len(m.TryFiles) - 1\n\t\t}\n\n\t\tfor i, pattern := range m.TryFiles {\n\t\t\t// If the pattern is a status code, emit an error,\n\t\t\t// which short-circuits the middleware pipeline and\n\t\t\t// writes an HTTP error response.\n\t\t\tif err := parseErrorCode(pattern); err != nil {\n\t\t\t\treturn false, err\n\t\t\t}\n\n\t\t\tcandidates := makeCandidates(pattern)\n\t\t\tfor _, c := range candidates {\n\t\t\t\t// Skip the IO if using fallback policy and it's the latest item\n\t\t\t\tif i == maxI {\n\t\t\t\t\tsetPlaceholders(c, false)\n\n\t\t\t\t\treturn true, 
nil\n\t\t\t\t}\n\n\t\t\t\tif info, exists := m.strictFileExists(fileSystem, c.fullpath); exists {\n\t\t\t\t\tsetPlaceholders(c, info.IsDir())\n\t\t\t\t\treturn true, nil\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\tcase tryPolicyLargestSize:\n\t\tvar largestSize int64\n\t\tvar largest matchCandidate\n\t\tvar largestInfo os.FileInfo\n\t\tfor _, pattern := range m.TryFiles {\n\t\t\tcandidates := makeCandidates(pattern)\n\t\t\tfor _, c := range candidates {\n\t\t\t\tinfo, err := fs.Stat(fileSystem, c.fullpath)\n\t\t\t\tif err == nil && info.Size() > largestSize {\n\t\t\t\t\tlargestSize = info.Size()\n\t\t\t\t\tlargest = c\n\t\t\t\t\tlargestInfo = info\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif largestInfo == nil {\n\t\t\treturn false, nil\n\t\t}\n\t\tsetPlaceholders(largest, largestInfo.IsDir())\n\t\treturn true, nil\n\n\tcase tryPolicySmallestSize:\n\t\tvar smallestSize int64\n\t\tvar smallest matchCandidate\n\t\tvar smallestInfo os.FileInfo\n\t\tfor _, pattern := range m.TryFiles {\n\t\t\tcandidates := makeCandidates(pattern)\n\t\t\tfor _, c := range candidates {\n\t\t\t\tinfo, err := fs.Stat(fileSystem, c.fullpath)\n\t\t\t\tif err == nil && (smallestSize == 0 || info.Size() < smallestSize) {\n\t\t\t\t\tsmallestSize = info.Size()\n\t\t\t\t\tsmallest = c\n\t\t\t\t\tsmallestInfo = info\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif smallestInfo == nil {\n\t\t\treturn false, nil\n\t\t}\n\t\tsetPlaceholders(smallest, smallestInfo.IsDir())\n\t\treturn true, nil\n\n\tcase tryPolicyMostRecentlyMod:\n\t\tvar recent matchCandidate\n\t\tvar recentInfo os.FileInfo\n\t\tfor _, pattern := range m.TryFiles {\n\t\t\tcandidates := makeCandidates(pattern)\n\t\t\tfor _, c := range candidates {\n\t\t\t\tinfo, err := fs.Stat(fileSystem, c.fullpath)\n\t\t\t\tif err == nil &&\n\t\t\t\t\t(recentInfo == nil || info.ModTime().After(recentInfo.ModTime())) {\n\t\t\t\t\trecent = c\n\t\t\t\t\trecentInfo = info\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif recentInfo == nil {\n\t\t\treturn false, nil\n\t\t}\n\t\tsetPlaceholders(recent, 
recentInfo.IsDir())\n\t\treturn true, nil\n\t}\n\n\treturn false, nil\n}\n\n// parseErrorCode checks if the input is a status\n// code number, prefixed by \"=\", and returns an\n// error if so.\nfunc parseErrorCode(input string) error {\n\tif len(input) > 1 && input[0] == '=' {\n\t\tcode, err := strconv.Atoi(input[1:])\n\t\tif err != nil || code < 100 || code > 999 {\n\t\t\treturn nil\n\t\t}\n\t\treturn caddyhttp.Error(code, fmt.Errorf(\"%s\", input[1:]))\n\t}\n\treturn nil\n}\n\n// strictFileExists returns true if file exists\n// and matches the convention of the given file\n// path. If the path ends in a forward slash,\n// the file must also be a directory; if it does\n// NOT end in a forward slash, the file must NOT\n// be a directory.\nfunc (m MatchFile) strictFileExists(fileSystem fs.FS, file string) (os.FileInfo, bool) {\n\tinfo, err := fs.Stat(fileSystem, file)\n\tif err != nil {\n\t\t// in reality, this can be any error\n\t\t// such as permission or even obscure\n\t\t// ones like \"is not a directory\" (when\n\t\t// trying to stat a file within a file);\n\t\t// in those cases we can't be sure if\n\t\t// the file exists, so we just treat any\n\t\t// error as if it does not exist; see\n\t\t// https://stackoverflow.com/a/12518877/1048862\n\t\treturn nil, false\n\t}\n\tif strings.HasSuffix(file, separator) {\n\t\t// by convention, file paths ending\n\t\t// in a path separator must be a directory\n\t\treturn info, info.IsDir()\n\t}\n\t// by convention, file paths NOT ending\n\t// in a path separator must NOT be a directory\n\treturn info, !info.IsDir()\n}\n\n// firstSplit returns the first result where the path\n// can be split in two by a value in m.SplitPath. 
The\n// return values are the first piece of the path that\n// ends with the split substring and the remainder.\n// If the path cannot be split, the path is returned\n// as-is (with no remainder).\nfunc (m MatchFile) firstSplit(path string) (splitPart, remainder string) {\n\tfor _, split := range m.SplitPath {\n\t\tif idx := indexFold(path, split); idx > -1 {\n\t\t\tpos := idx + len(split)\n\t\t\t// skip the split if it's not the final part of the filename\n\t\t\tif pos != len(path) && !strings.HasPrefix(path[pos:], \"/\") {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn path[:pos], path[pos:]\n\t\t}\n\t}\n\treturn path, \"\"\n}\n\n// There is no strings.IndexFold() function like there is strings.EqualFold(),\n// but we can use strings.EqualFold() to build our own case-insensitive\n// substring search (as of Go 1.14).\nfunc indexFold(haystack, needle string) int {\n\tnlen := len(needle)\n\tfor i := 0; i+nlen < len(haystack); i++ {\n\t\tif strings.EqualFold(haystack[i:i+nlen], needle) {\n\t\t\treturn i\n\t\t}\n\t}\n\treturn -1\n}\n\n// isCELTryFilesLiteral returns whether the expression resolves to a map literal\n// containing only supported string keys whose values are string expressions\n// or lists of string expressions.\nfunc isCELTryFilesLiteral(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.MapKind:\n\t\tmapExpr := e.AsMap()\n\t\tfor _, entry := range mapExpr.Entries() {\n\t\t\tmapKey := entry.AsMapEntry().Key()\n\t\t\tmapVal := entry.AsMapEntry().Value()\n\t\t\tif !isCELStringLiteral(mapKey) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tmapKeyStr := mapKey.AsLiteral().ConvertToType(types.StringType).Value()\n\t\t\tswitch mapKeyStr {\n\t\t\tcase \"try_files\", \"split_path\":\n\t\t\t\tif !isCELStringListLiteral(mapVal) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\tcase \"try_policy\", \"root\":\n\t\t\t\tif !(isCELStringExpr(mapVal)) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\n\tcase ast.UnspecifiedExprKind, ast.CallKind, ast.ComprehensionKind, 
ast.IdentKind, ast.ListKind, ast.LiteralKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELStringExpr indicates whether the expression is a supported string expression\nfunc isCELStringExpr(e ast.Expr) bool {\n\treturn isCELStringLiteral(e) || isCELCaddyPlaceholderCall(e) || isCELConcatCall(e)\n}\n\n// isCELStringLiteral returns whether the expression is a CEL string literal.\nfunc isCELStringLiteral(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.LiteralKind:\n\t\tconstant := e.AsLiteral()\n\t\tswitch constant.Type() {\n\t\tcase types.StringType:\n\t\t\treturn true\n\t\t}\n\tcase ast.UnspecifiedExprKind, ast.CallKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELCaddyPlaceholderCall returns whether the expression is a caddy placeholder call.\nfunc isCELCaddyPlaceholderCall(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.CallKind:\n\t\tcall := e.AsCall()\n\t\tif call.FunctionName() == caddyhttp.CELPlaceholderFuncName {\n\t\t\treturn true\n\t\t}\n\tcase ast.UnspecifiedExprKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.LiteralKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELConcatCall tests whether the expression is a concat function (+) with string, placeholder, or\n// other concat call arguments.\nfunc isCELConcatCall(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.CallKind:\n\t\tcall := e.AsCall()\n\t\tif call.Target().Kind() != ast.UnspecifiedExprKind {\n\t\t\treturn false\n\t\t}\n\t\tif call.FunctionName() != operators.Add {\n\t\t\treturn false\n\t\t}\n\t\tfor _, arg := range call.Args() {\n\t\t\tif !isCELStringExpr(arg) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\tcase ast.UnspecifiedExprKind, ast.ComprehensionKind, ast.IdentKind, ast.ListKind, ast.LiteralKind, ast.MapKind, 
ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// isCELStringListLiteral returns whether the expression resolves to a list literal\n// containing only string constants or a placeholder call.\nfunc isCELStringListLiteral(e ast.Expr) bool {\n\tswitch e.Kind() {\n\tcase ast.ListKind:\n\t\tlist := e.AsList()\n\t\tfor _, elem := range list.Elements() {\n\t\t\tif !isCELStringExpr(elem) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\tcase ast.UnspecifiedExprKind, ast.CallKind, ast.ComprehensionKind, ast.IdentKind, ast.LiteralKind, ast.MapKind, ast.SelectKind, ast.StructKind:\n\t\t// appeasing the linter :)\n\t}\n\treturn false\n}\n\n// globSafeRepl replaces special glob characters with escaped\n// equivalents. Note that the filepath godoc states that\n// escaping is not done on Windows because of the separator.\nvar globSafeRepl = strings.NewReplacer(\n\t\"*\", \"\\\\*\",\n\t\"[\", \"\\\\[\",\n\t\"?\", \"\\\\?\",\n\t\"\\\\\", \"\\\\\\\\\",\n)\n\nconst (\n\ttryPolicyFirstExist         = \"first_exist\"\n\ttryPolicyFirstExistFallback = \"first_exist_fallback\"\n\ttryPolicyLargestSize        = \"largest_size\"\n\ttryPolicySmallestSize       = \"smallest_size\"\n\ttryPolicyMostRecentlyMod    = \"most_recently_modified\"\n)\n\n// Interface guards\nvar (\n\t_ caddy.Validator                   = (*MatchFile)(nil)\n\t_ caddyhttp.RequestMatcherWithError = (*MatchFile)(nil)\n\t_ caddyhttp.CELLibraryProducer      = (*MatchFile)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/matcher_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/internal/filesystems\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\ntype testCase struct {\n\tpath         string\n\texpectedPath string\n\texpectedType string\n\tmatched      bool\n}\n\nfunc TestFileMatcher(t *testing.T) {\n\t// Windows doesn't like colons in file names\n\tisWindows := runtime.GOOS == \"windows\"\n\tif !isWindows {\n\t\tfilename := \"with:in-name.txt\"\n\t\tf, err := os.Create(\"./testdata/\" + filename)\n\t\tif err != nil {\n\t\t\tt.Fail()\n\t\t\treturn\n\t\t}\n\t\tt.Cleanup(func() {\n\t\t\tos.Remove(\"./testdata/\" + filename)\n\t\t})\n\t\tf.WriteString(filename)\n\t\tf.Close()\n\t}\n\n\tfor i, tc := range []testCase{\n\t\t{\n\t\t\tpath:         \"/foo.txt\",\n\t\t\texpectedPath: \"/foo.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/foo.txt/\",\n\t\t\texpectedPath: \"/foo.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/foo.txt?a=b\",\n\t\t\texpectedPath: \"/foo.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      
true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/foodir\",\n\t\t\texpectedPath: \"/foodir/\",\n\t\t\texpectedType: \"directory\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/foodir/\",\n\t\t\texpectedPath: \"/foodir/\",\n\t\t\texpectedType: \"directory\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/foodir/foo.txt\",\n\t\t\texpectedPath: \"/foodir/foo.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:    \"/missingfile.php\",\n\t\t\tmatched: false,\n\t\t},\n\t\t{\n\t\t\tpath:         \"ملف.txt\", // the path file name is not escaped\n\t\t\texpectedPath: \"/ملف.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         url.PathEscape(\"ملف.txt\"), // singly-escaped path\n\t\t\texpectedPath: \"/ملف.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         url.PathEscape(url.PathEscape(\"ملف.txt\")), // doubly-escaped path\n\t\t\texpectedPath: \"/%D9%85%D9%84%D9%81.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"./with:in-name.txt\", // browsers send the request with the path as such\n\t\t\texpectedPath: \"/with:in-name.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      !isWindows,\n\t\t},\n\t} {\n\t\tfileMatcherTest(t, i, tc)\n\t}\n}\n\nfunc TestFileMatcherNonWindows(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn\n\t}\n\n\t// this is impossible to test on Windows, but tests a security patch for other platforms\n\ttc := testCase{\n\t\tpath:         \"/foodir/secr%5Cet.txt\",\n\t\texpectedPath: \"/foodir/secr\\\\et.txt\",\n\t\texpectedType: \"file\",\n\t\tmatched:      true,\n\t}\n\n\tf, err := os.Create(filepath.Join(\"testdata\", strings.TrimPrefix(tc.expectedPath, \"/\")))\n\tif err != nil {\n\t\tt.Fatalf(\"could not create test file: %v\", err)\n\t}\n\tdefer f.Close()\n\tdefer os.Remove(f.Name())\n\n\tfileMatcherTest(t, 0, 
tc)\n}\n\nfunc fileMatcherTest(t *testing.T, i int, tc testCase) {\n\tm := &MatchFile{\n\t\tfsmap:    &filesystems.FileSystemMap{},\n\t\tRoot:     \"./testdata\",\n\t\tTryFiles: []string{\"{http.request.uri.path}\", \"{http.request.uri.path}/\"},\n\t}\n\n\tu, err := url.Parse(tc.path)\n\tif err != nil {\n\t\tt.Errorf(\"Test %d: parsing path: %v\", i, err)\n\t}\n\n\treq := &http.Request{URL: u}\n\trepl := caddyhttp.NewTestReplacer(req)\n\n\tresult, err := m.MatchWithError(req)\n\tif err != nil {\n\t\tt.Errorf(\"Test %d: unexpected error: %v\", i, err)\n\t}\n\tif result != tc.matched {\n\t\tt.Errorf(\"Test %d: expected match=%t, got %t\", i, tc.matched, result)\n\t}\n\n\trel, ok := repl.Get(\"http.matchers.file.relative\")\n\tif !ok && result {\n\t\tt.Errorf(\"Test %d: expected replacer value\", i)\n\t}\n\tif !result {\n\t\treturn\n\t}\n\n\tif rel != tc.expectedPath {\n\t\tt.Errorf(\"Test %d: actual path: %v, expected: %v\", i, rel, tc.expectedPath)\n\t}\n\n\tfileType, _ := repl.Get(\"http.matchers.file.type\")\n\tif fileType != tc.expectedType {\n\t\tt.Errorf(\"Test %d: actual file type: %v, expected: %v\", i, fileType, tc.expectedType)\n\t}\n}\n\nfunc TestPHPFileMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tpath         string\n\t\texpectedPath string\n\t\texpectedType string\n\t\tmatched      bool\n\t}{\n\t\t{\n\t\t\tpath:         \"/index.php\",\n\t\t\texpectedPath: \"/index.php\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/index.php/somewhere\",\n\t\t\texpectedPath: \"/index.php\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/remote.php\",\n\t\t\texpectedPath: \"/remote.php\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/remote.php/somewhere\",\n\t\t\texpectedPath: \"/remote.php\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:    
\"/missingfile.php\",\n\t\t\tmatched: false,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/notphp.php.txt\",\n\t\t\texpectedPath: \"/notphp.php.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/notphp.php.txt/\",\n\t\t\texpectedPath: \"/notphp.php.txt\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\tpath:    \"/notphp.php.txt.suffixed\",\n\t\t\tmatched: false,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/foo.php.php/index.php\",\n\t\t\texpectedPath: \"/foo.php.php/index.php\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t\t{\n\t\t\t// See https://github.com/caddyserver/caddy/issues/3623\n\t\t\tpath:         \"/%E2%C3\",\n\t\t\texpectedPath: \"/%E2%C3\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      false,\n\t\t},\n\t\t{\n\t\t\tpath:         \"/index.php?path={path}&{query}\",\n\t\t\texpectedPath: \"/index.php\",\n\t\t\texpectedType: \"file\",\n\t\t\tmatched:      true,\n\t\t},\n\t} {\n\t\tm := &MatchFile{\n\t\t\tfsmap:     &filesystems.FileSystemMap{},\n\t\t\tRoot:      \"./testdata\",\n\t\t\tTryFiles:  []string{\"{http.request.uri.path}\", \"{http.request.uri.path}/index.php\"},\n\t\t\tSplitPath: []string{\".php\"},\n\t\t}\n\n\t\tu, err := url.Parse(tc.path)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d: parsing path: %v\", i, err)\n\t\t}\n\n\t\treq := &http.Request{URL: u}\n\t\trepl := caddyhttp.NewTestReplacer(req)\n\n\t\tresult, err := m.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d: unexpected error: %v\", i, err)\n\t\t}\n\t\tif result != tc.matched {\n\t\t\tt.Errorf(\"Test %d: expected match=%t, got %t\", i, tc.matched, result)\n\t\t}\n\n\t\trel, ok := repl.Get(\"http.matchers.file.relative\")\n\t\tif !ok && result {\n\t\t\tt.Errorf(\"Test %d: expected replacer value\", i)\n\t\t}\n\t\tif !result {\n\t\t\tcontinue\n\t\t}\n\n\t\tif rel != tc.expectedPath {\n\t\t\tt.Errorf(\"Test %d: actual path: %v, expected: %v\", i, rel, 
tc.expectedPath)\n\t\t}\n\n\t\tfileType, _ := repl.Get(\"http.matchers.file.type\")\n\t\tif fileType != tc.expectedType {\n\t\t\tt.Errorf(\"Test %d: actual file type: %v, expected: %v\", i, fileType, tc.expectedType)\n\t\t}\n\t}\n}\n\nfunc TestFirstSplit(t *testing.T) {\n\tm := MatchFile{\n\t\tSplitPath: []string{\".php\"},\n\t\tfsmap:     &filesystems.FileSystemMap{},\n\t}\n\tactual, remainder := m.firstSplit(\"index.PHP/somewhere\")\n\texpected := \"index.PHP\"\n\texpectedRemainder := \"/somewhere\"\n\tif actual != expected {\n\t\tt.Errorf(\"Expected split %s but got %s\", expected, actual)\n\t}\n\tif remainder != expectedRemainder {\n\t\tt.Errorf(\"Expected remainder %s but got %s\", expectedRemainder, remainder)\n\t}\n}\n\nvar expressionTests = []struct {\n\tname              string\n\texpression        *caddyhttp.MatchExpression\n\turlTarget         string\n\thttpMethod        string\n\thttpHeader        *http.Header\n\twantErr           bool\n\twantResult        bool\n\tclientCertificate []byte\n\texpectedPath      string\n}{\n\t{\n\t\tname: \"file error no args (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file()`,\n\t\t},\n\t\turlTarget:  \"https://example.com/foo.txt\",\n\t\twantResult: true,\n\t},\n\t{\n\t\tname: \"file error bad try files (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({\"try_file\": [\"bad_arg\"]})`,\n\t\t},\n\t\turlTarget: \"https://example.com/foo\",\n\t\twantErr:   true,\n\t},\n\t{\n\t\tname: \"file match short pattern index.php (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file(\"index.php\")`,\n\t\t},\n\t\turlTarget:  \"https://example.com/foo\",\n\t\twantResult: true,\n\t},\n\t{\n\t\tname: \"file match short pattern foo.txt (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({http.request.uri.path})`,\n\t\t},\n\t\turlTarget:  \"https://example.com/foo.txt\",\n\t\twantResult: true,\n\t},\n\t{\n\t\tname: \"file match 
index.php (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({\"root\": \"./testdata\", \"try_files\": [{http.request.uri.path}, \"/index.php\"]})`,\n\t\t},\n\t\turlTarget:  \"https://example.com/foo\",\n\t\twantResult: true,\n\t},\n\t{\n\t\tname: \"file match long pattern foo.txt (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({\"root\": \"./testdata\", \"try_files\": [{http.request.uri.path}]})`,\n\t\t},\n\t\turlTarget:  \"https://example.com/foo.txt\",\n\t\twantResult: true,\n\t},\n\t{\n\t\tname: \"file match long pattern foo.txt with concatenation (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({\"root\": \".\", \"try_files\": [\"./testdata\" + {http.request.uri.path}]})`,\n\t\t},\n\t\turlTarget:  \"https://example.com/foo.txt\",\n\t\twantResult: true,\n\t},\n\t{\n\t\tname: \"file not match long pattern (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({\"root\": \"./testdata\", \"try_files\": [{http.request.uri.path}]})`,\n\t\t},\n\t\turlTarget:  \"https://example.com/nopenope.txt\",\n\t\twantResult: false,\n\t},\n\t{\n\t\tname: \"file match long pattern foo.txt with try_policy (MatchFile)\",\n\t\texpression: &caddyhttp.MatchExpression{\n\t\t\tExpr: `file({\"root\": \"./testdata\", \"try_policy\": \"largest_size\", \"try_files\": [\"foo.txt\", \"large.txt\"]})`,\n\t\t},\n\t\turlTarget:    \"https://example.com/\",\n\t\twantResult:   true,\n\t\texpectedPath: \"/large.txt\",\n\t},\n}\n\nfunc TestMatchExpressionMatch(t *testing.T) {\n\tfor _, tst := range expressionTests {\n\t\ttc := tst\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\t\t\tdefer cancel()\n\t\t\terr := tc.expression.Provision(caddyCtx)\n\t\t\tif err != nil {\n\t\t\t\tif !tc.wantErr {\n\t\t\t\t\tt.Errorf(\"MatchExpression.Provision() error = %v, wantErr %v\", err, 
tc.wantErr)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\treq := httptest.NewRequest(tc.httpMethod, tc.urlTarget, nil)\n\t\t\tif tc.httpHeader != nil {\n\t\t\t\treq.Header = *tc.httpHeader\n\t\t\t}\n\t\t\trepl := caddyhttp.NewTestReplacer(req)\n\t\t\trepl.Set(\"http.vars.root\", \"./testdata\")\n\t\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\t\treq = req.WithContext(ctx)\n\n\t\t\tmatches, err := tc.expression.MatchWithError(req)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"MatchExpression.Match() error = %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif matches != tc.wantResult {\n\t\t\t\tt.Errorf(\"MatchExpression.Match() expected to return '%t', for expression : '%s'\", tc.wantResult, tc.expression.Expr)\n\t\t\t}\n\n\t\t\tif tc.expectedPath != \"\" {\n\t\t\t\tpath, ok := repl.Get(\"http.matchers.file.relative\")\n\t\t\t\tif !ok {\n\t\t\t\t\tt.Errorf(\"MatchExpression.Match() expected to return path '%s', but got none\", tc.expectedPath)\n\t\t\t\t}\n\t\t\t\tif path != tc.expectedPath {\n\t\t\t\t\tt.Errorf(\"MatchExpression.Match() expected to return path '%s', but got '%s'\", tc.expectedPath, path)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/staticfiles.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\tweakrand \"math/rand/v2\"\n\t\"mime\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(FileServer{})\n}\n\n// FileServer implements a handler that serves static files.\n//\n// The path of the file to serve is constructed by joining the site root\n// and the sanitized request path. Any and all files within the root and\n// links with targets outside the site root may therefore be accessed.\n// For example, with a site root of `/www`, requests to `/foo/bar.txt`\n// will serve the file at `/www/foo/bar.txt`.\n//\n// The request path is sanitized using the Go standard library's\n// path.Clean() function (https://pkg.go.dev/path#Clean) before being\n// joined to the root. Request paths must be valid and well-formed.\n//\n// For requests that access directories instead of regular files,\n// Caddy will attempt to serve an index file if present. 
For example,\n// a request to `/dir/` will attempt to serve `/dir/index.html` if\n// it exists. The index file names to try are configurable. If a\n// requested directory does not have an index file, Caddy writes a\n// 404 response. Alternatively, file browsing can be enabled with\n// the \"browse\" parameter which shows a list of files when directories\n// are requested if no index file is present. If \"browse\" is enabled,\n// Caddy may serve a JSON array of the directory listing when the `Accept`\n// header mentions `application/json` with the following structure:\n//\n//\t[{\n//\t\t\"name\": \"\",\n//\t\t\"size\": 0,\n//\t\t\"url\": \"\",\n//\t\t\"mod_time\": \"\",\n//\t\t\"mode\": 0,\n//\t\t\"is_dir\": false,\n//\t\t\"is_symlink\": false\n//\t}]\n//\n// with the `url` being relative to the request path and `mod_time` in the RFC 3339 format\n// with sub-second precision. For any other value for the `Accept` header, the\n// respective browse template is executed with `Content-Type: text/html`.\n//\n// By default, this handler will canonicalize URIs so that requests to\n// directories end with a slash, but requests to regular files do not.\n// This is enforced with HTTP redirects automatically and can be disabled.\n// Canonicalization redirects are not issued, however, if a URI rewrite\n// modified the last component of the path (the filename).\n//\n// This handler sets the Etag and Last-Modified headers for static files.\n// It does not perform MIME sniffing to determine Content-Type based on\n// contents, but does use the extension (if known); see the Go docs for\n// details: https://pkg.go.dev/mime#TypeByExtension\n//\n// The file server properly handles requests with If-Match,\n// If-Unmodified-Since, If-Modified-Since, If-None-Match, Range, and\n// If-Range headers. It includes the file's modification time in the\n// Last-Modified header of the response.\ntype FileServer struct {\n\t// The file system implementation to use. 
By default, Caddy uses the local\n\t// disk file system.\n\t//\n\t// If a non-default filesystem is used, it must first be registered in the globals section.\n\tFileSystem string `json:\"fs,omitempty\"`\n\n\t// The path to the root of the site. Default is `{http.vars.root}` if set,\n\t// or current working directory otherwise. This should be a trusted value.\n\t//\n\t// Note that a site root is not a sandbox. Although the file server does\n\t// sanitize the request URI to prevent directory traversal, files (including\n\t// links) within the site root may be directly accessed based on the request\n\t// path. Files and folders within the root should be secure and trustworthy.\n\tRoot string `json:\"root,omitempty\"`\n\n\t// A list of files or folders to hide; the file server will pretend as if\n\t// they don't exist. Accepts globular patterns like `*.ext` or `/foo/*/bar`\n\t// as well as placeholders. Because site roots can be dynamic, this list\n\t// uses file system paths, not request paths. To clarify, the base of\n\t// relative paths is the current working directory, NOT the site root.\n\t//\n\t// Entries without a path separator (`/` or `\\` depending on OS) will match\n\t// any file or directory of that name regardless of its path. To hide only a\n\t// specific file with a name that may not be unique, always use a path\n\t// separator. For example, to hide all files or folder trees named \"hidden\",\n\t// put \"hidden\" in the list. To hide only ./hidden, put \"./hidden\" in the list.\n\t//\n\t// When possible, all paths are resolved to their absolute form before\n\t// comparisons are made. For maximum clarity and explicitness, use complete,\n\t// absolute paths; or, for greater portability, use relative paths instead.\n\t//\n\t// Note that hide comparisons are case-sensitive. 
On case-insensitive\n\t// filesystems, requests with different path casing may still resolve to the\n\t// same file or directory on disk, so hide should not be treated as a\n\t// security boundary for sensitive paths.\n\tHide []string `json:\"hide,omitempty\"`\n\n\t// The names of files to try as index files if a folder is requested.\n\t// Default: index.html, index.txt.\n\tIndexNames []string `json:\"index_names,omitempty\"`\n\n\t// Enables file listings if a directory was requested and no index\n\t// file is present.\n\tBrowse *Browse `json:\"browse,omitempty\"`\n\n\t// Use redirects to enforce trailing slashes for directories, or to\n\t// remove trailing slash from URIs for files. Default is true.\n\t//\n\t// Canonicalization will not happen if the last element of the request's\n\t// path (the filename) is changed in an internal rewrite, to avoid\n\t// clobbering the explicit rewrite with implicit behavior.\n\tCanonicalURIs *bool `json:\"canonical_uris,omitempty\"`\n\n\t// Override the status code written when successfully serving a file.\n\t// Particularly useful when explicitly serving a file as display for\n\t// an error, like a 404 page. A placeholder may be used. By default,\n\t// the status code will typically be 200, or 206 for partial content.\n\tStatusCode caddyhttp.WeakString `json:\"status_code,omitempty\"`\n\n\t// If pass-thru mode is enabled and a requested file is not found,\n\t// it will invoke the next handler in the chain instead of returning\n\t// a 404 error. 
By default, this is false (disabled).\n\tPassThru bool `json:\"pass_thru,omitempty\"`\n\n\t// Selection of encoders to use to check for precompressed files.\n\tPrecompressedRaw caddy.ModuleMap `json:\"precompressed,omitempty\" caddy:\"namespace=http.precompressed\"`\n\n\t// If the client has no strong preference (q-factor), choose these encodings in order.\n\t// If no order is specified here, the first encoding from the Accept-Encoding header\n\t// that both client and server support is used.\n\tPrecompressedOrder []string `json:\"precompressed_order,omitempty\"`\n\tprecompressors     map[string]encode.Precompressed\n\n\t// List of file extensions to try to read Etags from.\n\t// If set, file Etags will be read from sidecar files\n\t// with any of these suffixes, instead of generating\n\t// our own Etag.\n\t// Keep in mind that the Etag values in the files have to be quoted as per RFC 7232.\n\t// See https://datatracker.ietf.org/doc/html/rfc7232#section-2.3 for a few examples.\n\tEtagFileExtensions []string `json:\"etag_file_extensions,omitempty\"`\n\n\tfsmap caddy.FileSystems\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (FileServer) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.file_server\",\n\t\tNew: func() caddy.Module { return new(FileServer) },\n\t}\n}\n\n// Provision sets up the static files responder.\nfunc (fsrv *FileServer) Provision(ctx caddy.Context) error {\n\tfsrv.logger = ctx.Logger()\n\n\tfsrv.fsmap = ctx.FileSystems()\n\n\tif fsrv.FileSystem == \"\" {\n\t\tfsrv.FileSystem = \"{http.vars.fs}\"\n\t}\n\n\tif fsrv.Root == \"\" {\n\t\tfsrv.Root = \"{http.vars.root}\"\n\t}\n\n\tif fsrv.IndexNames == nil {\n\t\tfsrv.IndexNames = defaultIndexNames\n\t}\n\n\t// for hide paths that are static (i.e. 
no placeholders), we can transform them into\n\t// absolute paths before the server starts for very slight performance improvement\n\tfor i, h := range fsrv.Hide {\n\t\tif !strings.Contains(h, \"{\") && strings.Contains(h, separator) {\n\t\t\tif abs, err := caddy.FastAbs(h); err == nil {\n\t\t\t\tfsrv.Hide[i] = abs\n\t\t\t}\n\t\t}\n\t}\n\n\t// support precompressed sidecar files\n\tmods, err := ctx.LoadModule(fsrv, \"PrecompressedRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading encoder modules: %v\", err)\n\t}\n\tfor modName, modIface := range mods.(map[string]any) {\n\t\tp, ok := modIface.(encode.Precompressed)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"module %s is not precompressor\", modName)\n\t\t}\n\t\tae := p.AcceptEncoding()\n\t\tif ae == \"\" {\n\t\t\treturn fmt.Errorf(\"precompressor does not specify an Accept-Encoding value\")\n\t\t}\n\t\tsuffix := p.Suffix()\n\t\tif suffix == \"\" {\n\t\t\treturn fmt.Errorf(\"precompressor does not specify a Suffix value\")\n\t\t}\n\t\tif _, ok := fsrv.precompressors[ae]; ok {\n\t\t\treturn fmt.Errorf(\"precompressor already added: %s\", ae)\n\t\t}\n\t\tif fsrv.precompressors == nil {\n\t\t\tfsrv.precompressors = make(map[string]encode.Precompressed)\n\t\t}\n\t\tfsrv.precompressors[ae] = p\n\t}\n\n\tif fsrv.Browse != nil {\n\t\t// check sort options\n\t\tfor idx, sortOption := range fsrv.Browse.SortOptions {\n\t\t\tswitch idx {\n\t\t\tcase 0:\n\t\t\t\tif sortOption != sortByName && sortOption != sortByNameDirFirst && sortOption != sortBySize && sortOption != sortByTime {\n\t\t\t\t\treturn fmt.Errorf(\"the first option must be one of the following: %s, %s, %s, %s, but got %s\", sortByName, sortByNameDirFirst, sortBySize, sortByTime, sortOption)\n\t\t\t\t}\n\t\t\tcase 1:\n\t\t\t\tif sortOption != sortOrderAsc && sortOption != sortOrderDesc {\n\t\t\t\t\treturn fmt.Errorf(\"the second option must be one of the following: %s, %s, but got %s\", sortOrderAsc, sortOrderDesc, 
sortOption)\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn fmt.Errorf(\"only max 2 sort options are allowed, but got %d\", idx+1)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (fsrv *FileServer) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tif runtime.GOOS == \"windows\" {\n\t\t// reject paths with Alternate Data Streams (ADS)\n\t\tif strings.Contains(r.URL.Path, \":\") {\n\t\t\treturn caddyhttp.Error(http.StatusBadRequest, fmt.Errorf(\"illegal ADS path\"))\n\t\t}\n\t\t// reject paths with \"8.3\" short names\n\t\ttrimmedPath := strings.TrimRight(r.URL.Path, \". \") // Windows ignores trailing dots and spaces, sigh\n\t\tif len(path.Base(trimmedPath)) <= 12 && strings.Contains(trimmedPath, \"~\") {\n\t\t\treturn caddyhttp.Error(http.StatusBadRequest, fmt.Errorf(\"illegal short name\"))\n\t\t}\n\t\t// both of those could bypass file hiding or possibly leak information even if the file is not hidden\n\t}\n\n\tfilesToHide := fsrv.transformHidePaths(repl)\n\n\troot := repl.ReplaceAll(fsrv.Root, \".\")\n\tfsName := repl.ReplaceAll(fsrv.FileSystem, \"\")\n\n\tfileSystem, ok := fsrv.fsmap.Get(fsName)\n\tif !ok {\n\t\treturn caddyhttp.Error(http.StatusNotFound, fmt.Errorf(\"filesystem not found\"))\n\t}\n\n\t// remove any trailing `/` as it breaks fs.ValidPath() in the stdlib\n\tfilename := strings.TrimSuffix(caddyhttp.SanitizedPathJoin(root, r.URL.Path), \"/\")\n\n\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"sanitized path join\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"site_root\", root),\n\t\t\tzap.String(\"fs\", fsName),\n\t\t\tzap.String(\"request_path\", r.URL.Path),\n\t\t\tzap.String(\"result\", filename),\n\t\t)\n\t}\n\n\t// get information about the file\n\tinfo, err := fs.Stat(fileSystem, filename)\n\tif err != nil {\n\t\terr = fsrv.mapDirOpenError(fileSystem, err, filename)\n\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\treturn fsrv.notFound(w, 
r, next)\n\t\t} else if errors.Is(err, fs.ErrInvalid) {\n\t\t\treturn caddyhttp.Error(http.StatusBadRequest, err)\n\t\t} else if errors.Is(err, fs.ErrPermission) {\n\t\t\treturn caddyhttp.Error(http.StatusForbidden, err)\n\t\t}\n\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t}\n\n\t// if the request mapped to a directory, see if\n\t// there is an index file we can serve\n\tvar implicitIndexFile bool\n\tif info.IsDir() && len(fsrv.IndexNames) > 0 {\n\t\tfor _, indexPage := range fsrv.IndexNames {\n\t\t\tindexPage := repl.ReplaceAll(indexPage, \"\")\n\t\t\tindexPath := caddyhttp.SanitizedPathJoin(filename, indexPage)\n\t\t\tif fileHidden(indexPath, filesToHide) {\n\t\t\t\t// pretend this file doesn't exist\n\t\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"hiding index file\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"filename\", indexPath),\n\t\t\t\t\t\tzap.Strings(\"files_to_hide\", filesToHide),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tindexInfo, err := fs.Stat(fileSystem, indexPath)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// don't rewrite the request path to append\n\t\t\t// the index file, because we might need to\n\t\t\t// do a canonical-URL redirect below based\n\t\t\t// on the URL as-is\n\n\t\t\t// we've chosen to use this index file,\n\t\t\t// so replace the last file info and path\n\t\t\t// with that of the index file\n\t\t\tinfo = indexInfo\n\t\t\tfilename = indexPath\n\t\t\timplicitIndexFile = true\n\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"located index file\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"filename\", filename))\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// if still referencing a directory, delegate\n\t// to browse or return an error\n\tif info.IsDir() {\n\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"no index file in directory\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"path\", filename),\n\t\t\t\tzap.Strings(\"index_filenames\", 
fsrv.IndexNames),\n\t\t\t)\n\t\t}\n\t\tif fsrv.Browse != nil && !fileHidden(filename, filesToHide) {\n\t\t\treturn fsrv.serveBrowse(fileSystem, root, filename, w, r, next)\n\t\t}\n\t\treturn fsrv.notFound(w, r, next)\n\t}\n\n\t// one last check to ensure the file isn't hidden (we might\n\t// have changed the filename from when we last checked)\n\tif fileHidden(filename, filesToHide) {\n\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"hiding file\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"filename\", filename),\n\t\t\t\tzap.Strings(\"files_to_hide\", filesToHide),\n\t\t\t)\n\t\t}\n\t\treturn fsrv.notFound(w, r, next)\n\t}\n\n\t// if URL canonicalization is enabled, we need to enforce trailing\n\t// slash convention: if a directory, trailing slash; if a file, no\n\t// trailing slash - not enforcing this can break relative hrefs\n\t// in HTML (see https://github.com/caddyserver/caddy/issues/2741)\n\tif fsrv.CanonicalURIs == nil || *fsrv.CanonicalURIs {\n\t\t// Only redirect if the last element of the path (the filename) was not\n\t\t// rewritten; if the admin wanted to rewrite to the canonical path, they\n\t\t// would have, and we have to be very careful not to introduce unwanted\n\t\t// redirects and especially redirect loops!\n\t\t// See https://github.com/caddyserver/caddy/issues/4205.\n\t\torigReq := r.Context().Value(caddyhttp.OriginalRequestCtxKey).(http.Request)\n\t\tif path.Base(origReq.URL.Path) == path.Base(r.URL.Path) {\n\t\t\tif implicitIndexFile && !strings.HasSuffix(origReq.URL.Path, \"/\") {\n\t\t\t\tto := origReq.URL.Path + \"/\"\n\t\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"redirecting to canonical URI (adding trailing slash for directory)\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"from_path\", origReq.URL.Path),\n\t\t\t\t\t\tzap.String(\"to_path\", to),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\treturn redirect(w, r, to)\n\t\t\t} else if !implicitIndexFile && strings.HasSuffix(origReq.URL.Path, \"/\") {\n\t\t\t\tto := 
origReq.URL.Path[:len(origReq.URL.Path)-1]\n\t\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"redirecting to canonical URI (removing trailing slash for file)\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"from_path\", origReq.URL.Path),\n\t\t\t\t\t\tzap.String(\"to_path\", to),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\treturn redirect(w, r, to)\n\t\t\t}\n\t\t}\n\t}\n\n\tvar file fs.File\n\trespHeader := w.Header()\n\n\t// etag is usually unset, but if the user knows what they're doing, let them override it\n\tetag := respHeader.Get(\"Etag\")\n\n\t// static file responses are often compressed, either on-the-fly\n\t// or with precompressed sidecar files; in any case, the headers\n\t// should contain \"Vary: Accept-Encoding\" even when not compressed\n\t// so caches can craft a reliable key (according to REDbot results)\n\t// see #5849\n\trespHeader.Add(\"Vary\", \"Accept-Encoding\")\n\n\t// check for precompressed files\n\tfor _, ae := range encode.AcceptedEncodings(r, fsrv.PrecompressedOrder) {\n\t\tprecompress, ok := fsrv.precompressors[ae]\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\t\tcompressedFilename := filename + precompress.Suffix()\n\t\tcompressedInfo, err := fs.Stat(fileSystem, compressedFilename)\n\t\tif err != nil || compressedInfo.IsDir() {\n\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"precompressed file not accessible\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"filename\", compressedFilename), zap.Error(err))\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"opening compressed sidecar file\"); c != nil {\n\t\t\tc.Write(zap.String(\"filename\", compressedFilename), zap.Error(err))\n\t\t}\n\t\tfile, err = fsrv.openFile(fileSystem, compressedFilename, w)\n\t\tif err != nil {\n\t\t\tif c := fsrv.logger.Check(zapcore.WarnLevel, \"opening precompressed file failed\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"filename\", compressedFilename), zap.Error(err))\n\t\t\t}\n\t\t\tif caddyErr, ok := 
err.(caddyhttp.HandlerError); ok && caddyErr.StatusCode == http.StatusServiceUnavailable {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfile = nil\n\t\t\tcontinue\n\t\t}\n\t\tdefer file.Close()\n\t\trespHeader.Set(\"Content-Encoding\", ae)\n\n\t\t// stdlib won't set Content-Length for non-range requests if Content-Encoding is set.\n\t\t// see: https://github.com/caddyserver/caddy/issues/7040\n\t\t// Setting the Range header manually will result in 206 Partial Content.\n\t\t// see: https://github.com/caddyserver/caddy/issues/7250\n\t\tif r.Header.Get(\"Range\") == \"\" {\n\t\t\trespHeader.Set(\"Content-Length\", strconv.FormatInt(compressedInfo.Size(), 10))\n\t\t}\n\n\t\t// try to get the etag from precomputed files if an etag suffix list was provided\n\t\tif etag == \"\" && fsrv.EtagFileExtensions != nil {\n\t\t\tetag, err = fsrv.getEtagFromFile(fileSystem, compressedFilename)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\t// don't assign info = compressedInfo because sidecars are kind\n\t\t// of transparent; however we do need to set the Etag:\n\t\t// https://caddy.community/t/gzipped-sidecar-file-wrong-same-etag/16793\n\t\tif etag == \"\" {\n\t\t\tetag = calculateEtag(compressedInfo)\n\t\t}\n\n\t\tbreak\n\t}\n\n\t// no precompressed file found, use the actual file\n\tif file == nil {\n\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"opening file\"); c != nil {\n\t\t\tc.Write(zap.String(\"filename\", filename))\n\t\t}\n\n\t\t// open the file\n\t\tfile, err = fsrv.openFile(fileSystem, filename, w)\n\t\tif err != nil {\n\t\t\tif herr, ok := err.(caddyhttp.HandlerError); ok &&\n\t\t\t\therr.StatusCode == http.StatusNotFound {\n\t\t\t\treturn fsrv.notFound(w, r, next)\n\t\t\t}\n\t\t\treturn err // error is already structured\n\t\t}\n\t\tdefer file.Close()\n\t\t// try to get the etag from precomputed files if an etag suffix list was provided\n\t\tif etag == \"\" && fsrv.EtagFileExtensions != nil {\n\t\t\tetag, err = fsrv.getEtagFromFile(fileSystem, 
filename)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tif etag == \"\" {\n\t\t\tetag = calculateEtag(info)\n\t\t}\n\t}\n\n\t// at this point, we're serving a file; Go std lib supports only\n\t// GET and HEAD, which is sensible for a static file server - reject\n\t// any other methods (see issue #5166)\n\tif r.Method != http.MethodGet && r.Method != http.MethodHead {\n\t\t// if we're in an error context, then it doesn't make sense\n\t\t// to repeat the error; just continue because we're probably\n\t\t// trying to write an error page response (see issue #5703)\n\t\tif _, ok := r.Context().Value(caddyhttp.ErrorCtxKey).(error); !ok {\n\t\t\trespHeader.Add(\"Allow\", \"GET, HEAD\")\n\t\t\treturn caddyhttp.Error(http.StatusMethodNotAllowed, nil)\n\t\t}\n\t}\n\n\t// set the Etag - note that a conditional If-None-Match request is handled\n\t// by http.ServeContent below, which checks against this Etag value\n\tif etag != \"\" {\n\t\trespHeader.Set(\"Etag\", etag)\n\t}\n\n\tif respHeader.Get(\"Content-Type\") == \"\" {\n\t\tmtyp := mime.TypeByExtension(filepath.Ext(filename))\n\t\tif mtyp == \"\" {\n\t\t\t// do not allow Go to sniff the content-type; see https://www.youtube.com/watch?v=8t8JYpt0egE\n\t\t\trespHeader[\"Content-Type\"] = nil\n\t\t} else {\n\t\t\trespHeader.Set(\"Content-Type\", mtyp)\n\t\t}\n\t}\n\n\tvar statusCodeOverride int\n\n\t// if this handler exists in an error context (i.e. 
is part of a\n\t// handler chain that is supposed to handle a previous error),\n\t// we should set status code to the one from the error instead\n\t// of letting http.ServeContent set the default (usually 200)\n\tif reqErr, ok := r.Context().Value(caddyhttp.ErrorCtxKey).(error); ok {\n\t\tstatusCodeOverride = http.StatusInternalServerError\n\t\tif handlerErr, ok := reqErr.(caddyhttp.HandlerError); ok {\n\t\t\tif handlerErr.StatusCode > 0 {\n\t\t\t\tstatusCodeOverride = handlerErr.StatusCode\n\t\t\t}\n\t\t}\n\t}\n\n\t// if a status code override is configured, run the replacer on it\n\tif codeStr := fsrv.StatusCode.String(); codeStr != \"\" {\n\t\tstatusCodeOverride, err = strconv.Atoi(repl.ReplaceAll(codeStr, \"\"))\n\t\tif err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\t}\n\n\t// if we do have an override from the previous two parts, then\n\t// we wrap the response writer to intercept the WriteHeader call\n\tif statusCodeOverride > 0 {\n\t\tw = statusOverrideResponseWriter{ResponseWriter: w, code: statusCodeOverride}\n\t}\n\n\t// let the standard library do what it does best; note, however,\n\t// that errors generated by ServeContent are written immediately\n\t// to the response, so we cannot handle them (but errors there\n\t// are rare)\n\thttp.ServeContent(w, r, info.Name(), info.ModTime(), file.(io.ReadSeeker))\n\n\treturn nil\n}\n\n// openFile opens the file at the given filename. 
If there was an error,\n// the response is configured to inform the client how to best handle it\n// and a well-described handler error is returned (do not wrap the\n// returned error value).\nfunc (fsrv *FileServer) openFile(fileSystem fs.FS, filename string, w http.ResponseWriter) (fs.File, error) {\n\tfile, err := fileSystem.Open(filename)\n\tif err != nil {\n\t\terr = fsrv.mapDirOpenError(fileSystem, err, filename)\n\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"file not found\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"filename\", filename), zap.Error(err))\n\t\t\t}\n\t\t\treturn nil, caddyhttp.Error(http.StatusNotFound, err)\n\t\t} else if errors.Is(err, fs.ErrPermission) {\n\t\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"permission denied\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"filename\", filename), zap.Error(err))\n\t\t\t}\n\t\t\treturn nil, caddyhttp.Error(http.StatusForbidden, err)\n\t\t}\n\t\t// maybe the server is under load and ran out of file descriptors?\n\t\t// have the client wait a random number of seconds to help prevent a stampede\n\t\t//nolint:gosec\n\t\tbackoff := weakrand.IntN(maxBackoff-minBackoff) + minBackoff\n\t\tw.Header().Set(\"Retry-After\", strconv.Itoa(backoff))\n\t\tif c := fsrv.logger.Check(zapcore.DebugLevel, \"retry after backoff\"); c != nil {\n\t\t\tc.Write(zap.String(\"filename\", filename), zap.Int(\"backoff\", backoff), zap.Error(err))\n\t\t}\n\t\treturn nil, caddyhttp.Error(http.StatusServiceUnavailable, err)\n\t}\n\treturn file, nil\n}\n\n// mapDirOpenError maps the provided non-nil error from opening name\n// to a possibly better non-nil error. In particular, it turns OS-specific errors\n// about opening files in non-directories into os.ErrNotExist. 
See golang/go#18984.\n// Adapted from the Go standard library; originally written by Nathaniel Caza.\n// https://go-review.googlesource.com/c/go/+/36635/\n// https://go-review.googlesource.com/c/go/+/36804/\nfunc (fsrv *FileServer) mapDirOpenError(fileSystem fs.FS, originalErr error, name string) error {\n\tif errors.Is(originalErr, fs.ErrNotExist) || errors.Is(originalErr, fs.ErrPermission) {\n\t\treturn originalErr\n\t}\n\n\tvar pathErr *fs.PathError\n\tif errors.As(originalErr, &pathErr) {\n\t\treturn fs.ErrInvalid\n\t}\n\n\tparts := strings.Split(name, separator)\n\tfor i := range parts {\n\t\tif parts[i] == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tfi, err := fs.Stat(fileSystem, strings.Join(parts[:i+1], separator))\n\t\tif err != nil {\n\t\t\treturn originalErr\n\t\t}\n\t\tif !fi.IsDir() {\n\t\t\treturn fs.ErrNotExist\n\t\t}\n\t}\n\n\treturn originalErr\n}\n\n// transformHidePaths performs replacements for all the elements of fsrv.Hide and\n// makes them absolute paths (if they contain a path separator), then returns a\n// new list of the transformed values.\nfunc (fsrv *FileServer) transformHidePaths(repl *caddy.Replacer) []string {\n\thide := make([]string, len(fsrv.Hide))\n\tfor i := range fsrv.Hide {\n\t\thide[i] = repl.ReplaceAll(fsrv.Hide[i], \"\")\n\t\tif strings.Contains(hide[i], separator) {\n\t\t\tabs, err := caddy.FastAbs(hide[i])\n\t\t\tif err == nil {\n\t\t\t\thide[i] = abs\n\t\t\t}\n\t\t}\n\t}\n\treturn hide\n}\n\n// fileHidden returns true if filename is hidden according to the hide list.\n// filename must be a relative or absolute file system path, not a request\n// URI path. 
It is expected that all the paths in the hide list are absolute\n// paths or are singular filenames (without a path separator).\nfunc fileHidden(filename string, hide []string) bool {\n\tif len(hide) == 0 {\n\t\treturn false\n\t}\n\n\t// all path comparisons use the complete absolute path if possible\n\tfilenameAbs, err := caddy.FastAbs(filename)\n\tif err == nil {\n\t\tfilename = filenameAbs\n\t}\n\n\tvar components []string\n\n\tfor _, h := range hide {\n\t\tif !strings.Contains(h, separator) {\n\t\t\t// if there is no separator in h, then we assume the user\n\t\t\t// wants to hide any files or folders that match that\n\t\t\t// name; thus we have to compare against each component\n\t\t\t// of the filename, e.g. hiding \"bar\" would hide \"/bar\"\n\t\t\t// as well as \"/foo/bar/baz\" but not \"/barstool\".\n\t\t\tif len(components) == 0 {\n\t\t\t\tcomponents = strings.Split(filename, separator)\n\t\t\t}\n\t\t\tfor _, c := range components {\n\t\t\t\tif hidden, _ := filepath.Match(h, c); hidden {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t} else if after, ok := strings.CutPrefix(filename, h); ok {\n\t\t\t// if there is a separator in h, and filename is exactly\n\t\t\t// prefixed with h, then we can do a prefix match so that\n\t\t\t// \"/foo\" matches \"/foo/bar\" but not \"/foobar\".\n\t\t\twithoutPrefix := after\n\t\t\tif strings.HasPrefix(withoutPrefix, separator) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\n\t\t// in the general case, a glob match will suffice\n\t\tif hidden, _ := filepath.Match(h, filename); hidden {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\n// notFound returns a 404 error or, if pass-thru is enabled,\n// it calls the next handler in the chain.\nfunc (fsrv *FileServer) notFound(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tif fsrv.PassThru {\n\t\treturn next.ServeHTTP(w, r)\n\t}\n\treturn caddyhttp.Error(http.StatusNotFound, nil)\n}\n\n// calculateEtag computes an entity tag using a strong 
validator\n// without consuming the contents of the file. It requires the\n// file info contain the correct size and modification time.\n// It strives to implement the semantics regarding ETags as defined\n// by RFC 9110 section 8.8.3 and 8.8.1. See\n// https://www.rfc-editor.org/rfc/rfc9110.html#section-8.8.3.\n//\n// As our implementation uses file modification timestamp and size,\n// note the following from RFC 9110 section 8.8.1: \"A representation's\n// modification time, if defined with only one-second resolution,\n// might be a weak validator if it is possible for the representation to\n// be modified twice during a single second and retrieved between those\n// modifications.\" The ext4 file system, which underpins the vast majority\n// of Caddy deployments, stores mod times with millisecond precision,\n// which we consider precise enough to qualify as a strong validator.\nfunc calculateEtag(d os.FileInfo) string {\n\tmtime := d.ModTime()\n\tif mtimeUnix := mtime.Unix(); mtimeUnix == 0 || mtimeUnix == 1 {\n\t\treturn \"\" // not useful anyway; see issue #5548\n\t}\n\tvar sb strings.Builder\n\tsb.WriteRune('\"')\n\tsb.WriteString(strconv.FormatInt(mtime.UnixNano(), 36))\n\tsb.WriteString(strconv.FormatInt(d.Size(), 36))\n\tsb.WriteRune('\"')\n\treturn sb.String()\n}\n\n// getEtagFromFile finds the first corresponding etag file for a given file in the file system and returns its content.\nfunc (fsrv *FileServer) getEtagFromFile(fileSystem fs.FS, filename string) (string, error) {\n\tfor _, suffix := range fsrv.EtagFileExtensions {\n\t\tetagFilename := filename + suffix\n\t\tetag, err := fs.ReadFile(fileSystem, etagFilename)\n\t\tif errors.Is(err, fs.ErrNotExist) {\n\t\t\tcontinue\n\t\t}\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"cannot read etag from file %s: %v\", etagFilename, err)\n\t\t}\n\n\t\t// Etags should not contain newline characters\n\t\tetag = bytes.ReplaceAll(etag, []byte(\"\\n\"), []byte{})\n\n\t\treturn string(etag), nil\n\t}\n\treturn \"\", 
nil\n}\n\n// redirect performs a redirect to a given path. The 'toPath' parameter\n// MUST be solely a path, and MUST NOT include a query.\nfunc redirect(w http.ResponseWriter, r *http.Request, toPath string) error {\n\tfor strings.HasPrefix(toPath, \"//\") {\n\t\t// prevent path-based open redirects\n\t\ttoPath = strings.TrimPrefix(toPath, \"/\")\n\t}\n\t// preserve the query string if present\n\tif r.URL.RawQuery != \"\" {\n\t\ttoPath += \"?\" + r.URL.RawQuery\n\t}\n\thttp.Redirect(w, r, toPath, http.StatusPermanentRedirect)\n\treturn nil\n}\n\n// statusOverrideResponseWriter intercepts WriteHeader calls\n// to write the HTTP status code we want instead\n// of the one http.ServeContent will use by default (usually 200)\ntype statusOverrideResponseWriter struct {\n\thttp.ResponseWriter\n\tcode int\n}\n\n// WriteHeader intercepts calls by the stdlib to WriteHeader\n// to instead write the HTTP status code we want.\nfunc (wr statusOverrideResponseWriter) WriteHeader(int) {\n\twr.ResponseWriter.WriteHeader(wr.code)\n}\n\n// Unwrap returns the underlying ResponseWriter, necessary for\n// http.ResponseController to work correctly.\nfunc (wr statusOverrideResponseWriter) Unwrap() http.ResponseWriter {\n\treturn wr.ResponseWriter\n}\n\nvar defaultIndexNames = []string{\"index.html\", \"index.txt\"}\n\nconst (\n\tminBackoff, maxBackoff = 2, 5\n\tseparator              = string(filepath.Separator)\n)\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*FileServer)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*FileServer)(nil)\n)\n
  },
  {
    "path": "modules/caddyhttp/fileserver/staticfiles_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fileserver\n\nimport (\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestFileHidden(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinputHide []string\n\t\tinputPath string\n\t\texpect    bool\n\t}{\n\t\t{\n\t\t\tinputHide: nil,\n\t\t\tinputPath: \"\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\".gitignore\"},\n\t\t\tinputPath: \"/.gitignore\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\".git\"},\n\t\t\tinputPath: \"/.gitignore\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/.git\"},\n\t\t\tinputPath: \"/.gitignore\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\".git\"},\n\t\t\tinputPath: \"/.git\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\".git\"},\n\t\t\tinputPath: \"/.git/foo\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\".git\"},\n\t\t\tinputPath: \"/foo/.git/bar\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/prefix\"},\n\t\t\tinputPath: \"/prefix/foo\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/foo/*/bar\"},\n\t\t\tinputPath: \"/foo/asdf/bar\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"*.txt\"},\n\t\t\tinputPath: \"/foo/bar.txt\",\n\t\t\texpect:    
true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/foo/bar/*.txt\"},\n\t\t\tinputPath: \"/foo/bar/baz.txt\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/foo/bar/*.txt\"},\n\t\t\tinputPath: \"/foo/bar.txt\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/foo/bar/*.txt\"},\n\t\t\tinputPath: \"/foo/bar/index.html\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/foo\"},\n\t\t\tinputPath: \"/foo\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"/foo\"},\n\t\t\tinputPath: \"/foobar\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tinputHide: []string{\"first\", \"second\"},\n\t\t\tinputPath: \"/second\",\n\t\t\texpect:    true,\n\t\t},\n\t} {\n\t\tif runtime.GOOS == \"windows\" {\n\t\t\tif strings.HasPrefix(tc.inputPath, \"/\") {\n\t\t\t\ttc.inputPath, _ = filepath.Abs(tc.inputPath)\n\t\t\t}\n\t\t\ttc.inputPath = filepath.FromSlash(tc.inputPath)\n\t\t\tfor i := range tc.inputHide {\n\t\t\t\tif strings.HasPrefix(tc.inputHide[i], \"/\") {\n\t\t\t\t\ttc.inputHide[i], _ = filepath.Abs(tc.inputHide[i])\n\t\t\t\t}\n\t\t\t\ttc.inputHide[i] = filepath.FromSlash(tc.inputHide[i])\n\t\t\t}\n\t\t}\n\n\t\tactual := fileHidden(tc.inputPath, tc.inputHide)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Does %v hide %s? Got %t but expected %t\",\n\t\t\t\ti, tc.inputHide, tc.inputPath, actual, tc.expect)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/%D9%85%D9%84%D9%81.txt",
    "content": "%D9%85%D9%84%D9%81.txt"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/foo.php.php/index.php",
    "content": "foo.php.php/index.php\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/foo.txt",
    "content": "foo.txt"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/foodir/bar.txt",
    "content": "foodir/bar.txt"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/foodir/foo.txt",
    "content": "foodir/foo.txt"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/index.php",
    "content": "index.php"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/large.txt",
    "content": "This is a file with more content than the other files in this directory\nsuch that tests using the largest_size policy pick this file, or the\nsmallest_size policy avoids this file."
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/notphp.php.txt",
    "content": "notphp.php.txt\n"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/remote.php",
    "content": "remote.php"
  },
  {
    "path": "modules/caddyhttp/fileserver/testdata/ملف.txt",
    "content": "ملف.txt"
  },
  {
    "path": "modules/caddyhttp/headers/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage headers\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"reflect\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterDirective(\"header\", parseCaddyfile)\n\thttpcaddyfile.RegisterDirective(\"request_header\", parseReqHdrCaddyfile)\n}\n\n// parseCaddyfile sets up the handler for response headers from\n// Caddyfile tokens. Syntax:\n//\n//\theader [<matcher>] [[+|-|?|>]<field> [<value|regexp>] [<replacement>]] {\n//\t\t[+]<field> [<value|regexp> [<replacement>]]\n//\t\t?<field> <default_value>\n//\t\t-<field>\n//\t\t><field>\n//\t\t[defer]\n//\t}\n//\n// Either a block can be opened or a single header field can be configured\n// in the first line, but not both in the same directive. Header operations\n// are deferred to write-time if any headers are being deleted or if the\n// 'defer' subdirective is used. + appends a header value, - deletes a field,\n// ? 
conditionally sets a value only if the header field is not already set,\n// and > sets a field with defer enabled.\nfunc parseCaddyfile(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\th.Next() // consume directive name\n\tmatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\th.Next() // consume the directive name again (matcher parsing resets)\n\n\tmakeHandler := func() Handler {\n\t\treturn Handler{\n\t\t\tResponse: &RespHeaderOps{\n\t\t\t\tHeaderOps: &HeaderOps{},\n\t\t\t},\n\t\t}\n\t}\n\thandler, handlerWithRequire := makeHandler(), makeHandler()\n\n\t// first see if headers are in the initial line\n\tvar hasArgs bool\n\tif h.NextArg() {\n\t\thasArgs = true\n\t\tfield := h.Val()\n\t\tvar value string\n\t\tvar replacement *string\n\t\tif h.NextArg() {\n\t\t\tvalue = h.Val()\n\t\t}\n\t\tif h.NextArg() {\n\t\t\targ := h.Val()\n\t\t\treplacement = &arg\n\t\t}\n\t\terr := applyHeaderOp(\n\t\t\thandler.Response.HeaderOps,\n\t\t\thandler.Response,\n\t\t\tfield,\n\t\t\tvalue,\n\t\t\treplacement,\n\t\t)\n\t\tif err != nil {\n\t\t\treturn nil, h.Err(err.Error())\n\t\t}\n\t\tif len(handler.Response.HeaderOps.Delete) > 0 {\n\t\t\thandler.Response.Deferred = true\n\t\t}\n\t}\n\n\t// if not, they should be in a block\n\tfor h.NextBlock(0) {\n\t\tfield := h.Val()\n\t\tif field == \"defer\" {\n\t\t\thandler.Response.Deferred = true\n\t\t\tcontinue\n\t\t}\n\t\tif field == \"match\" {\n\t\t\tresponseMatchers := make(map[string]caddyhttp.ResponseMatcher)\n\t\t\terr := caddyhttp.ParseNamedResponseMatcher(h.NewFromNextSegment(), responseMatchers)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tmatcher := responseMatchers[\"match\"]\n\t\t\thandler.Response.Require = &matcher\n\t\t\tcontinue\n\t\t}\n\t\tif hasArgs {\n\t\t\treturn nil, h.Err(\"cannot specify headers in both arguments and block\") // because it would be weird\n\t\t}\n\n\t\t// sometimes it is habitual for users to suffix a field name with a 
colon,\n\t\t// as if they were writing a curl command or something; see\n\t\t// https://caddy.community/t/v2-reverse-proxy-please-add-cors-example-to-the-docs/7349/19\n\t\tfield = strings.TrimSuffix(field, \":\")\n\n\t\tvar value string\n\t\tvar replacement *string\n\t\tif h.NextArg() {\n\t\t\tvalue = h.Val()\n\t\t}\n\t\tif h.NextArg() {\n\t\t\targ := h.Val()\n\t\t\treplacement = &arg\n\t\t}\n\n\t\thandlerToUse := handler\n\t\tif strings.HasPrefix(field, \"?\") {\n\t\t\thandlerToUse = handlerWithRequire\n\t\t}\n\n\t\terr := applyHeaderOp(\n\t\t\thandlerToUse.Response.HeaderOps,\n\t\t\thandlerToUse.Response,\n\t\t\tfield,\n\t\t\tvalue,\n\t\t\treplacement,\n\t\t)\n\t\tif err != nil {\n\t\t\treturn nil, h.Err(err.Error())\n\t\t}\n\t}\n\n\tvar configValues []httpcaddyfile.ConfigValue\n\tif !reflect.DeepEqual(handler, makeHandler()) {\n\t\tconfigValues = append(configValues, h.NewRoute(matcherSet, handler)...)\n\t}\n\tif !reflect.DeepEqual(handlerWithRequire, makeHandler()) {\n\t\tconfigValues = append(configValues, h.NewRoute(matcherSet, handlerWithRequire)...)\n\t}\n\n\treturn configValues, nil\n}\n\n// parseReqHdrCaddyfile sets up the handler for request headers\n// from Caddyfile tokens. 
Syntax:\n//\n//\trequest_header [<matcher>] [[+|-]<field> [<value|regexp>] [<replacement>]]\nfunc parseReqHdrCaddyfile(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\th.Next() // consume directive name\n\tmatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\th.Next() // consume the directive name again (matcher parsing resets)\n\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\tfield := h.Val()\n\n\thdr := Handler{\n\t\tRequest: &HeaderOps{},\n\t}\n\n\t// sometimes it is habitual for users to suffix a field name with a colon,\n\t// as if they were writing a curl command or something; see\n\t// https://caddy.community/t/v2-reverse-proxy-please-add-cors-example-to-the-docs/7349/19\n\tfield = strings.TrimSuffix(field, \":\")\n\n\tvar value string\n\tvar replacement *string\n\tif h.NextArg() {\n\t\tvalue = h.Val()\n\t}\n\tif h.NextArg() {\n\t\targ := h.Val()\n\t\treplacement = &arg\n\t\tif h.NextArg() {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t}\n\n\tif hdr.Request == nil {\n\t\thdr.Request = new(HeaderOps)\n\t}\n\tif err := CaddyfileHeaderOp(hdr.Request, field, value, replacement); err != nil {\n\t\treturn nil, h.Err(err.Error())\n\t}\n\n\tconfigValues := h.NewRoute(matcherSet, hdr)\n\n\tif h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\treturn configValues, nil\n}\n\n// CaddyfileHeaderOp applies a new header operation according to\n// field, value, and replacement. The field can be prefixed with\n// \"+\" or \"-\" to specify adding or removing; otherwise, the value\n// will be set (overriding any previous value). 
If replacement is\n// non-nil, value will be treated as a regular expression which\n// will be used to search and then replacement will be used to\n// complete the substring replacement; in that case, any + or -\n// prefix to field will be ignored.\nfunc CaddyfileHeaderOp(ops *HeaderOps, field, value string, replacement *string) error {\n\treturn applyHeaderOp(ops, nil, field, value, replacement)\n}\n\nfunc applyHeaderOp(ops *HeaderOps, respHeaderOps *RespHeaderOps, field, value string, replacement *string) error {\n\tswitch {\n\tcase strings.HasPrefix(field, \"+\"): // append\n\t\tif ops.Add == nil {\n\t\t\tops.Add = make(http.Header)\n\t\t}\n\t\tops.Add.Add(field[1:], value)\n\n\tcase strings.HasPrefix(field, \"-\"): // delete\n\t\tops.Delete = append(ops.Delete, field[1:])\n\t\tif respHeaderOps != nil {\n\t\t\trespHeaderOps.Deferred = true\n\t\t}\n\n\tcase strings.HasPrefix(field, \"?\"): // default (conditional on not existing) - response headers only\n\t\tif respHeaderOps == nil {\n\t\t\treturn fmt.Errorf(\"%v: the default header modifier ('?') can only be used on response headers; for conditional manipulation of request headers, use matchers\", field)\n\t\t}\n\t\tif respHeaderOps.Require == nil {\n\t\t\trespHeaderOps.Require = &caddyhttp.ResponseMatcher{\n\t\t\t\tHeaders: make(http.Header),\n\t\t\t}\n\t\t}\n\t\tfield = strings.TrimPrefix(field, \"?\")\n\t\trespHeaderOps.Require.Headers[field] = nil\n\t\tif respHeaderOps.Set == nil {\n\t\t\trespHeaderOps.Set = make(http.Header)\n\t\t}\n\t\trespHeaderOps.Set.Set(field, value)\n\n\tcase replacement != nil: // replace\n\t\t// allow defer shortcut for replace syntax\n\t\tif strings.HasPrefix(field, \">\") && respHeaderOps != nil {\n\t\t\trespHeaderOps.Deferred = true\n\t\t}\n\t\tif ops.Replace == nil {\n\t\t\tops.Replace = make(map[string][]Replacement)\n\t\t}\n\t\tfield = strings.TrimLeft(field, \"+-?>\")\n\t\tops.Replace[field] = append(\n\t\t\tops.Replace[field],\n\t\t\tReplacement{\n\t\t\t\tSearchRegexp: 
value,\n\t\t\t\tReplace:      *replacement,\n\t\t\t},\n\t\t)\n\n\tcase strings.HasPrefix(field, \">\"): // set (overwrite) with defer\n\t\tif ops.Set == nil {\n\t\t\tops.Set = make(http.Header)\n\t\t}\n\t\tops.Set.Set(field[1:], value)\n\t\tif respHeaderOps != nil {\n\t\t\trespHeaderOps.Deferred = true\n\t\t}\n\n\tdefault: // set (overwrite)\n\t\tif ops.Set == nil {\n\t\t\tops.Set = make(http.Header)\n\t\t}\n\t\tops.Set.Set(field, value)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/headers/headers.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage headers\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"regexp\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Handler{})\n}\n\n// Handler is a middleware which modifies request and response headers.\n//\n// Changes to headers are applied immediately, except for the response\n// headers when Deferred is true or when Required is set. 
In those cases,\n// the changes are applied when the headers are written to the response.\n// Note that deferred changes do not take effect if an error occurs later\n// in the middleware chain.\n//\n// Properties in this module accept placeholders.\n//\n// Response header operations can be conditioned upon response status code\n// and/or other header values.\ntype Handler struct {\n\tRequest  *HeaderOps     `json:\"request,omitempty\"`\n\tResponse *RespHeaderOps `json:\"response,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Handler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.headers\",\n\t\tNew: func() caddy.Module { return new(Handler) },\n\t}\n}\n\n// Provision sets up h's configuration.\nfunc (h *Handler) Provision(ctx caddy.Context) error {\n\tif h.Request != nil {\n\t\terr := h.Request.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif h.Response != nil {\n\t\terr := h.Response.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// Validate ensures h's configuration is valid.\nfunc (h Handler) Validate() error {\n\tif h.Request != nil {\n\t\terr := h.Request.validate()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif h.Response != nil && h.Response.HeaderOps != nil {\n\t\terr := h.Response.validate()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (h Handler) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tif h.Request != nil {\n\t\th.Request.ApplyToRequest(r)\n\t}\n\n\tif h.Response != nil {\n\t\tif h.Response.Deferred || h.Response.Require != nil {\n\t\t\tw = &responseWriterWrapper{\n\t\t\t\tResponseWriterWrapper: &caddyhttp.ResponseWriterWrapper{ResponseWriter: w},\n\t\t\t\treplacer:              repl,\n\t\t\t\trequire:               h.Response.Require,\n\t\t\t\theaderOps:             
h.Response.HeaderOps,\n\t\t\t}\n\t\t} else {\n\t\t\th.Response.ApplyTo(w.Header(), repl)\n\t\t}\n\t}\n\n\treturn next.ServeHTTP(w, r)\n}\n\n// HeaderOps defines manipulations for HTTP headers.\ntype HeaderOps struct {\n\t// Adds HTTP headers; does not replace any existing header fields.\n\tAdd http.Header `json:\"add,omitempty\"`\n\n\t// Sets HTTP headers; replaces existing header fields.\n\tSet http.Header `json:\"set,omitempty\"`\n\n\t// Names of HTTP header fields to delete. Basic wildcards are supported:\n\t//\n\t// - Start with `*` for all field names with the given suffix;\n\t// - End with `*` for all field names with the given prefix;\n\t// - Start and end with `*` for all field names containing a substring.\n\tDelete []string `json:\"delete,omitempty\"`\n\n\t// Performs in-situ substring replacements of HTTP headers.\n\t// Keys are the field names on which to perform the associated replacements.\n\t// If the field name is `*`, the replacements are performed on all header fields.\n\tReplace map[string][]Replacement `json:\"replace,omitempty\"`\n}\n\n// Provision sets up the header operations.\nfunc (ops *HeaderOps) Provision(_ caddy.Context) error {\n\tif ops == nil {\n\t\treturn nil // it's possible no ops are configured; fix #6893\n\t}\n\tfor fieldName, replacements := range ops.Replace {\n\t\tfor i, r := range replacements {\n\t\t\tif r.SearchRegexp == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Check if it contains placeholders\n\t\t\tif containsPlaceholders(r.SearchRegexp) {\n\t\t\t\t// Contains placeholders, skips precompilation, and recompiles at runtime\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Does not contain placeholders, safe to precompile\n\t\t\tre, err := regexp.Compile(r.SearchRegexp)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"replacement %d for header field '%s': %v\", i, fieldName, err)\n\t\t\t}\n\t\t\treplacements[i].re = re\n\t\t}\n\t}\n\treturn nil\n}\n\n// containsPlaceholders checks if the string contains Caddy placeholder 
syntax {key}\nfunc containsPlaceholders(s string) bool {\n\t_, after, ok := strings.Cut(s, \"{\")\n\tif !ok {\n\t\treturn false\n\t}\n\tcloseIdx := strings.Index(after, \"}\")\n\tif closeIdx == -1 {\n\t\treturn false\n\t}\n\t// Make sure there is content between the brackets\n\treturn closeIdx > 0\n}\n\nfunc (ops HeaderOps) validate() error {\n\tfor fieldName, replacements := range ops.Replace {\n\t\tfor _, r := range replacements {\n\t\t\tif r.Search != \"\" && r.SearchRegexp != \"\" {\n\t\t\t\treturn fmt.Errorf(\"cannot specify both a substring search and a regular expression search for field '%s'\", fieldName)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// Replacement describes a string replacement,\n// either a simple and fast substring search\n// or a slower but more powerful regex search.\ntype Replacement struct {\n\t// The substring to search for.\n\tSearch string `json:\"search,omitempty\"`\n\n\t// The regular expression to search with.\n\tSearchRegexp string `json:\"search_regexp,omitempty\"`\n\n\t// The string with which to replace matches.\n\tReplace string `json:\"replace,omitempty\"`\n\n\tre *regexp.Regexp\n}\n\n// RespHeaderOps defines manipulations for response headers.\ntype RespHeaderOps struct {\n\t*HeaderOps\n\n\t// If set, header operations will be deferred until\n\t// they are written out and only performed if the\n\t// response matches these criteria.\n\tRequire *caddyhttp.ResponseMatcher `json:\"require,omitempty\"`\n\n\t// If true, header operations will be deferred until\n\t// they are written out. 
Superseded if Require is set.\n\t// Usually you will need to set this to true if any\n\t// fields are being deleted.\n\tDeferred bool `json:\"deferred,omitempty\"`\n}\n\n// ApplyTo applies ops to hdr using repl.\nfunc (ops *HeaderOps) ApplyTo(hdr http.Header, repl *caddy.Replacer) {\n\tif ops == nil {\n\t\treturn\n\t}\n\t// before manipulating headers in other ways, check if there\n\t// is configuration to delete all headers, and do that first\n\t// because if a header is to be added, we don't want to delete\n\t// it also\n\tfor _, fieldName := range ops.Delete {\n\t\tfieldName = repl.ReplaceKnown(fieldName, \"\")\n\t\tif fieldName == \"*\" {\n\t\t\tclear(hdr)\n\t\t}\n\t}\n\n\t// add\n\tfor fieldName, vals := range ops.Add {\n\t\tfieldName = repl.ReplaceKnown(fieldName, \"\")\n\t\tfor _, v := range vals {\n\t\t\thdr.Add(fieldName, repl.ReplaceKnown(v, \"\"))\n\t\t}\n\t}\n\n\t// set\n\tfor fieldName, vals := range ops.Set {\n\t\tfieldName = repl.ReplaceKnown(fieldName, \"\")\n\t\tvar newVals []string\n\t\tfor i := range vals {\n\t\t\t// append to new slice so we don't overwrite\n\t\t\t// the original values in ops.Set\n\t\t\tnewVals = append(newVals, repl.ReplaceKnown(vals[i], \"\"))\n\t\t}\n\t\thdr.Set(fieldName, strings.Join(newVals, \",\"))\n\t}\n\n\t// delete\n\tfor _, fieldName := range ops.Delete {\n\t\tfieldName = strings.ToLower(repl.ReplaceKnown(fieldName, \"\"))\n\t\tif fieldName == \"*\" {\n\t\t\tcontinue // handled above\n\t\t}\n\t\tswitch {\n\t\tcase strings.HasPrefix(fieldName, \"*\") && strings.HasSuffix(fieldName, \"*\"):\n\t\t\tfor existingField := range hdr {\n\t\t\t\tif strings.Contains(strings.ToLower(existingField), fieldName[1:len(fieldName)-1]) {\n\t\t\t\t\tdelete(hdr, existingField)\n\t\t\t\t}\n\t\t\t}\n\t\tcase strings.HasPrefix(fieldName, \"*\"):\n\t\t\tfor existingField := range hdr {\n\t\t\t\tif strings.HasSuffix(strings.ToLower(existingField), fieldName[1:]) {\n\t\t\t\t\tdelete(hdr, existingField)\n\t\t\t\t}\n\t\t\t}\n\t\tcase 
strings.HasSuffix(fieldName, \"*\"):\n\t\t\tfor existingField := range hdr {\n\t\t\t\tif strings.HasPrefix(strings.ToLower(existingField), fieldName[:len(fieldName)-1]) {\n\t\t\t\t\tdelete(hdr, existingField)\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\thdr.Del(fieldName)\n\t\t}\n\t}\n\n\t// replace\n\tfor fieldName, replacements := range ops.Replace {\n\t\tfieldName = http.CanonicalHeaderKey(repl.ReplaceKnown(fieldName, \"\"))\n\n\t\t// all fields...\n\t\tif fieldName == \"*\" {\n\t\t\tfor _, r := range replacements {\n\t\t\t\tsearch := repl.ReplaceKnown(r.Search, \"\")\n\t\t\t\treplace := repl.ReplaceKnown(r.Replace, \"\")\n\t\t\t\tfor fieldName, vals := range hdr {\n\t\t\t\t\tfor i := range vals {\n\t\t\t\t\t\tif r.re != nil {\n\t\t\t\t\t\t\t// Use precompiled regular expressions\n\t\t\t\t\t\t\thdr[fieldName][i] = r.re.ReplaceAllString(hdr[fieldName][i], replace)\n\t\t\t\t\t\t} else if r.SearchRegexp != \"\" {\n\t\t\t\t\t\t\t// Runtime compilation of regular expressions\n\t\t\t\t\t\t\tsearchRegexp := repl.ReplaceKnown(r.SearchRegexp, \"\")\n\t\t\t\t\t\t\tif re, err := regexp.Compile(searchRegexp); err == nil {\n\t\t\t\t\t\t\t\thdr[fieldName][i] = re.ReplaceAllString(hdr[fieldName][i], replace)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t// If compilation fails, skip this replacement\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\thdr[fieldName][i] = strings.ReplaceAll(hdr[fieldName][i], search, replace)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// ...or only with the named field\n\t\tfor _, r := range replacements {\n\t\t\tsearch := repl.ReplaceKnown(r.Search, \"\")\n\t\t\treplace := repl.ReplaceKnown(r.Replace, \"\")\n\t\t\tfor hdrFieldName, vals := range hdr {\n\t\t\t\t// see issue #4330 for why we don't simply use hdr[fieldName]\n\t\t\t\tif http.CanonicalHeaderKey(hdrFieldName) != fieldName {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tfor i := range vals {\n\t\t\t\t\tif r.re != nil {\n\t\t\t\t\t\thdr[hdrFieldName][i] = 
r.re.ReplaceAllString(hdr[hdrFieldName][i], replace)\n\t\t\t\t\t} else if r.SearchRegexp != \"\" {\n\t\t\t\t\t\tsearchRegexp := repl.ReplaceKnown(r.SearchRegexp, \"\")\n\t\t\t\t\t\tif re, err := regexp.Compile(searchRegexp); err == nil {\n\t\t\t\t\t\t\thdr[hdrFieldName][i] = re.ReplaceAllString(hdr[hdrFieldName][i], replace)\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\thdr[hdrFieldName][i] = strings.ReplaceAll(hdr[hdrFieldName][i], search, replace)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n// ApplyToRequest applies ops to r, specially handling the Host\n// header which the standard library does not include with the\n// header map with all the others. This method mutates r.Host.\nfunc (ops HeaderOps) ApplyToRequest(r *http.Request) {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// capture the current Host header so we can\n\t// reset to it when we're done\n\torigHost, hadHost := r.Header[\"Host\"]\n\n\t// append r.Host; this way, we know that our value\n\t// was last in the list, and if an Add operation\n\t// appended something else after it, that's probably\n\t// fine because it's weird to have multiple Host\n\t// headers anyway and presumably the one they added\n\t// is the one they wanted\n\tr.Header[\"Host\"] = append(r.Header[\"Host\"], r.Host)\n\n\t// apply header operations\n\tops.ApplyTo(r.Header, repl)\n\n\t// retrieve the last Host value (likely the one we appended)\n\tif len(r.Header[\"Host\"]) > 0 {\n\t\tr.Host = r.Header[\"Host\"][len(r.Header[\"Host\"])-1]\n\t} else {\n\t\tr.Host = \"\"\n\t}\n\n\t// reset the Host header slice\n\tif hadHost {\n\t\tr.Header[\"Host\"] = origHost\n\t} else {\n\t\tdelete(r.Header, \"Host\")\n\t}\n}\n\n// responseWriterWrapper defers response header\n// operations until WriteHeader is called.\ntype responseWriterWrapper struct {\n\t*caddyhttp.ResponseWriterWrapper\n\treplacer    *caddy.Replacer\n\trequire     *caddyhttp.ResponseMatcher\n\theaderOps   *HeaderOps\n\twroteHeader 
bool\n}\n\nfunc (rww *responseWriterWrapper) WriteHeader(status int) {\n\tif rww.wroteHeader {\n\t\treturn\n\t}\n\t// 1xx responses aren't final; just informational\n\tif status < 100 || status > 199 {\n\t\trww.wroteHeader = true\n\t}\n\tif rww.require == nil || rww.require.Match(status, rww.ResponseWriterWrapper.Header()) {\n\t\tif rww.headerOps != nil {\n\t\t\trww.headerOps.ApplyTo(rww.ResponseWriterWrapper.Header(), rww.replacer)\n\t\t}\n\t}\n\trww.ResponseWriterWrapper.WriteHeader(status)\n}\n\nfunc (rww *responseWriterWrapper) Write(d []byte) (int, error) {\n\tif !rww.wroteHeader {\n\t\trww.WriteHeader(http.StatusOK)\n\t}\n\treturn rww.ResponseWriterWrapper.Write(d)\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Handler)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Handler)(nil)\n\t_ http.ResponseWriter         = (*responseWriterWrapper)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/headers/headers_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage headers\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc TestHandler(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\thandler            Handler\n\t\treqHeader          http.Header\n\t\trespHeader         http.Header\n\t\trespStatusCode     int\n\t\texpectedReqHeader  http.Header\n\t\texpectedRespHeader http.Header\n\t}{\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tRequest: &HeaderOps{\n\t\t\t\t\tAdd: http.Header{\n\t\t\t\t\t\t\"Expose-Secrets\": []string{\"always\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\treqHeader: http.Header{\n\t\t\t\t\"Expose-Secrets\": []string{\"i'm serious\"},\n\t\t\t},\n\t\t\texpectedReqHeader: http.Header{\n\t\t\t\t\"Expose-Secrets\": []string{\"i'm serious\", \"always\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tRequest: &HeaderOps{\n\t\t\t\t\tSet: http.Header{\n\t\t\t\t\t\t\"Who-Wins\": []string{\"batman\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\treqHeader: http.Header{\n\t\t\t\t\"Who-Wins\": []string{\"joker\"},\n\t\t\t},\n\t\t\texpectedReqHeader: http.Header{\n\t\t\t\t\"Who-Wins\": []string{\"batman\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tRequest: &HeaderOps{\n\t\t\t\t\tDelete: 
[]string{\"Kick-Me\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\treqHeader: http.Header{\n\t\t\t\t\"Kick-Me\": []string{\"if you can\"},\n\t\t\t\t\"Keep-Me\": []string{\"i swear i'm innocent\"},\n\t\t\t},\n\t\t\texpectedReqHeader: http.Header{\n\t\t\t\t\"Keep-Me\": []string{\"i swear i'm innocent\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tRequest: &HeaderOps{\n\t\t\t\t\tDelete: []string{\n\t\t\t\t\t\t\"*-suffix\",\n\t\t\t\t\t\t\"prefix-*\",\n\t\t\t\t\t\t\"*_*\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\treqHeader: http.Header{\n\t\t\t\t\"Header-Suffix\": []string{\"lalala\"},\n\t\t\t\t\"Prefix-Test\":   []string{\"asdf\"},\n\t\t\t\t\"Host_Header\":   []string{\"silly django... sigh\"}, // see issue #4830\n\t\t\t\t\"Keep-Me\":       []string{\"foofoofoo\"},\n\t\t\t},\n\t\t\texpectedReqHeader: http.Header{\n\t\t\t\t\"Keep-Me\": []string{\"foofoofoo\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tRequest: &HeaderOps{\n\t\t\t\t\tReplace: map[string][]Replacement{\n\t\t\t\t\t\t\"Best-Server\": {\n\t\t\t\t\t\t\tReplacement{\n\t\t\t\t\t\t\t\tSearch:  \"NGINX\",\n\t\t\t\t\t\t\t\tReplace: \"the Caddy web server\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tReplacement{\n\t\t\t\t\t\t\t\tSearchRegexp: `Apache(\\d+)`,\n\t\t\t\t\t\t\t\tReplace:      \"Caddy\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\treqHeader: http.Header{\n\t\t\t\t\"Best-Server\": []string{\"it's NGINX, undoubtedly\", \"I love Apache2\"},\n\t\t\t},\n\t\t\texpectedReqHeader: http.Header{\n\t\t\t\t\"Best-Server\": []string{\"it's the Caddy web server, undoubtedly\", \"I love Caddy\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tResponse: &RespHeaderOps{\n\t\t\t\t\tRequire: &caddyhttp.ResponseMatcher{\n\t\t\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\t\t\"Cache-Control\": nil,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tHeaderOps: &HeaderOps{\n\t\t\t\t\t\tAdd: http.Header{\n\t\t\t\t\t\t\t\"Cache-Control\": 
[]string{\"no-cache\"},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\trespHeader: http.Header{},\n\t\t\texpectedRespHeader: http.Header{\n\t\t\t\t\"Cache-Control\": []string{\"no-cache\"},\n\t\t\t},\n\t\t},\n\t\t{ // same as above, but checks that response headers are left alone when \"Require\" conditions are unmet\n\t\t\thandler: Handler{\n\t\t\t\tResponse: &RespHeaderOps{\n\t\t\t\t\tRequire: &caddyhttp.ResponseMatcher{\n\t\t\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\t\t\"Cache-Control\": nil,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tHeaderOps: &HeaderOps{\n\t\t\t\t\t\tAdd: http.Header{\n\t\t\t\t\t\t\t\"Cache-Control\": []string{\"no-cache\"},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\trespHeader: http.Header{\n\t\t\t\t\"Cache-Control\": []string{\"something\"},\n\t\t\t},\n\t\t\texpectedRespHeader: http.Header{\n\t\t\t\t\"Cache-Control\": []string{\"something\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tResponse: &RespHeaderOps{\n\t\t\t\t\tRequire: &caddyhttp.ResponseMatcher{\n\t\t\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\t\t\"Cache-Control\": []string{\"no-cache\"},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tHeaderOps: &HeaderOps{\n\t\t\t\t\t\tDelete: []string{\"Cache-Control\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\trespHeader: http.Header{\n\t\t\t\t\"Cache-Control\": []string{\"no-cache\"},\n\t\t\t},\n\t\t\texpectedRespHeader: http.Header{},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tResponse: &RespHeaderOps{\n\t\t\t\t\tRequire: &caddyhttp.ResponseMatcher{\n\t\t\t\t\t\tStatusCode: []int{5},\n\t\t\t\t\t},\n\t\t\t\t\tHeaderOps: &HeaderOps{\n\t\t\t\t\t\tAdd: http.Header{\n\t\t\t\t\t\t\t\"Fail-5xx\": []string{\"true\"},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\trespStatusCode: 503,\n\t\t\trespHeader:     http.Header{},\n\t\t\texpectedRespHeader: http.Header{\n\t\t\t\t\"Fail-5xx\": []string{\"true\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\thandler: Handler{\n\t\t\t\tRequest: 
&HeaderOps{\n\t\t\t\t\tReplace: map[string][]Replacement{\n\t\t\t\t\t\t\"Case-Insensitive\": {\n\t\t\t\t\t\t\tReplacement{\n\t\t\t\t\t\t\t\tSearch:  \"issue4330\",\n\t\t\t\t\t\t\t\tReplace: \"issue #4330\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\treqHeader: http.Header{\n\t\t\t\t\"case-insensitive\": []string{\"issue4330\"},\n\t\t\t\t\"Other-Header\":     []string{\"issue4330\"},\n\t\t\t},\n\t\t\texpectedReqHeader: http.Header{\n\t\t\t\t\"case-insensitive\": []string{\"issue #4330\"},\n\t\t\t\t\"Other-Header\":     []string{\"issue4330\"},\n\t\t\t},\n\t\t},\n\t} {\n\t\trr := httptest.NewRecorder()\n\n\t\treq := &http.Request{Header: tc.reqHeader}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\n\t\ttc.handler.Provision(caddy.Context{})\n\n\t\tnext := nextHandler(func(w http.ResponseWriter, r *http.Request) error {\n\t\t\tfor k, hdrs := range tc.respHeader {\n\t\t\t\tfor _, v := range hdrs {\n\t\t\t\t\tw.Header().Add(k, v)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tstatus := 200\n\t\t\tif tc.respStatusCode != 0 {\n\t\t\t\tstatus = tc.respStatusCode\n\t\t\t}\n\t\t\tw.WriteHeader(status)\n\n\t\t\tif tc.expectedReqHeader != nil && !reflect.DeepEqual(r.Header, tc.expectedReqHeader) {\n\t\t\t\treturn fmt.Errorf(\"expected request header %v, got %v\", tc.expectedReqHeader, r.Header)\n\t\t\t}\n\n\t\t\treturn nil\n\t\t})\n\n\t\tif err := tc.handler.ServeHTTP(rr, req, next); err != nil {\n\t\t\tt.Errorf(\"Test %d: %v\", i, err)\n\t\t\tcontinue\n\t\t}\n\n\t\tactual := rr.Header()\n\t\tif tc.expectedRespHeader != nil && !reflect.DeepEqual(actual, tc.expectedRespHeader) {\n\t\t\tt.Errorf(\"Test %d: expected response header %v, got %v\", i, tc.expectedRespHeader, actual)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\ntype nextHandler func(http.ResponseWriter, *http.Request) error\n\nfunc (f nextHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) error {\n\treturn f(w, 
r)\n}\n\nfunc TestContainsPlaceholders(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput    string\n\t\texpected bool\n\t}{\n\t\t{\"static\", false},\n\t\t{\"{placeholder}\", true},\n\t\t{\"prefix-{placeholder}-suffix\", true},\n\t\t{\"{}\", false},\n\t\t{\"no-braces\", false},\n\t\t{\"{unclosed\", false},\n\t\t{\"unopened}\", false},\n\t} {\n\t\tactual := containsPlaceholders(tc.input)\n\t\tif actual != tc.expected {\n\t\t\tt.Errorf(\"Test %d: containsPlaceholders(%q) = %v, expected %v\", i, tc.input, actual, tc.expected)\n\t\t}\n\t}\n}\n\nfunc TestHeaderProvisionSkipsPlaceholders(t *testing.T) {\n\tops := &HeaderOps{\n\t\tReplace: map[string][]Replacement{\n\t\t\t\"Static\": {\n\t\t\t\tReplacement{SearchRegexp: \":443\", Replace: \"STATIC\"},\n\t\t\t},\n\t\t\t\"Dynamic\": {\n\t\t\t\tReplacement{SearchRegexp: \":{http.request.local.port}\", Replace: \"DYNAMIC\"},\n\t\t\t},\n\t\t},\n\t}\n\n\terr := ops.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"Provision failed: %v\", err)\n\t}\n\n\t// Static regex should be precompiled\n\tif ops.Replace[\"Static\"][0].re == nil {\n\t\tt.Error(\"Expected static regex to be precompiled\")\n\t}\n\n\t// Dynamic regex with placeholder should not be precompiled\n\tif ops.Replace[\"Dynamic\"][0].re != nil {\n\t\tt.Error(\"Expected dynamic regex with placeholder to not be precompiled\")\n\t}\n}\n\nfunc TestPlaceholderInSearchRegexp(t *testing.T) {\n\thandler := Handler{\n\t\tResponse: &RespHeaderOps{\n\t\t\tHeaderOps: &HeaderOps{\n\t\t\t\tReplace: map[string][]Replacement{\n\t\t\t\t\t\"Test-Header\": {\n\t\t\t\t\t\tReplacement{\n\t\t\t\t\t\t\tSearchRegexp: \":{http.request.local.port}\",\n\t\t\t\t\t\t\tReplace:      \"PLACEHOLDER-WORKS\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\n\t// Provision the handler\n\terr := handler.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"Provision failed: %v\", err)\n\t}\n\n\treplacement := 
handler.Response.HeaderOps.Replace[\"Test-Header\"][0]\n\tt.Logf(\"After provision - SearchRegexp: %q, re: %v\", replacement.SearchRegexp, replacement.re)\n\n\trr := httptest.NewRecorder()\n\n\treq := httptest.NewRequest(\"GET\", \"http://localhost:443/\", nil)\n\trepl := caddy.NewReplacer()\n\trepl.Set(\"http.request.local.port\", \"443\")\n\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\treq = req.WithContext(ctx)\n\n\trr.Header().Set(\"Test-Header\", \"prefix:443suffix\")\n\tt.Logf(\"Initial header: %v\", rr.Header())\n\n\tnext := nextHandler(func(w http.ResponseWriter, r *http.Request) error {\n\t\tw.WriteHeader(200)\n\t\treturn nil\n\t})\n\n\terr = handler.ServeHTTP(rr, req, next)\n\tif err != nil {\n\t\tt.Fatalf(\"ServeHTTP failed: %v\", err)\n\t}\n\n\tt.Logf(\"Final header: %v\", rr.Header())\n\n\tresult := rr.Header().Get(\"Test-Header\")\n\texpected := \"prefixPLACEHOLDER-WORKSsuffix\"\n\tif result != expected {\n\t\tt.Errorf(\"Expected header value %q, got %q\", expected, result)\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/http2listener.go",
    "content": "package caddyhttp\n\nimport (\n\t\"crypto/tls\"\n\t\"io\"\n\t\"net\"\n\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/net/http2\"\n)\n\ntype connectionStater interface {\n\tConnectionState() tls.ConnectionState\n}\n\n// http2Listener wraps the listener to solve the following problems:\n// 1. prevent genuine h2c connections from succeeding if h2c is not enabled\n// and the connection doesn't implment connectionStater or the resulting NegotiatedProtocol\n// isn't http2.\n// This does allow a connection to pass as tls enabled even if it's not, listener wrappers\n// can do this.\n// 2. After wrapping the connection doesn't implement connectionStater, emit a warning so that listener\n// wrapper authors will hopefully implement it.\n// 3. check if the connection matches a specific http version. h2/h2c has a distinct preface.\ntype http2Listener struct {\n\tuseTLS bool\n\tuseH1  bool\n\tuseH2  bool\n\tnet.Listener\n\tlogger *zap.Logger\n}\n\nfunc (h *http2Listener) Accept() (net.Conn, error) {\n\tconn, err := h.Listener.Accept()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// *tls.Conn doesn't need to be wrapped because we already removed unwanted alpns\n\t// and handshake won't succeed without mutually supported alpns\n\tif tlsConn, ok := conn.(*tls.Conn); ok {\n\t\treturn tlsConn, nil\n\t}\n\n\t_, isConnectionStater := conn.(connectionStater)\n\t// emit a warning\n\tif h.useTLS && !isConnectionStater {\n\t\th.logger.Warn(\"tls is enabled, but listener wrapper returns a connection that doesn't implement connectionStater\")\n\t} else if !h.useTLS && isConnectionStater {\n\t\th.logger.Warn(\"tls is disabled, but listener wrapper returns a connection that implements connectionStater\")\n\t}\n\n\t// if both h1 and h2 are enabled, we don't need to check the preface\n\tif h.useH1 && h.useH2 {\n\t\tif isConnectionStater {\n\t\t\treturn tlsStateConn{conn}, nil\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\t// impossible both are false, either useH1 or useH2 must be 
true,\n\t// or else the listener wouldn't be created\n\th2Conn := &http2Conn{\n\t\th2Expected: h.useH2,\n\t\tlogger:     h.logger,\n\t\tConn:       conn,\n\t}\n\tif isConnectionStater {\n\t\treturn tlsStateConn{http2StateConn{h2Conn}}, nil\n\t}\n\treturn h2Conn, nil\n}\n\n// tlsStateConn wraps a net.Conn that implements connectionStater to hide that method\n// we can call netConn to get the original net.Conn and get the tls connection state\n// golang 1.25 will call that method, and it breaks h2 with connections other than *tls.Conn\ntype tlsStateConn struct {\n\tnet.Conn\n}\n\nfunc (conn tlsStateConn) tlsNetConn() net.Conn {\n\treturn conn.Conn\n}\n\ntype http2StateConn struct {\n\t*http2Conn\n}\n\nfunc (conn http2StateConn) ConnectionState() tls.ConnectionState {\n\treturn conn.Conn.(connectionStater).ConnectionState()\n}\n\ntype http2Conn struct {\n\t// current index where the preface should match,\n\t// no matching is done if idx is >= len(http2.ClientPreface)\n\tidx int\n\t// whether the connection is expected to be h2/h2c\n\th2Expected bool\n\t// log if one such connection is detected\n\tlogger *zap.Logger\n\tnet.Conn\n}\n\nfunc (c *http2Conn) Read(p []byte) (int, error) {\n\tif c.idx >= len(http2.ClientPreface) {\n\t\treturn c.Conn.Read(p)\n\t}\n\tn, err := c.Conn.Read(p)\n\tfor i := range n {\n\t\t// first mismatch\n\t\tif p[i] != http2.ClientPreface[c.idx] {\n\t\t\t// close the connection if h2 is expected\n\t\t\tif c.h2Expected {\n\t\t\t\tc.logger.Debug(\"h1 connection detected, but h1 is not enabled\")\n\t\t\t\t_ = c.Conn.Close()\n\t\t\t\treturn 0, io.EOF\n\t\t\t}\n\t\t\t// no need to continue matching anymore\n\t\t\tc.idx = len(http2.ClientPreface)\n\t\t\treturn n, err\n\t\t}\n\t\tc.idx++\n\t\t// matching complete\n\t\tif c.idx == len(http2.ClientPreface) && !c.h2Expected {\n\t\t\tc.logger.Debug(\"h2/h2c connection detected, but h2/h2c is not enabled\")\n\t\t\t_ = c.Conn.Close()\n\t\t\treturn 0, io.EOF\n\t\t}\n\t}\n\treturn n, err\n}\n"
  },
  {
    "path": "modules/caddyhttp/httpredirectlistener.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(HTTPRedirectListenerWrapper{})\n}\n\n// HTTPRedirectListenerWrapper provides HTTP->HTTPS redirects for\n// connections that come on the TLS port as an HTTP request,\n// by detecting using the first few bytes that it's not a TLS\n// handshake, but instead an HTTP request.\n//\n// This is especially useful when using a non-standard HTTPS port.\n// A user may simply type the address in their browser without the\n// https:// scheme, which would cause the browser to attempt the\n// connection over HTTP, but this would cause a \"Client sent an\n// HTTP request to an HTTPS server\" error response.\n//\n// This listener wrapper must be placed BEFORE the \"tls\" listener\n// wrapper, for it to work properly.\ntype HTTPRedirectListenerWrapper struct {\n\t// MaxHeaderBytes is the maximum size to parse from a client's\n\t// HTTP request headers. 
Default: 1 MB\n\tMaxHeaderBytes int64 `json:\"max_header_bytes,omitempty\"`\n}\n\nfunc (HTTPRedirectListenerWrapper) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.listeners.http_redirect\",\n\t\tNew: func() caddy.Module { return new(HTTPRedirectListenerWrapper) },\n\t}\n}\n\nfunc (h *HTTPRedirectListenerWrapper) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\treturn nil\n}\n\nfunc (h *HTTPRedirectListenerWrapper) WrapListener(l net.Listener) net.Listener {\n\treturn &httpRedirectListener{l, h.MaxHeaderBytes}\n}\n\n// httpRedirectListener is a listener that checks the first few bytes\n// of the request when the server is intended to accept HTTPS requests,\n// to respond to an HTTP request with a redirect.\ntype httpRedirectListener struct {\n\tnet.Listener\n\tmaxHeaderBytes int64\n}\n\n// Accept waits for and returns the next connection to the listener,\n// wrapping it with a httpRedirectConn.\nfunc (l *httpRedirectListener) Accept() (net.Conn, error) {\n\tc, err := l.Listener.Accept()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tmaxHeaderBytes := l.maxHeaderBytes\n\tif maxHeaderBytes == 0 {\n\t\tmaxHeaderBytes = 1024 * 1024\n\t}\n\n\treturn &httpRedirectConn{\n\t\tConn:  c,\n\t\tlimit: maxHeaderBytes,\n\t\tr:     bufio.NewReader(c),\n\t}, nil\n}\n\ntype httpRedirectConn struct {\n\tnet.Conn\n\tonce  bool\n\tlimit int64\n\tr     *bufio.Reader\n}\n\n// Read tries to peek at the first few bytes of the request, and if we get\n// an error reading the headers, and that error was due to the bytes looking\n// like an HTTP request, then we perform an HTTP->HTTPS redirect on the same\n// port as the original connection.\nfunc (c *httpRedirectConn) Read(p []byte) (int, error) {\n\tif c.once {\n\t\treturn c.r.Read(p)\n\t}\n\t// no need to use sync.Once - net.Conn is not read from concurrently.\n\tc.once = true\n\n\tfirstBytes, err := c.r.Peek(5)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\t// If the request doesn't look like HTTP, 
then it's probably\n\t// TLS bytes, and we don't need to do anything.\n\tif !firstBytesLookLikeHTTP(firstBytes) {\n\t\treturn c.r.Read(p)\n\t}\n\n\t// From now on, we can be almost certain the request is HTTP.\n\t// The returned error will be non-nil and callers are expected to\n\t// close the connection.\n\n\t// Set the read limit; io.MultiReader is needed because\n\t// when resetting, *bufio.Reader discards buffered data.\n\tbuffered, _ := c.r.Peek(c.r.Buffered())\n\tmr := io.MultiReader(bytes.NewReader(buffered), c.Conn)\n\tc.r.Reset(io.LimitReader(mr, c.limit))\n\n\t// Parse the HTTP request, so we can get the Host and URL to redirect to.\n\treq, err := http.ReadRequest(c.r)\n\tif err != nil {\n\t\treturn 0, fmt.Errorf(\"couldn't read HTTP request\")\n\t}\n\n\t// Build the redirect response, using the same Host and URL,\n\t// but replacing the scheme with https.\n\theaders := make(http.Header)\n\theaders.Add(\"Location\", \"https://\"+req.Host+req.URL.String())\n\tresp := &http.Response{\n\t\tProto:      \"HTTP/1.0\",\n\t\tStatus:     \"308 Permanent Redirect\",\n\t\tStatusCode: 308,\n\t\tProtoMajor: 1,\n\t\tProtoMinor: 0,\n\t\tHeader:     headers,\n\t}\n\n\terr = resp.Write(c.Conn)\n\tif err != nil {\n\t\treturn 0, fmt.Errorf(\"couldn't write HTTP->HTTPS redirect\")\n\t}\n\n\treturn 0, fmt.Errorf(\"redirected HTTP request on HTTPS port\")\n}\n\n// firstBytesLookLikeHTTP reports whether a TLS record header\n// looks like it might've been a misdirected plaintext HTTP request.\nfunc firstBytesLookLikeHTTP(hdr []byte) bool {\n\tswitch string(hdr[:5]) {\n\tcase \"GET /\", \"HEAD \", \"POST \", \"PUT /\", \"OPTIO\":\n\t\treturn true\n\t}\n\treturn false\n}\n\nvar (\n\t_ caddy.ListenerWrapper = (*HTTPRedirectListenerWrapper)(nil)\n\t_ caddyfile.Unmarshaler = (*HTTPRedirectListenerWrapper)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/intercept/intercept.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage intercept\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Intercept{})\n\thttpcaddyfile.RegisterHandlerDirective(\"intercept\", parseCaddyfile)\n}\n\n// Intercept is a middleware that intercepts then replaces or modifies the original response.\n// It can, for instance, be used to implement X-Sendfile/X-Accel-Redirect-like features\n// when using modules like FrankenPHP or Caddy Snake.\n//\n// EXPERIMENTAL: Subject to change or removal.\ntype Intercept struct {\n\t// List of handlers and their associated matchers to evaluate\n\t// after successful response generation.\n\t// The first handler that matches the original response will\n\t// be invoked. 
The original response body will not be\n\t// written to the client;\n\t// it is up to the handler to finish handling the response.\n\t//\n\t// Two new placeholders are available in this handler chain:\n\t// - `{http.intercept.status_code}` The status code from the response\n\t// - `{http.intercept.header.*}` The headers from the response\n\tHandleResponse []caddyhttp.ResponseHandler `json:\"handle_response,omitempty\"`\n\n\t// Holds the named response matchers from the Caddyfile while adapting\n\tresponseMatchers map[string]caddyhttp.ResponseMatcher\n\n\t// Holds the handle_response Caddyfile tokens while adapting\n\thandleResponseSegments []*caddyfile.Dispenser\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (Intercept) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.intercept\",\n\t\tNew: func() caddy.Module { return new(Intercept) },\n\t}\n}\n\n// Provision ensures that i is set up properly before use.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (irh *Intercept) Provision(ctx caddy.Context) error {\n\t// set up any response routes\n\tfor i, rh := range irh.HandleResponse {\n\t\terr := rh.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning response handler %d: %w\", i, err)\n\t\t}\n\t}\n\n\tirh.logger = ctx.Logger()\n\n\treturn nil\n}\n\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n\n// TODO: handle status code replacement\n//\n// EXPERIMENTAL: Subject to change or removal.\ntype interceptedResponseHandler struct {\n\tcaddyhttp.ResponseRecorder\n\treplacer     *caddy.Replacer\n\thandler      caddyhttp.ResponseHandler\n\thandlerIndex int\n\tstatusCode   int\n}\n\n// EXPERIMENTAL: Subject to change or removal.\nfunc (irh interceptedResponseHandler) WriteHeader(statusCode int) {\n\tif irh.statusCode != 0 && (statusCode < 100 || statusCode >= 200) 
{\n\t\tirh.ResponseRecorder.WriteHeader(irh.statusCode)\n\n\t\treturn\n\t}\n\n\tirh.ResponseRecorder.WriteHeader(statusCode)\n}\n\n// EXPERIMENTAL: Subject to change or removal.\nfunc (irh interceptedResponseHandler) Unwrap() http.ResponseWriter {\n\treturn irh.ResponseRecorder\n}\n\n// EXPERIMENTAL: Subject to change or removal.\nfunc (ir Intercept) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\trec := interceptedResponseHandler{replacer: repl}\n\trec.ResponseRecorder = caddyhttp.NewResponseRecorder(w, buf, func(status int, header http.Header) bool {\n\t\t// see if any response handler is configured for this original response\n\t\tfor i, rh := range ir.HandleResponse {\n\t\t\tif rh.Match != nil && !rh.Match.Match(status, header) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\trec.handler = rh\n\t\t\trec.handlerIndex = i\n\n\t\t\t// if configured to only change the status code,\n\t\t\t// do that then stream\n\t\t\tif statusCodeStr := rh.StatusCode.String(); statusCodeStr != \"\" {\n\t\t\t\tsc, err := strconv.Atoi(repl.ReplaceAll(statusCodeStr, \"\"))\n\t\t\t\tif err != nil {\n\t\t\t\t\trec.statusCode = http.StatusInternalServerError\n\t\t\t\t} else {\n\t\t\t\t\trec.statusCode = sc\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn rec.statusCode == 0\n\t\t}\n\n\t\treturn false\n\t})\n\n\tif err := next.ServeHTTP(rec, r); err != nil {\n\t\treturn err\n\t}\n\tif !rec.Buffered() {\n\t\treturn nil\n\t}\n\n\t// set up the replacer so that parts of the original response can be\n\t// used for routing decisions\n\tfor field, value := range rec.Header() {\n\t\trepl.Set(\"http.intercept.header.\"+field, strings.Join(value, \",\"))\n\t}\n\trepl.Set(\"http.intercept.status_code\", rec.Status())\n\n\tif c := ir.logger.Check(zapcore.DebugLevel, \"handling response\"); c != nil {\n\t\tc.Write(zap.Int(\"handler\", 
rec.handlerIndex))\n\t}\n\n\t// response recorder doesn't create a new copy of the original headers, they're\n\t// present in the original response writer\n\t// create a new recorder to see if any response body from the new handler is present,\n\t// if not, use the already buffered response body\n\trecorder := caddyhttp.NewResponseRecorder(w, nil, nil)\n\tif err := rec.handler.Routes.Compile(emptyHandler).ServeHTTP(recorder, r); err != nil {\n\t\treturn err\n\t}\n\n\t// no new response status and the status is not 0\n\tif recorder.Status() == 0 && rec.Status() != 0 {\n\t\tw.WriteHeader(rec.Status())\n\t}\n\n\t// no new response body and there is some in the original response\n\t// TODO: what if the new response doesn't have a body by design?\n\t// see: https://github.com/caddyserver/caddy/pull/6232#issue-2235224400\n\tif recorder.Size() == 0 && buf.Len() > 0 {\n\t\t_, err := io.Copy(w, buf)\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// this handler does nothing because everything we need is already buffered\nvar emptyHandler caddyhttp.Handler = caddyhttp.HandlerFunc(func(_ http.ResponseWriter, req *http.Request) error {\n\treturn nil\n})\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\tintercept [<matcher>] {\n//\t    # intercept original responses\n//\t    @name {\n//\t        status <code...>\n//\t        header <field> [<value>]\n//\t    }\n//\t    replace_status [<matcher>] <status_code>\n//\t    handle_response [<matcher>] {\n//\t        <directives...>\n//\t    }\n//\t}\n//\n// The FinalizeUnmarshalCaddyfile method should be called after this\n// to finalize parsing of \"handle_response\" blocks, if possible.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (i *Intercept) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// collect the response matchers defined as subdirectives\n\t// prefixed with \"@\" for use with \"handle_response\" blocks\n\ti.responseMatchers = make(map[string]caddyhttp.ResponseMatcher)\n\n\td.Next() // consume the directive name\n\tfor d.NextBlock(0) {\n\t\t// if the subdirective has an \"@\" prefix then we\n\t\t// parse it as a response matcher for use with \"handle_response\"\n\t\tif strings.HasPrefix(d.Val(), matcherPrefix) {\n\t\t\terr := caddyhttp.ParseNamedResponseMatcher(d.NewFromNextSegment(), i.responseMatchers)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tswitch d.Val() {\n\t\tcase \"handle_response\":\n\t\t\t// delegate the parsing of handle_response to the caller,\n\t\t\t// since we need the httpcaddyfile.Helper to parse subroutes.\n\t\t\t// See h.FinalizeUnmarshalCaddyfile\n\t\t\ti.handleResponseSegments = append(i.handleResponseSegments, d.NewFromNextSegment())\n\n\t\tcase \"replace_status\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) != 1 && len(args) != 2 {\n\t\t\t\treturn d.Errf(\"must have one or two arguments: an optional response matcher, and a status code\")\n\t\t\t}\n\n\t\t\tresponseHandler := caddyhttp.ResponseHandler{}\n\n\t\t\tif len(args) == 2 {\n\t\t\t\tif !strings.HasPrefix(args[0], matcherPrefix) {\n\t\t\t\t\treturn d.Errf(\"must use a named response matcher, starting with '@'\")\n\t\t\t\t}\n\t\t\t\tfoundMatcher, 
ok := i.responseMatchers[args[0]]\n\t\t\t\tif !ok {\n\t\t\t\t\treturn d.Errf(\"no named response matcher defined with name '%s'\", args[0][1:])\n\t\t\t\t}\n\t\t\t\tresponseHandler.Match = &foundMatcher\n\t\t\t\tresponseHandler.StatusCode = caddyhttp.WeakString(args[1])\n\t\t\t} else if len(args) == 1 {\n\t\t\t\tresponseHandler.StatusCode = caddyhttp.WeakString(args[0])\n\t\t\t}\n\n\t\t\t// make sure there's no block, cause it doesn't make sense\n\t\t\tif nesting := d.Nesting(); d.NextBlock(nesting) {\n\t\t\t\treturn d.Errf(\"cannot define routes for 'replace_status', use 'handle_response' instead.\")\n\t\t\t}\n\n\t\t\ti.HandleResponse = append(\n\t\t\t\ti.HandleResponse,\n\t\t\t\tresponseHandler,\n\t\t\t)\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// FinalizeUnmarshalCaddyfile finalizes the Caddyfile parsing which\n// requires having an httpcaddyfile.Helper to function, to parse subroutes.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (i *Intercept) FinalizeUnmarshalCaddyfile(helper httpcaddyfile.Helper) error {\n\tfor _, d := range i.handleResponseSegments {\n\t\t// consume the \"handle_response\" token\n\t\td.Next()\n\t\targs := d.RemainingArgs()\n\n\t\t// TODO: Remove this check at some point in the future\n\t\tif len(args) == 2 {\n\t\t\treturn d.Errf(\"configuring 'handle_response' for status code replacement is no longer supported. 
Use 'replace_status' instead.\")\n\t\t}\n\n\t\tif len(args) > 1 {\n\t\t\treturn d.Errf(\"too many arguments for 'handle_response': %s\", args)\n\t\t}\n\n\t\tvar matcher *caddyhttp.ResponseMatcher\n\t\tif len(args) == 1 {\n\t\t\t// the first arg should always be a matcher.\n\t\t\tif !strings.HasPrefix(args[0], matcherPrefix) {\n\t\t\t\treturn d.Errf(\"must use a named response matcher, starting with '@'\")\n\t\t\t}\n\n\t\t\tfoundMatcher, ok := i.responseMatchers[args[0]]\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"no named response matcher defined with name '%s'\", args[0][1:])\n\t\t\t}\n\t\t\tmatcher = &foundMatcher\n\t\t}\n\n\t\t// parse the block as routes\n\t\thandler, err := httpcaddyfile.ParseSegmentAsSubroute(helper.WithDispenser(d.NewFromNextSegment()))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsubroute, ok := handler.(*caddyhttp.Subroute)\n\t\tif !ok {\n\t\t\treturn helper.Errf(\"segment was not parsed as a subroute\")\n\t\t}\n\t\ti.HandleResponse = append(\n\t\t\ti.HandleResponse,\n\t\t\tcaddyhttp.ResponseHandler{\n\t\t\t\tMatch:  matcher,\n\t\t\t\tRoutes: subroute.Routes,\n\t\t\t},\n\t\t)\n\t}\n\n\t// move the handle_response entries without a matcher to the end.\n\t// we can't use sort.SliceStable because it will reorder the rest of the\n\t// entries which may be undesirable because we don't have a good\n\t// heuristic to use for sorting.\n\twithoutMatchers := []caddyhttp.ResponseHandler{}\n\twithMatchers := []caddyhttp.ResponseHandler{}\n\tfor _, hr := range i.HandleResponse {\n\t\tif hr.Match == nil {\n\t\t\twithoutMatchers = append(withoutMatchers, hr)\n\t\t} else {\n\t\t\twithMatchers = append(withMatchers, hr)\n\t\t}\n\t}\n\ti.HandleResponse = append(withMatchers, withoutMatchers...)\n\n\t// clean up the bits we only needed for adapting\n\ti.handleResponseSegments = nil\n\ti.responseMatchers = nil\n\n\treturn nil\n}\n\nconst matcherPrefix = \"@\"\n\nfunc parseCaddyfile(helper httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tvar 
ir Intercept\n\tif err := ir.UnmarshalCaddyfile(helper.Dispenser); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif err := ir.FinalizeUnmarshalCaddyfile(helper); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn ir, nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Intercept)(nil)\n\t_ caddyfile.Unmarshaler       = (*Intercept)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Intercept)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/invoke.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Invoke{})\n}\n\n// Invoke implements a handler that compiles and executes a\n// named route that was defined on the server.\n//\n// EXPERIMENTAL: Subject to change or removal.\ntype Invoke struct {\n\t// Name is the key of the named route to execute\n\tName string `json:\"name,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Invoke) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.invoke\",\n\t\tNew: func() caddy.Module { return new(Invoke) },\n\t}\n}\n\nfunc (invoke *Invoke) ServeHTTP(w http.ResponseWriter, r *http.Request, next Handler) error {\n\tserver := r.Context().Value(ServerCtxKey).(*Server)\n\tif route, ok := server.NamedRoutes[invoke.Name]; ok {\n\t\treturn route.Compile(next).ServeHTTP(w, r)\n\t}\n\treturn fmt.Errorf(\"invoke: route '%s' not found\", invoke.Name)\n}\n\n// Interface guards\nvar (\n\t_ MiddlewareHandler = (*Invoke)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/ip_matchers.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/netip\"\n\t\"strings\"\n\n\t\"github.com/google/cel-go/cel\"\n\t\"github.com/google/cel-go/common/types/ref\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\n// MatchRemoteIP matches requests by the remote IP address,\n// i.e. the IP address of the direct connection to Caddy.\ntype MatchRemoteIP struct {\n\t// The IPs or CIDR ranges to match.\n\tRanges []string `json:\"ranges,omitempty\"`\n\n\t// cidrs and zones vars should aligned always in the same\n\t// length and indexes for matching later\n\tcidrs  []*netip.Prefix\n\tzones  []string\n\tlogger *zap.Logger\n}\n\n// MatchClientIP matches requests by the client IP address,\n// i.e. 
the resolved address, considering trusted proxies.\ntype MatchClientIP struct {\n\t// The IPs or CIDR ranges to match.\n\tRanges []string `json:\"ranges,omitempty\"`\n\n\t// cidrs and zones must always be aligned (same\n\t// length and indexes) for matching later\n\tcidrs  []*netip.Prefix\n\tzones  []string\n\tlogger *zap.Logger\n}\n\nfunc init() {\n\tcaddy.RegisterModule(MatchRemoteIP{})\n\tcaddy.RegisterModule(MatchClientIP{})\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchRemoteIP) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.remote_ip\",\n\t\tNew: func() caddy.Module { return new(MatchRemoteIP) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchRemoteIP) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tfor d.NextArg() {\n\t\t\tif d.Val() == \"forwarded\" {\n\t\t\t\treturn d.Err(\"the 'forwarded' option is no longer supported; use the 'client_ip' matcher instead\")\n\t\t\t}\n\t\t\tif d.Val() == \"private_ranges\" {\n\t\t\t\tm.Ranges = append(m.Ranges, internal.PrivateRangesCIDR()...)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tm.Ranges = append(m.Ranges, d.Val())\n\t\t}\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed remote_ip matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression remote_ip('192.168.0.0/16', '172.16.0.0/12', '10.0.0.0/8')\nfunc (MatchRemoteIP) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t// name of the macro, this is the function name that users see when writing expressions.\n\t\t\"remote_ip\",\n\t\t// name of the function that the macro will be rewritten to call.\n\t\t\"remote_ip_match_request_list\",\n\t\t// internal data type of the MatchPath 
value.\n\t\t[]*cel.Type{cel.ListType(cel.StringType)},\n\t\t// function to convert a constant list of strings to a MatchPath instance.\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tstrList, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\tm := MatchRemoteIP{}\n\n\t\t\tfor _, input := range strList.([]string) {\n\t\t\t\tif input == \"forwarded\" {\n\t\t\t\t\treturn nil, errors.New(\"the 'forwarded' option is no longer supported; use the 'client_ip' matcher instead\")\n\t\t\t\t}\n\t\t\t\tm.Ranges = append(m.Ranges, input)\n\t\t\t}\n\n\t\t\terr = m.Provision(ctx)\n\t\t\treturn m, err\n\t\t},\n\t)\n}\n\n// Provision parses m's IP ranges, either from IP or CIDR expressions.\nfunc (m *MatchRemoteIP) Provision(ctx caddy.Context) error {\n\tm.logger = ctx.Logger()\n\tcidrs, zones, err := provisionCidrsZonesFromRanges(m.Ranges)\n\tif err != nil {\n\t\treturn err\n\t}\n\tm.cidrs = cidrs\n\tm.zones = zones\n\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchRemoteIP) Match(r *http.Request) bool {\n\tmatch, err := m.MatchWithError(r)\n\tif err != nil {\n\t\tSetVar(r.Context(), MatcherErrorVarKey, err)\n\t}\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchRemoteIP) MatchWithError(r *http.Request) (bool, error) {\n\t// if handshake is not finished, we infer 0-RTT that has\n\t// not verified remote IP; could be spoofed, so we throw\n\t// HTTP 425 status to tell the client to try again after\n\t// the handshake is complete\n\tif r.TLS != nil && !r.TLS.HandshakeComplete {\n\t\treturn false, Error(http.StatusTooEarly, fmt.Errorf(\"TLS handshake not complete, remote IP cannot be verified\"))\n\t}\n\n\taddress := r.RemoteAddr\n\tclientIP, zoneID, err := parseIPZoneFromString(address)\n\tif err != nil {\n\t\tif c := m.logger.Check(zapcore.ErrorLevel, \"getting remote IP\"); c != nil 
{\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\n\t\treturn false, nil\n\t}\n\tmatches, zoneFilter := matchIPByCidrZones(clientIP, zoneID, m.cidrs, m.zones)\n\tif !matches && !zoneFilter {\n\t\tif c := m.logger.Check(zapcore.DebugLevel, \"zone ID from remote IP did not match\"); c != nil {\n\t\t\tc.Write(zap.String(\"zone\", zoneID))\n\t\t}\n\t}\n\treturn matches, nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchClientIP) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.client_ip\",\n\t\tNew: func() caddy.Module { return new(MatchClientIP) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchClientIP) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tfor d.NextArg() {\n\t\t\tif d.Val() == \"private_ranges\" {\n\t\t\t\tm.Ranges = append(m.Ranges, internal.PrivateRangesCIDR()...)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tm.Ranges = append(m.Ranges, d.Val())\n\t\t}\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed client_ip matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression client_ip('192.168.0.0/16', '172.16.0.0/12', '10.0.0.0/8')\nfunc (MatchClientIP) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t// name of the macro, this is the function name that users see when writing expressions.\n\t\t\"client_ip\",\n\t\t// name of the function that the macro will be rewritten to call.\n\t\t\"client_ip_match_request_list\",\n\t\t// internal data type of the MatchPath value.\n\t\t[]*cel.Type{cel.ListType(cel.StringType)},\n\t\t// function to convert a constant list of strings to a MatchPath instance.\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tstrList, err := 
data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\tm := MatchClientIP{\n\t\t\t\tRanges: strList.([]string),\n\t\t\t}\n\n\t\t\terr = m.Provision(ctx)\n\t\t\treturn m, err\n\t\t},\n\t)\n}\n\n// Provision parses m's IP ranges, either from IP or CIDR expressions.\nfunc (m *MatchClientIP) Provision(ctx caddy.Context) error {\n\tm.logger = ctx.Logger()\n\tcidrs, zones, err := provisionCidrsZonesFromRanges(m.Ranges)\n\tif err != nil {\n\t\treturn err\n\t}\n\tm.cidrs = cidrs\n\tm.zones = zones\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchClientIP) Match(r *http.Request) bool {\n\tmatch, err := m.MatchWithError(r)\n\tif err != nil {\n\t\tSetVar(r.Context(), MatcherErrorVarKey, err)\n\t}\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchClientIP) MatchWithError(r *http.Request) (bool, error) {\n\t// if handshake is not finished, we infer 0-RTT that has\n\t// not verified remote IP; could be spoofed, so we throw\n\t// HTTP 425 status to tell the client to try again after\n\t// the handshake is complete\n\tif r.TLS != nil && !r.TLS.HandshakeComplete {\n\t\treturn false, Error(http.StatusTooEarly, fmt.Errorf(\"TLS handshake not complete, remote IP cannot be verified\"))\n\t}\n\n\taddress := GetVar(r.Context(), ClientIPVarKey).(string)\n\tclientIP, zoneID, err := parseIPZoneFromString(address)\n\tif err != nil {\n\t\tm.logger.Error(\"getting client IP\", zap.Error(err))\n\t\treturn false, nil\n\t}\n\tmatches, zoneFilter := matchIPByCidrZones(clientIP, zoneID, m.cidrs, m.zones)\n\tif !matches && !zoneFilter {\n\t\tm.logger.Debug(\"zone ID from client IP did not match\", zap.String(\"zone\", zoneID))\n\t}\n\treturn matches, nil\n}\n\nfunc provisionCidrsZonesFromRanges(ranges []string) ([]*netip.Prefix, []string, error) {\n\tcidrs := []*netip.Prefix{}\n\tzones := []string{}\n\trepl := caddy.NewReplacer()\n\tfor _, str := range ranges {\n\t\tstr = repl.ReplaceAll(str, 
\"\")\n\t\t// Exclude the zone_id from the IP\n\t\tif strings.Contains(str, \"%\") {\n\t\t\tsplit := strings.Split(str, \"%\")\n\t\t\tstr = split[0]\n\t\t\t// write zone identifiers in m.zones for matching later\n\t\t\tzones = append(zones, split[1])\n\t\t} else {\n\t\t\tzones = append(zones, \"\")\n\t\t}\n\t\tif strings.Contains(str, \"/\") {\n\t\t\tipNet, err := netip.ParsePrefix(str)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, fmt.Errorf(\"parsing CIDR expression '%s': %v\", str, err)\n\t\t\t}\n\t\t\tcidrs = append(cidrs, &ipNet)\n\t\t} else {\n\t\t\tipAddr, err := netip.ParseAddr(str)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, fmt.Errorf(\"invalid IP address: '%s': %v\", str, err)\n\t\t\t}\n\t\t\tipNew := netip.PrefixFrom(ipAddr, ipAddr.BitLen())\n\t\t\tcidrs = append(cidrs, &ipNew)\n\t\t}\n\t}\n\treturn cidrs, zones, nil\n}\n\nfunc parseIPZoneFromString(address string) (netip.Addr, string, error) {\n\tipStr, _, err := net.SplitHostPort(address)\n\tif err != nil {\n\t\tipStr = address // OK; probably didn't have a port\n\t}\n\n\t// Some IPv6-Addresses can contain zone identifiers at the end,\n\t// which are separated with \"%\"\n\tzoneID := \"\"\n\tif strings.Contains(ipStr, \"%\") {\n\t\tsplit := strings.Split(ipStr, \"%\")\n\t\tipStr = split[0]\n\t\tzoneID = split[1]\n\t}\n\n\tipAddr, err := netip.ParseAddr(ipStr)\n\tif err != nil {\n\t\treturn netip.IPv4Unspecified(), \"\", err\n\t}\n\n\treturn ipAddr, zoneID, nil\n}\n\nfunc matchIPByCidrZones(clientIP netip.Addr, zoneID string, cidrs []*netip.Prefix, zones []string) (bool, bool) {\n\tzoneFilter := true\n\tfor i, ipRange := range cidrs {\n\t\tif ipRange.Contains(clientIP) {\n\t\t\t// Check if there are zone filters assigned and if they match.\n\t\t\tif zones[i] == \"\" || zoneID == zones[i] {\n\t\t\t\treturn true, false\n\t\t\t}\n\t\t\tzoneFilter = false\n\t\t}\n\t}\n\treturn false, zoneFilter\n}\n\n// Interface guards\nvar (\n\t_ RequestMatcherWithError = (*MatchRemoteIP)(nil)\n\t_ 
caddy.Provisioner       = (*MatchRemoteIP)(nil)\n\t_ caddyfile.Unmarshaler   = (*MatchRemoteIP)(nil)\n\t_ CELLibraryProducer      = (*MatchRemoteIP)(nil)\n\n\t_ RequestMatcherWithError = (*MatchClientIP)(nil)\n\t_ caddy.Provisioner       = (*MatchClientIP)(nil)\n\t_ caddyfile.Unmarshaler   = (*MatchClientIP)(nil)\n\t_ CELLibraryProducer      = (*MatchClientIP)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/ip_range.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/netip\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(StaticIPRange{})\n}\n\n// IPRangeSource gets a list of IP ranges.\n//\n// The request is passed as an argument to allow plugin implementations\n// to have more flexibility. But, a plugin MUST NOT modify the request.\n// The caller will have read the `r.RemoteAddr` before getting IP ranges.\n//\n// This should be a very fast function -- instant if possible.\n// The list of IP ranges should be sourced as soon as possible if loaded\n// from an external source (i.e. initially loaded during Provisioning),\n// so that it's ready to be used when requests start getting handled.\n// A read lock should probably be used to get the cached value if the\n// ranges can change at runtime (e.g. periodically refreshed).\n// Using a `caddy.UsagePool` may be a good idea to avoid having refetch\n// the values when a config reload occurs, which would waste time.\n//\n// If the list of IP ranges cannot be sourced, then provisioning SHOULD\n// fail. Getting the IP ranges at runtime MUST NOT fail, because it would\n// cancel incoming requests. 
If refreshing the list fails, then the\n// previous list of IP ranges should continue to be returned so that the\n// server can continue to operate normally.\ntype IPRangeSource interface {\n\tGetIPRanges(*http.Request) []netip.Prefix\n}\n\n// StaticIPRange provides a static range of IP address prefixes (CIDRs).\ntype StaticIPRange struct {\n\t// A static list of IP ranges (supports CIDR notation).\n\tRanges []string `json:\"ranges,omitempty\"`\n\n\t// Holds the parsed CIDR ranges from Ranges.\n\tranges []netip.Prefix\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (StaticIPRange) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.ip_sources.static\",\n\t\tNew: func() caddy.Module { return new(StaticIPRange) },\n\t}\n}\n\nfunc (s *StaticIPRange) Provision(ctx caddy.Context) error {\n\tfor _, str := range s.Ranges {\n\t\tprefix, err := CIDRExpressionToPrefix(str)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ts.ranges = append(s.ranges, prefix)\n\t}\n\n\treturn nil\n}\n\nfunc (s *StaticIPRange) GetIPRanges(_ *http.Request) []netip.Prefix {\n\treturn s.ranges\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *StaticIPRange) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif !d.Next() {\n\t\treturn nil\n\t}\n\tfor d.NextArg() {\n\t\tif d.Val() == \"private_ranges\" {\n\t\t\tm.Ranges = append(m.Ranges, internal.PrivateRangesCIDR()...)\n\t\t\tcontinue\n\t\t}\n\t\tm.Ranges = append(m.Ranges, d.Val())\n\t}\n\treturn nil\n}\n\n// CIDRExpressionToPrefix takes a string which could be either a\n// CIDR expression or a single IP address, and returns a netip.Prefix.\nfunc CIDRExpressionToPrefix(expr string) (netip.Prefix, error) {\n\t// Having a slash means it should be a CIDR expression\n\tif strings.Contains(expr, \"/\") {\n\t\tprefix, err := netip.ParsePrefix(expr)\n\t\tif err != nil {\n\t\t\treturn netip.Prefix{}, fmt.Errorf(\"parsing CIDR expression: '%s': %v\", expr, err)\n\t\t}\n\t\treturn prefix, 
nil\n\t}\n\n\t// Otherwise it's likely a single IP address\n\tparsed, err := netip.ParseAddr(expr)\n\tif err != nil {\n\t\treturn netip.Prefix{}, fmt.Errorf(\"invalid IP address: '%s': %v\", expr, err)\n\t}\n\tprefix := netip.PrefixFrom(parsed, parsed.BitLen())\n\treturn prefix, nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner     = (*StaticIPRange)(nil)\n\t_ caddyfile.Unmarshaler = (*StaticIPRange)(nil)\n\t_ IPRangeSource         = (*StaticIPRange)(nil)\n)\n\n// PrivateRangesCIDR returns a list of private CIDR range\n// strings, which can be used as a configuration shortcut.\n// Note: this function is used at least by mholt/caddy-l4.\nfunc PrivateRangesCIDR() []string {\n\treturn internal.PrivateRangesCIDR()\n}\n"
  },
  {
    "path": "modules/caddyhttp/logging/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"log_append\", parseCaddyfile)\n}\n\n// parseCaddyfile sets up the log_append handler from Caddyfile tokens. Syntax:\n//\n//\tlog_append [<matcher>] [<]<key> <value>\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\thandler := new(LogAppend)\n\terr := handler.UnmarshalCaddyfile(h.Dispenser)\n\treturn handler, err\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (h *LogAppend) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\th.Key = d.Val()\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tif strings.HasPrefix(h.Key, \"<\") && len(h.Key) > 1 {\n\t\th.Early = true\n\t\th.Key = h.Key[1:]\n\t}\n\th.Value = d.Val()\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ caddyfile.Unmarshaler = (*LogAppend)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/logging/logappend.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"bytes\"\n\t\"encoding/base64\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(LogAppend{})\n}\n\n// LogAppend implements a middleware that takes a key and value, where\n// the key is the name of a log field and the value is a placeholder,\n// or variable key, or constant value to use for that field.\ntype LogAppend struct {\n\t// Key is the name of the log field.\n\tKey string `json:\"key,omitempty\"`\n\n\t// Value is the value to use for the log field.\n\t// If it is a placeholder (with surrounding `{}`),\n\t// it will be evaluated when the log is written.\n\t// If the value is a key that exists in the `vars`\n\t// map, the value of that key will be used. Otherwise\n\t// the value will be used as-is as a constant string.\n\tValue string `json:\"value,omitempty\"`\n\n\t// Early, if true, adds the log field before calling\n\t// the next handler in the chain. 
By default, the log\n\t// field is added on the way back up the middleware chain,\n\t// after all subsequent handlers have completed.\n\tEarly bool `json:\"early,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (LogAppend) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.log_append\",\n\t\tNew: func() caddy.Module { return new(LogAppend) },\n\t}\n}\n\nfunc (h LogAppend) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\t// Determine if we need to add the log field early.\n\t// We do if the Early flag is set, or for convenience,\n\t// if the value is a special placeholder for the request body.\n\tneedsEarly := h.Early || h.Value == placeholderRequestBody || h.Value == placeholderRequestBodyBase64\n\n\t// Check if we need to buffer the response for special placeholders\n\tneedsResponseBody := h.Value == placeholderResponseBody || h.Value == placeholderResponseBodyBase64\n\n\tif needsEarly && !needsResponseBody {\n\t\t// Add the log field before calling the next handler\n\t\t// (but not if we need the response body, which isn't available yet)\n\t\th.addLogField(r, nil)\n\t}\n\n\tvar rec caddyhttp.ResponseRecorder\n\tvar buf *bytes.Buffer\n\n\tif needsResponseBody {\n\t\t// Wrap the response writer with a recorder to capture the response body\n\t\tbuf = new(bytes.Buffer)\n\t\trec = caddyhttp.NewResponseRecorder(w, buf, func(status int, header http.Header) bool {\n\t\t\t// Always buffer the response when we need to log the body\n\t\t\treturn true\n\t\t})\n\t\tw = rec\n\t}\n\n\t// Run the next handler in the chain.\n\t// If an error occurs, we still want to add\n\t// any extra log fields that we can, so we\n\t// hold onto the error and return it later.\n\thandlerErr := next.ServeHTTP(w, r)\n\n\tif needsResponseBody {\n\t\t// Write the buffered response to the client\n\t\tif rec.Buffered() {\n\t\t\th.addLogField(r, buf)\n\t\t\terr := rec.WriteResponse()\n\t\t\tif err != 
nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\treturn handlerErr\n\t}\n\n\tif !h.Early {\n\t\t// Add the log field after the handler completes\n\t\th.addLogField(r, buf)\n\t}\n\n\treturn handlerErr\n}\n\n// addLogField adds the log field to the request's extra log fields.\n// If buf is not nil, it contains the buffered response body for special\n// response body placeholders.\nfunc (h LogAppend) addLogField(r *http.Request, buf *bytes.Buffer) {\n\tctx := r.Context()\n\n\tvars := ctx.Value(caddyhttp.VarsCtxKey).(map[string]any)\n\trepl := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\textra := ctx.Value(caddyhttp.ExtraLogFieldsCtxKey).(*caddyhttp.ExtraLogFields)\n\n\tvar varValue any\n\n\t// Handle special case placeholders for response body\n\tif h.Value == placeholderResponseBody {\n\t\tif buf != nil {\n\t\t\tvarValue = buf.String()\n\t\t} else {\n\t\t\tvarValue = \"\"\n\t\t}\n\t} else if h.Value == placeholderResponseBodyBase64 {\n\t\tif buf != nil {\n\t\t\tvarValue = base64.StdEncoding.EncodeToString(buf.Bytes())\n\t\t} else {\n\t\t\tvarValue = \"\"\n\t\t}\n\t} else if strings.HasPrefix(h.Value, \"{\") &&\n\t\tstrings.HasSuffix(h.Value, \"}\") &&\n\t\tstrings.Count(h.Value, \"{\") == 1 {\n\t\t// the value looks like a placeholder, so get its value\n\t\tvarValue, _ = repl.Get(strings.Trim(h.Value, \"{}\"))\n\t} else if val, ok := vars[h.Value]; ok {\n\t\t// the value is a key in the vars map\n\t\tvarValue = val\n\t} else {\n\t\t// the value is a constant string\n\t\tvarValue = h.Value\n\t}\n\n\t// Add the field to the extra log fields.\n\t// We use zap.Any because it will reflect\n\t// to the correct type for us.\n\textra.Add(zap.Any(h.Key, varValue))\n}\n\nconst (\n\t// Special placeholder values that are handled by log_append\n\t// rather than by the replacer.\n\tplaceholderRequestBody        = \"{http.request.body}\"\n\tplaceholderRequestBodyBase64  = \"{http.request.body_base64}\"\n\tplaceholderResponseBody       = 
\"{http.response.body}\"\n\tplaceholderResponseBodyBase64 = \"{http.response.body_base64}\"\n)\n\n// Interface guards\nvar (\n\t_ caddyhttp.MiddlewareHandler = (*LogAppend)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/logging.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"log/slog\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/exp/zapslog\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterSlogHandlerFactory(func(handler slog.Handler, core zapcore.Core, moduleID string) slog.Handler {\n\t\treturn &extraFieldsSlogHandler{defaultHandler: handler, core: core, moduleID: moduleID}\n\t})\n}\n\n// ServerLogConfig describes a server's logging configuration. If\n// enabled without customization, all requests to this server are\n// logged to the default logger; logger destinations may be\n// customized per-request-host.\ntype ServerLogConfig struct {\n\t// The default logger name for all logs emitted by this server for\n\t// hostnames that are not in the logger_names map.\n\tDefaultLoggerName string `json:\"default_logger_name,omitempty\"`\n\n\t// LoggerNames maps request hostnames to one or more custom logger\n\t// names. For example, a mapping of `\"example.com\": [\"example\"]` would\n\t// cause access logs from requests with a Host of example.com to be\n\t// emitted by a logger named \"http.log.access.example\". 
If there are\n\t// multiple logger names, then the log will be emitted to all of them.\n\t// If the logger name is an empty string, the default logger is used, i.e.\n\t// the logger \"http.log.access\".\n\t//\n\t// Keys must be hostnames (without ports), and may contain wildcards\n\t// to match subdomains. The value is an array of logger names.\n\t//\n\t// For backwards compatibility, if the value is a string, it is treated\n\t// as a single-element array.\n\tLoggerNames map[string]StringArray `json:\"logger_names,omitempty\"`\n\n\t// By default, all requests to this server will be logged if\n\t// access logging is enabled. This field lists the request\n\t// hosts for which access logging should be disabled.\n\tSkipHosts []string `json:\"skip_hosts,omitempty\"`\n\n\t// If true, requests to any host not appearing in the\n\t// logger_names map will not be logged.\n\tSkipUnmappedHosts bool `json:\"skip_unmapped_hosts,omitempty\"`\n\n\t// If true, credentials that are otherwise omitted will be logged.\n\t// The definition of credentials comes from https://fetch.spec.whatwg.org/#credentials,\n\t// and includes some request and response headers, i.e. `Cookie`,\n\t// `Set-Cookie`, `Authorization`, and `Proxy-Authorization`.\n\tShouldLogCredentials bool `json:\"should_log_credentials,omitempty\"`\n\n\t// Log each individual handler that is invoked.\n\t// Requires that the logger emit at DEBUG level.\n\t//\n\t// NOTE: This may log the configuration of your\n\t// HTTP handler modules; do not enable this in\n\t// insecure contexts when there is sensitive\n\t// data in the configuration.\n\t//\n\t// EXPERIMENTAL: Subject to change or removal.\n\tTrace bool `json:\"trace,omitempty\"`\n}\n\n// wrapLogger wraps logger in one or more loggers named\n// according to user preferences for the given host.\nfunc (slc ServerLogConfig) wrapLogger(logger *zap.Logger, req *http.Request) []*zap.Logger {\n\t// using the `log_name` directive or the `access_logger_names` variable,\n\t// the logger 
names can be overridden for the current request\n\tif names := GetVar(req.Context(), AccessLoggerNameVarKey); names != nil {\n\t\tif namesSlice, ok := names.([]any); ok {\n\t\t\tloggers := make([]*zap.Logger, 0, len(namesSlice))\n\t\t\tfor _, loggerName := range namesSlice {\n\t\t\t\t// no name, use the default logger\n\t\t\t\tif loggerName == \"\" {\n\t\t\t\t\tloggers = append(loggers, logger)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\t// make a logger with the given name\n\t\t\t\tloggers = append(loggers, logger.Named(loggerName.(string)))\n\t\t\t}\n\t\t\treturn loggers\n\t\t}\n\t}\n\n\t// get the hostname from the request, with the port number stripped\n\thost, _, err := net.SplitHostPort(req.Host)\n\tif err != nil {\n\t\thost = req.Host\n\t}\n\n\t// get the logger names for this host from the config\n\thosts := slc.getLoggerHosts(host)\n\n\t// make a list of named loggers, or the default logger\n\tloggers := make([]*zap.Logger, 0, len(hosts))\n\tfor _, loggerName := range hosts {\n\t\t// no name, use the default logger\n\t\tif loggerName == \"\" {\n\t\t\tloggers = append(loggers, logger)\n\t\t\tcontinue\n\t\t}\n\t\t// make a logger with the given name\n\t\tloggers = append(loggers, logger.Named(loggerName))\n\t}\n\treturn loggers\n}\n\nfunc (slc ServerLogConfig) getLoggerHosts(host string) []string {\n\t// try the exact hostname first\n\tif hosts, ok := slc.LoggerNames[host]; ok {\n\t\treturn hosts\n\t}\n\n\t// try matching wildcard domains if other non-specific loggers exist\n\tlabels := strings.Split(host, \".\")\n\tfor i := range labels {\n\t\tif labels[i] == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tlabels[i] = \"*\"\n\t\twildcardHost := strings.Join(labels, \".\")\n\t\tif hosts, ok := slc.LoggerNames[wildcardHost]; ok {\n\t\t\treturn hosts\n\t\t}\n\t}\n\n\treturn []string{slc.DefaultLoggerName}\n}\n\nfunc (slc *ServerLogConfig) clone() *ServerLogConfig {\n\tclone := &ServerLogConfig{\n\t\tDefaultLoggerName:    slc.DefaultLoggerName,\n\t\tLoggerNames:          
make(map[string]StringArray),\n\t\tSkipHosts:            append([]string{}, slc.SkipHosts...),\n\t\tSkipUnmappedHosts:    slc.SkipUnmappedHosts,\n\t\tShouldLogCredentials: slc.ShouldLogCredentials,\n\t}\n\tfor k, v := range slc.LoggerNames {\n\t\tclone.LoggerNames[k] = append([]string{}, v...)\n\t}\n\treturn clone\n}\n\n// StringArray is a slice of strings, but also accepts\n// a single string as a value when unmarshaling JSON,\n// converting it to a slice of one string.\ntype StringArray []string\n\n// UnmarshalJSON satisfies json.Unmarshaler.\nfunc (sa *StringArray) UnmarshalJSON(b []byte) error {\n\tvar jsonObj any\n\terr := json.Unmarshal(b, &jsonObj)\n\tif err != nil {\n\t\treturn err\n\t}\n\tswitch obj := jsonObj.(type) {\n\tcase string:\n\t\t*sa = StringArray([]string{obj})\n\t\treturn nil\n\tcase []any:\n\t\ts := make([]string, 0, len(obj))\n\t\tfor _, v := range obj {\n\t\t\tvalue, ok := v.(string)\n\t\t\tif !ok {\n\t\t\t\treturn errors.New(\"unsupported type\")\n\t\t\t}\n\t\t\ts = append(s, value)\n\t\t}\n\t\t*sa = StringArray(s)\n\t\treturn nil\n\t}\n\treturn errors.New(\"unsupported type\")\n}\n\n// errLogValues inspects err and returns the status code\n// to use, the error log message, and any extra fields.\n// If err is a HandlerError, the returned values will\n// have richer information.\nfunc errLogValues(err error) (status int, msg string, fields func() []zapcore.Field) {\n\tvar handlerErr HandlerError\n\tif errors.As(err, &handlerErr) {\n\t\tstatus = handlerErr.StatusCode\n\t\tif handlerErr.Err == nil {\n\t\t\tmsg = err.Error()\n\t\t} else {\n\t\t\tmsg = handlerErr.Err.Error()\n\t\t}\n\t\tfields = func() []zapcore.Field {\n\t\t\treturn []zapcore.Field{\n\t\t\t\tzap.Int(\"status\", handlerErr.StatusCode),\n\t\t\t\tzap.String(\"err_id\", handlerErr.ID),\n\t\t\t\tzap.String(\"err_trace\", handlerErr.Trace),\n\t\t\t}\n\t\t}\n\t\treturn status, msg, fields\n\t}\n\tfields = func() []zapcore.Field {\n\t\treturn 
[]zapcore.Field{\n\t\t\tzap.Error(err),\n\t\t}\n\t}\n\tstatus = http.StatusInternalServerError\n\tmsg = err.Error()\n\treturn status, msg, fields\n}\n\n// ExtraLogFields is a list of extra fields to log with every request.\ntype ExtraLogFields struct {\n\tfields   []zapcore.Field\n\thandlers sync.Map\n}\n\n// Add adds a field to the list of extra fields to log.\nfunc (e *ExtraLogFields) Add(field zap.Field) {\n\te.handlers.Clear()\n\te.fields = append(e.fields, field)\n}\n\n// Set sets a field in the list of extra fields to log.\n// If the field already exists, it is replaced.\nfunc (e *ExtraLogFields) Set(field zap.Field) {\n\te.handlers.Clear()\n\n\tfor i := range e.fields {\n\t\tif e.fields[i].Key == field.Key {\n\t\t\te.fields[i] = field\n\t\t\treturn\n\t\t}\n\t}\n\te.fields = append(e.fields, field)\n}\n\nfunc (e *ExtraLogFields) getSloggerHandler(handler *extraFieldsSlogHandler) (h slog.Handler) {\n\tif existing, ok := e.handlers.Load(handler); ok {\n\t\treturn existing.(slog.Handler)\n\t}\n\n\tif handler.moduleID == \"\" {\n\t\th = zapslog.NewHandler(handler.core.With(e.fields))\n\t} else {\n\t\th = zapslog.NewHandler(handler.core.With(e.fields), zapslog.WithName(handler.moduleID))\n\t}\n\n\tif handler.group != \"\" {\n\t\th = h.WithGroup(handler.group)\n\t}\n\tif handler.attrs != nil {\n\t\th = h.WithAttrs(handler.attrs)\n\t}\n\n\te.handlers.Store(handler, h)\n\n\treturn h\n}\n\nconst (\n\t// Variable name used to indicate that this request\n\t// should be omitted from the access logs\n\tLogSkipVar string = \"log_skip\"\n\n\t// For adding additional fields to the access logs\n\tExtraLogFieldsCtxKey caddy.CtxKey = \"extra_log_fields\"\n\n\t// Variable name used to indicate the logger to be used\n\tAccessLoggerNameVarKey string = \"access_logger_names\"\n)\n\ntype extraFieldsSlogHandler struct {\n\tdefaultHandler slog.Handler\n\tcore           zapcore.Core\n\tmoduleID       string\n\tgroup          string\n\tattrs          []slog.Attr\n}\n\nfunc (e 
*extraFieldsSlogHandler) Enabled(ctx context.Context, level slog.Level) bool {\n\treturn e.defaultHandler.Enabled(ctx, level)\n}\n\nfunc (e *extraFieldsSlogHandler) Handle(ctx context.Context, record slog.Record) error {\n\tif elf, ok := ctx.Value(ExtraLogFieldsCtxKey).(*ExtraLogFields); ok {\n\t\treturn elf.getSloggerHandler(e).Handle(ctx, record)\n\t}\n\n\treturn e.defaultHandler.Handle(ctx, record)\n}\n\nfunc (e *extraFieldsSlogHandler) WithAttrs(attrs []slog.Attr) slog.Handler {\n\treturn &extraFieldsSlogHandler{\n\t\te.defaultHandler.WithAttrs(attrs),\n\t\te.core,\n\t\te.moduleID,\n\t\te.group,\n\t\tappend(e.attrs, attrs...),\n\t}\n}\n\nfunc (e *extraFieldsSlogHandler) WithGroup(name string) slog.Handler {\n\treturn &extraFieldsSlogHandler{\n\t\te.defaultHandler.WithGroup(name),\n\t\te.core,\n\t\te.moduleID,\n\t\tname,\n\t\te.attrs,\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/map/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage maphandler\n\nimport (\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"map\", parseCaddyfile)\n}\n\n// parseCaddyfile sets up the map handler from Caddyfile tokens. Syntax:\n//\n//\tmap [<matcher>] <source> <destinations...> {\n//\t    [~]<input> <outputs...>\n//\t    default    <defaults...>\n//\t}\n//\n// If the input value is prefixed with a tilde (~), then the input will be parsed as a\n// regular expression.\n//\n// The Caddyfile adapter treats outputs that are a literal hyphen (-) as a null/nil\n// value. 
This is useful if you want to fall back to default for that particular output.\n//\n// The number of outputs for each mapping must not be more than the number of destinations.\n// However, for convenience, there may be fewer outputs than destinations and any missing\n// outputs will be filled in implicitly.\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\n\tvar handler Handler\n\n\t// source\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\thandler.Source = h.Val()\n\n\t// destinations\n\thandler.Destinations = h.RemainingArgs()\n\tif len(handler.Destinations) == 0 {\n\t\treturn nil, h.Err(\"missing destination argument(s)\")\n\t}\n\tfor _, dest := range handler.Destinations {\n\t\tif shorthand := httpcaddyfile.WasReplacedPlaceholderShorthand(dest); shorthand != \"\" {\n\t\t\treturn nil, h.Errf(\"destination %s conflicts with a Caddyfile placeholder shorthand\", shorthand)\n\t\t}\n\t}\n\n\t// mappings\n\tfor h.NextBlock(0) {\n\t\t// defaults are a special case\n\t\tif h.Val() == \"default\" {\n\t\t\tif len(handler.Defaults) > 0 {\n\t\t\t\treturn nil, h.Err(\"defaults already defined\")\n\t\t\t}\n\t\t\thandler.Defaults = h.RemainingArgs()\n\t\t\tfor len(handler.Defaults) < len(handler.Destinations) {\n\t\t\t\thandler.Defaults = append(handler.Defaults, \"\")\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// every line maps an input value to one or more outputs\n\t\tin := h.Val()\n\t\tvar outs []any\n\t\tfor h.NextArg() {\n\t\t\tval := h.ScalarVal()\n\t\t\tif val == \"-\" {\n\t\t\t\touts = append(outs, nil)\n\t\t\t} else {\n\t\t\t\touts = append(outs, val)\n\t\t\t}\n\t\t}\n\n\t\t// cannot have more outputs than destinations\n\t\tif len(outs) > len(handler.Destinations) {\n\t\t\treturn nil, h.Err(\"too many outputs\")\n\t\t}\n\n\t\t// for convenience, can have fewer outputs than destinations, but the\n\t\t// underlying handler won't accept that, so we fill in nil values\n\t\tfor len(outs) < 
len(handler.Destinations) {\n\t\t\touts = append(outs, nil)\n\t\t}\n\n\t\t// create the mapping\n\t\tmapping := Mapping{Outputs: outs}\n\t\tif strings.HasPrefix(in, \"~\") {\n\t\t\tmapping.InputRegexp = in[1:]\n\t\t} else {\n\t\t\tmapping.Input = in\n\t\t}\n\n\t\thandler.Mappings = append(handler.Mappings, mapping)\n\t}\n\treturn handler, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/map/map.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage maphandler\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"regexp\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Handler{})\n}\n\n// Handler implements a middleware that maps inputs to outputs. Specifically, it\n// compares a source value against the map inputs, and for one that matches, it\n// applies the output values to each destination. 
Destinations become placeholder\n// names.\n//\n// Mapped placeholders are not evaluated until they are used, so even for very\n// large mappings, this handler is quite efficient.\ntype Handler struct {\n\t// Source is the placeholder from which to get the input value.\n\tSource string `json:\"source,omitempty\"`\n\n\t// Destinations are the names of placeholders in which to store the outputs.\n\t// Destination values should be wrapped in braces, for example, {my_placeholder}.\n\tDestinations []string `json:\"destinations,omitempty\"`\n\n\t// Mappings from source values (inputs) to destination values (outputs).\n\t// The first matching, non-nil mapping will be applied.\n\tMappings []Mapping `json:\"mappings,omitempty\"`\n\n\t// If no mappings match or if the mapped output is null/nil, the associated\n\t// default output will be applied (optional).\n\tDefaults []string `json:\"defaults,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Handler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.map\",\n\t\tNew: func() caddy.Module { return new(Handler) },\n\t}\n}\n\n// Provision sets up h.\nfunc (h *Handler) Provision(_ caddy.Context) error {\n\tfor j, dest := range h.Destinations {\n\t\tif strings.Count(dest, \"{\") != 1 || !strings.HasPrefix(dest, \"{\") {\n\t\t\treturn fmt.Errorf(\"destination must be a placeholder and only a placeholder\")\n\t\t}\n\t\th.Destinations[j] = strings.Trim(dest, \"{}\")\n\t}\n\n\tfor i, m := range h.Mappings {\n\t\tif m.InputRegexp == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tvar err error\n\t\th.Mappings[i].re, err = regexp.Compile(m.InputRegexp)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"compiling regexp for mapping %d: %v\", i, err)\n\t\t}\n\t}\n\n\t// TODO: improve efficiency even further by using an actual map type\n\t// for the non-regexp mappings, OR sort them and do a binary search\n\n\treturn nil\n}\n\n// Validate ensures that h is configured properly.\nfunc (h 
*Handler) Validate() error {\n\tnDest, nDef := len(h.Destinations), len(h.Defaults)\n\tif nDef > 0 && nDef != nDest {\n\t\treturn fmt.Errorf(\"%d destinations != %d defaults\", nDest, nDef)\n\t}\n\n\tseen := make(map[string]int)\n\tfor i, m := range h.Mappings {\n\t\t// prevent confusing/ambiguous mappings\n\t\tif m.Input != \"\" && m.InputRegexp != \"\" {\n\t\t\treturn fmt.Errorf(\"mapping %d has both input and input_regexp fields specified, which is confusing\", i)\n\t\t}\n\n\t\t// prevent duplicate mappings\n\t\tinput := m.Input\n\t\tif m.InputRegexp != \"\" {\n\t\t\tinput = m.InputRegexp\n\t\t}\n\t\tif prev, ok := seen[input]; ok {\n\t\t\treturn fmt.Errorf(\"mapping %d has a duplicate input '%s' previously used with mapping %d\", i, input, prev)\n\t\t}\n\t\tseen[input] = i\n\n\t\t// ensure mappings have 1:1 output-to-destination correspondence\n\t\tnOut := len(m.Outputs)\n\t\tif nOut != nDest {\n\t\t\treturn fmt.Errorf(\"mapping %d has %d outputs but there are %d destinations defined\", i, nOut, nDest)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (h Handler) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// defer work until a variable is actually evaluated by using replacer's Map callback\n\trepl.Map(func(key string) (any, bool) {\n\t\t// return early if the variable is not even a configured destination\n\t\tdestIdx := slices.Index(h.Destinations, key)\n\t\tif destIdx < 0 {\n\t\t\treturn nil, false\n\t\t}\n\n\t\tinput := repl.ReplaceAll(h.Source, \"\")\n\n\t\t// find the first mapping matching the input and return\n\t\t// the requested destination/output value\n\t\tfor _, m := range h.Mappings {\n\t\t\toutput := m.Outputs[destIdx]\n\t\t\tif output == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\toutputStr := caddy.ToString(output)\n\n\t\t\t// evaluate regular expression if configured\n\t\t\tif m.re != nil {\n\t\t\t\tvar result []byte\n\t\t\t\tmatches := 
m.re.FindStringSubmatchIndex(input)\n\t\t\t\tif matches == nil {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tresult = m.re.ExpandString(result, outputStr, input, matches)\n\t\t\t\treturn string(result), true\n\t\t\t}\n\n\t\t\t// otherwise simple string comparison\n\t\t\tif input == m.Input {\n\t\t\t\treturn repl.ReplaceAll(outputStr, \"\"), true\n\t\t\t}\n\t\t}\n\n\t\t// fall back to default if no match or if matched nil value\n\t\tif len(h.Defaults) > destIdx {\n\t\t\treturn repl.ReplaceAll(h.Defaults[destIdx], \"\"), true\n\t\t}\n\n\t\treturn nil, true\n\t})\n\n\treturn next.ServeHTTP(w, r)\n}\n\n// Mapping describes a mapping from input to outputs.\ntype Mapping struct {\n\t// The input value to match. Must be distinct from other mappings.\n\t// Mutually exclusive to input_regexp.\n\tInput string `json:\"input,omitempty\"`\n\n\t// The input regular expression to match. Mutually exclusive to input.\n\tInputRegexp string `json:\"input_regexp,omitempty\"`\n\n\t// Upon a match with the input, each output is positionally correlated\n\t// with each destination of the parent handler. An output that is null\n\t// (nil) will be treated as if it was not mapped at all.\n\tOutputs []any `json:\"outputs,omitempty\"`\n\n\tre *regexp.Regexp\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Handler)(nil)\n\t_ caddy.Validator             = (*Handler)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Handler)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/map/map_test.go",
    "content": "package maphandler\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc TestHandler(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\thandler Handler\n\t\treqURI  string\n\t\texpect  map[string]any\n\t}{\n\t\t{\n\t\t\treqURI: \"/foo\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri.path}\",\n\t\t\t\tDestinations: []string{\"{output}\"},\n\t\t\t\tMappings: []Mapping{\n\t\t\t\t\t{\n\t\t\t\t\t\tInput:   \"/foo\",\n\t\t\t\t\t\tOutputs: []any{\"FOO\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"FOO\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\treqURI: \"/abcdef\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri.path}\",\n\t\t\t\tDestinations: []string{\"{output}\"},\n\t\t\t\tMappings: []Mapping{\n\t\t\t\t\t{\n\t\t\t\t\t\tInputRegexp: \"(/abc)\",\n\t\t\t\t\t\tOutputs:     []any{\"ABC\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"ABC\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\treqURI: \"/ABCxyzDEF\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri.path}\",\n\t\t\t\tDestinations: []string{\"{output}\"},\n\t\t\t\tMappings: []Mapping{\n\t\t\t\t\t{\n\t\t\t\t\t\tInputRegexp: \"(xyz)\",\n\t\t\t\t\t\tOutputs:     []any{\"...${1}...\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"...xyz...\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t// Test case from https://caddy.community/t/map-directive-and-regular-expressions/13866/14?u=matt\n\t\t\treqURI: \"/?s=0%27+AND+%28SELECT+0+FROM+%28SELECT+count%28%2A%29%2C+CONCAT%28%28SELECT+%40%40version%29%2C+0x23%2C+FLOOR%28RAND%280%29%2A2%29%29+AS+x+FROM+information_schema.columns+GROUP+BY+x%29+y%29+-+-+%27\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri}\",\n\t\t\t\tDestinations: 
[]string{\"{output}\"},\n\t\t\t\tMappings: []Mapping{\n\t\t\t\t\t{\n\t\t\t\t\t\tInputRegexp: \"(?i)(\\\\^|`|<|>|%|\\\\\\\\|\\\\{|\\\\}|\\\\|)\",\n\t\t\t\t\t\tOutputs:     []any{\"3\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"3\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\treqURI: \"/foo\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri.path}\",\n\t\t\t\tDestinations: []string{\"{output}\"},\n\t\t\t\tMappings: []Mapping{\n\t\t\t\t\t{\n\t\t\t\t\t\tInput:   \"/foo\",\n\t\t\t\t\t\tOutputs: []any{\"{testvar}\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"testing\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\treqURI: \"/foo\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri.path}\",\n\t\t\t\tDestinations: []string{\"{output}\"},\n\t\t\t\tDefaults:     []string{\"default\"},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"default\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\treqURI: \"/foo\",\n\t\t\thandler: Handler{\n\t\t\t\tSource:       \"{http.request.uri.path}\",\n\t\t\t\tDestinations: []string{\"{output}\"},\n\t\t\t\tDefaults:     []string{\"{testvar}\"},\n\t\t\t},\n\t\t\texpect: map[string]any{\n\t\t\t\t\"output\": \"testing\",\n\t\t\t},\n\t\t},\n\t} {\n\t\tif err := tc.handler.Provision(caddy.Context{}); err != nil {\n\t\t\tt.Fatalf(\"Test %d: Provisioning handler: %v\", i, err)\n\t\t}\n\n\t\treq, err := http.NewRequest(http.MethodGet, tc.reqURI, nil)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d: Creating request: %v\", i, err)\n\t\t}\n\t\trepl := caddyhttp.NewTestReplacer(req)\n\t\trepl.Set(\"testvar\", \"testing\")\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\n\t\trr := httptest.NewRecorder()\n\t\tnoop := caddyhttp.HandlerFunc(func(_ http.ResponseWriter, _ *http.Request) error { return nil })\n\n\t\tif err := tc.handler.ServeHTTP(rr, req, noop); err != nil {\n\t\t\tt.Errorf(\"Test 
%d: Handler returned error: %v\", i, err)\n\t\t\tcontinue\n\t\t}\n\n\t\tfor key, expected := range tc.expect {\n\t\t\tactual, _ := repl.Get(key)\n\t\t\tif !reflect.DeepEqual(actual, expected) {\n\t\t\t\tt.Errorf(\"Test %d: Expected %#v but got %#v for {%s}\", i, expected, actual, key)\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/marshalers.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"crypto/tls\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"go.uber.org/zap/zapcore\"\n)\n\n// LoggableHTTPRequest makes an HTTP request loggable with zap.Object().\ntype LoggableHTTPRequest struct {\n\t*http.Request\n\n\tShouldLogCredentials bool\n}\n\n// MarshalLogObject satisfies the zapcore.ObjectMarshaler interface.\nfunc (r LoggableHTTPRequest) MarshalLogObject(enc zapcore.ObjectEncoder) error {\n\tip, port, err := net.SplitHostPort(r.RemoteAddr)\n\tif err != nil {\n\t\tip = r.RemoteAddr\n\t\tport = \"\"\n\t}\n\n\tenc.AddString(\"remote_ip\", ip)\n\tenc.AddString(\"remote_port\", port)\n\tif ip, ok := GetVar(r.Context(), ClientIPVarKey).(string); ok {\n\t\tenc.AddString(\"client_ip\", ip)\n\t}\n\tenc.AddString(\"proto\", r.Proto)\n\tenc.AddString(\"method\", r.Method)\n\tenc.AddString(\"host\", r.Host)\n\tenc.AddString(\"uri\", r.RequestURI)\n\tenc.AddObject(\"headers\", LoggableHTTPHeader{\n\t\tHeader:               r.Header,\n\t\tShouldLogCredentials: r.ShouldLogCredentials,\n\t})\n\tif r.TransferEncoding != nil {\n\t\tenc.AddArray(\"transfer_encoding\", LoggableStringArray(r.TransferEncoding))\n\t}\n\tif r.TLS != nil {\n\t\tenc.AddObject(\"tls\", LoggableTLSConnState(*r.TLS))\n\t}\n\treturn nil\n}\n\n// LoggableHTTPHeader makes an HTTP header loggable with zap.Object().\n// Headers with 
potentially sensitive information (Cookie, Set-Cookie,\n// Authorization, and Proxy-Authorization) are logged with empty values.\ntype LoggableHTTPHeader struct {\n\thttp.Header\n\n\tShouldLogCredentials bool\n}\n\n// MarshalLogObject satisfies the zapcore.ObjectMarshaler interface.\nfunc (h LoggableHTTPHeader) MarshalLogObject(enc zapcore.ObjectEncoder) error {\n\tif h.Header == nil {\n\t\treturn nil\n\t}\n\tfor key, val := range h.Header {\n\t\tif !h.ShouldLogCredentials {\n\t\t\tswitch strings.ToLower(key) {\n\t\t\tcase \"cookie\", \"set-cookie\", \"authorization\", \"proxy-authorization\":\n\t\t\t\tval = []string{\"REDACTED\"} // see #5669. I still think ▒▒▒▒ would be cool.\n\t\t\t}\n\t\t}\n\t\tenc.AddArray(key, LoggableStringArray(val))\n\t}\n\treturn nil\n}\n\n// LoggableStringArray makes a slice of strings marshalable for logging.\ntype LoggableStringArray []string\n\n// MarshalLogArray satisfies the zapcore.ArrayMarshaler interface.\nfunc (sa LoggableStringArray) MarshalLogArray(enc zapcore.ArrayEncoder) error {\n\tif sa == nil {\n\t\treturn nil\n\t}\n\tfor _, s := range sa {\n\t\tenc.AppendString(s)\n\t}\n\treturn nil\n}\n\n// LoggableTLSConnState makes a TLS connection state loggable with zap.Object().\ntype LoggableTLSConnState tls.ConnectionState\n\n// MarshalLogObject satisfies the zapcore.ObjectMarshaler interface.\nfunc (t LoggableTLSConnState) MarshalLogObject(enc zapcore.ObjectEncoder) error {\n\tenc.AddBool(\"resumed\", t.DidResume)\n\tenc.AddUint16(\"version\", t.Version)\n\tenc.AddUint16(\"cipher_suite\", t.CipherSuite)\n\tenc.AddString(\"proto\", t.NegotiatedProtocol)\n\tenc.AddString(\"server_name\", t.ServerName)\n\tenc.AddBool(\"ech\", t.ECHAccepted)\n\tif len(t.PeerCertificates) > 0 {\n\t\tenc.AddString(\"client_common_name\", t.PeerCertificates[0].Subject.CommonName)\n\t\tenc.AddString(\"client_serial\", t.PeerCertificates[0].SerialNumber.String())\n\t}\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ zapcore.ObjectMarshaler = 
(*LoggableHTTPRequest)(nil)\n\t_ zapcore.ObjectMarshaler = (*LoggableHTTPHeader)(nil)\n\t_ zapcore.ArrayMarshaler  = (*LoggableStringArray)(nil)\n\t_ zapcore.ObjectMarshaler = (*LoggableTLSConnState)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/matchers.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/textproto\"\n\t\"net/url\"\n\t\"path\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/google/cel-go/cel\"\n\t\"github.com/google/cel-go/common/types\"\n\t\"github.com/google/cel-go/common/types/ref\"\n\t\"golang.org/x/net/idna\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\ntype (\n\t// MatchHost matches requests by the Host value (case-insensitive).\n\t//\n\t// When used in a top-level HTTP route,\n\t// [qualifying domain names](/docs/automatic-https#hostname-requirements)\n\t// may trigger [automatic HTTPS](/docs/automatic-https), which automatically\n\t// provisions and renews certificates for you. Before doing this, you\n\t// should ensure that DNS records for these domains are properly configured,\n\t// especially A/AAAA pointed at your server.\n\t//\n\t// Automatic HTTPS can be\n\t// [customized or disabled](/docs/modules/http#servers/automatic_https).\n\t//\n\t// Wildcards (`*`) may be used to represent exactly one label of the\n\t// hostname, in accordance with RFC 1034 (because host matchers are also\n\t// used for automatic HTTPS which influences TLS certificates). 
Thus,\n\t// a host of `*` matches hosts like `localhost` or `internal` but not\n\t// `example.com`. To catch all hosts, omit the host matcher entirely.\n\t//\n\t// The wildcard can be useful for matching all subdomains, for example:\n\t// `*.example.com` matches `foo.example.com` but not `foo.bar.example.com`.\n\t//\n\t// Duplicate entries will return an error.\n\tMatchHost []string\n\n\t// MatchPath case-insensitively matches requests by the URI's path. Path\n\t// matching is exact, not prefix-based, giving you more control and clarity\n\t// over matching. Wildcards (`*`) may be used:\n\t//\n\t// - At the end only, for a prefix match (`/prefix/*`)\n\t// - At the beginning only, for a suffix match (`*.suffix`)\n\t// - On both sides only, for a substring match (`*/contains/*`)\n\t// - In the middle, for a globular match (`/accounts/*/info`)\n\t//\n\t// Slashes are significant; i.e. `/foo*` matches `/foo`, `/foo/`, `/foo/bar`,\n\t// and `/foobar`; but `/foo/*` does not match `/foo` or `/foobar`. Valid\n\t// paths start with a slash `/`.\n\t//\n\t// Because there are, in general, multiple possible escaped forms of any\n\t// path, path matchers operate in unescaped space; that is, path matchers\n\t// should be written in their unescaped form to prevent ambiguities and\n\t// possible security issues, as all request paths will be normalized to\n\t// their unescaped forms before matcher evaluation.\n\t//\n\t// However, escape sequences in a match pattern are supported; they are\n\t// compared with the request's raw/escaped path for those bytes only.\n\t// In other words, a matcher of `/foo%2Fbar` will match a request path\n\t// of precisely `/foo%2Fbar`, but not `/foo/bar`. 
It follows that matching\n\t// the literal percent sign (%) in normalized space can be done using the\n\t// escaped form, `%25`.\n\t//\n\t// Even though wildcards (`*`) operate in the normalized space, the special\n\t// escaped wildcard (`%*`), which is not a valid escape sequence, may be\n\t// used in place of a span that should NOT be decoded; that is, `/bands/%*`\n\t// will match `/bands/AC%2fDC` whereas `/bands/*` will not.\n\t//\n\t// This matcher is fast, so it does not support regular expressions or\n\t// capture groups. For slower but more powerful matching, use the\n\t// path_regexp matcher. (Note that due to the special treatment of\n\t// escape sequences in matcher patterns, they may perform slightly slower\n\t// in high-traffic environments.)\n\tMatchPath []string\n\n\t// MatchPathRE matches requests by a regular expression on the URI's path.\n\t// Path matching is performed in the unescaped (decoded) form of the path.\n\t//\n\t// Upon a match, it adds placeholders to the request: `{http.regexp.name.capture_group}`\n\t// where `name` is the regular expression's name, and `capture_group` is either\n\t// the named or positional capture group from the expression itself. If no name\n\t// is given, then the placeholder omits the name: `{http.regexp.capture_group}`\n\t// (potentially leading to collisions).\n\tMatchPathRE struct{ MatchRegexp }\n\n\t// MatchMethod matches requests by the method.\n\tMatchMethod []string\n\n\t// MatchQuery matches requests by the URI's query string. It takes a JSON object\n\t// keyed by the query keys, with an array of string values to match for that key.\n\t// Query key matches are exact, but wildcards may be used for value matches. 
Both\n\t// keys and values may be placeholders.\n\t//\n\t// An example of the structure to match `?key=value&topic=api&query=something` is:\n\t//\n\t// ```json\n\t// {\n\t// \t\"key\": [\"value\"],\n\t//\t\"topic\": [\"api\"],\n\t//\t\"query\": [\"*\"]\n\t// }\n\t// ```\n\t//\n\t// Invalid query strings, including those with bad escapings or illegal characters\n\t// like semicolons, will fail to parse and thus fail to match.\n\t//\n\t// **NOTE:** Notice that query string values are arrays, not singular values. This is\n\t// because repeated keys are valid in query strings, and each one may have a\n\t// different value. This matcher will match for a key if any one of its configured\n\t// values is assigned in the query string. Backend applications relying on query\n\t// strings MUST take into consideration that query string values are arrays and can\n\t// have multiple values.\n\tMatchQuery url.Values\n\n\t// MatchHeader matches requests by header fields. The key is the field\n\t// name and the array is the list of field values. It performs fast,\n\t// exact string comparisons of the field values. Fast prefix, suffix,\n\t// and substring matches can also be done by suffixing, prefixing, or\n\t// surrounding the value with the wildcard `*` character, respectively.\n\t// If a list is null, the header must not exist. If the list is empty,\n\t// the field must simply exist, regardless of its value.\n\t//\n\t// **NOTE:** Notice that header values are arrays, not singular values. This is\n\t// because repeated fields are valid in headers, and each one may have a\n\t// different value. This matcher will match for a field if any one of its configured\n\t// values matches in the header. 
Backend applications relying on headers MUST take\n\t// into consideration that header field values are arrays and can have multiple\n\t// values.\n\tMatchHeader http.Header\n\n\t// MatchHeaderRE matches requests by a regular expression on header fields.\n\t//\n\t// Upon a match, it adds placeholders to the request: `{http.regexp.name.capture_group}`\n\t// where `name` is the regular expression's name, and `capture_group` is either\n\t// the named or positional capture group from the expression itself. If no name\n\t// is given, then the placeholder omits the name: `{http.regexp.capture_group}`\n\t// (potentially leading to collisions).\n\tMatchHeaderRE map[string]*MatchRegexp\n\n\t// MatchProtocol matches requests by protocol. Recognized values are\n\t// \"http\", \"https\", and \"grpc\" for broad protocol matches, or specific\n\t// HTTP versions can be specified like so: \"http/1\", \"http/1.1\",\n\t// \"http/2\", \"http/3\", or minimum versions: \"http/2+\", etc.\n\tMatchProtocol string\n\n\t// MatchTLS matches HTTP requests based on the underlying\n\t// TLS connection state. If this matcher is specified but\n\t// the request did not come over TLS, it will never match.\n\t// If this matcher is specified but is empty and the request\n\t// did come in over TLS, it will always match.\n\tMatchTLS struct {\n\t\t// Matches if the TLS handshake has completed. QUIC 0-RTT early\n\t\t// data may arrive before the handshake completes. Generally, it\n\t\t// is unsafe to replay these requests if they are not idempotent;\n\t\t// additionally, the remote IP of early data packets can more\n\t\t// easily be spoofed. It is conventional to respond with HTTP 425\n\t\t// Too Early if the request cannot risk being processed in this\n\t\t// state.\n\t\tHandshakeComplete *bool `json:\"handshake_complete,omitempty\"`\n\t}\n\n\t// MatchNot matches requests by negating the results of its matcher\n\t// sets. A single \"not\" matcher takes one or more matcher sets. 
Each\n\t// matcher set is OR'ed; in other words, if any matcher set returns\n\t// true, the final result of the \"not\" matcher is false. Individual\n\t// matchers within a set work the same (i.e. different matchers in\n\t// the same set are AND'ed).\n\t//\n\t// NOTE: The generated docs which describe the structure of this\n\t// module are wrong because of how this type unmarshals JSON in a\n\t// custom way. The correct structure is:\n\t//\n\t// ```json\n\t// [\n\t// \t{},\n\t// \t{}\n\t// ]\n\t// ```\n\t//\n\t// where each of the array elements is a matcher set, i.e. an\n\t// object keyed by matcher name.\n\tMatchNot struct {\n\t\tMatcherSetsRaw []caddy.ModuleMap `json:\"-\" caddy:\"namespace=http.matchers\"`\n\t\tMatcherSets    []MatcherSet      `json:\"-\"`\n\t}\n)\n\nfunc init() {\n\tcaddy.RegisterModule(MatchHost{})\n\tcaddy.RegisterModule(MatchPath{})\n\tcaddy.RegisterModule(MatchPathRE{})\n\tcaddy.RegisterModule(MatchMethod{})\n\tcaddy.RegisterModule(MatchQuery{})\n\tcaddy.RegisterModule(MatchHeader{})\n\tcaddy.RegisterModule(MatchHeaderRE{})\n\tcaddy.RegisterModule(new(MatchProtocol))\n\tcaddy.RegisterModule(MatchTLS{})\n\tcaddy.RegisterModule(MatchNot{})\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchHost) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.host\",\n\t\tNew: func() caddy.Module { return new(MatchHost) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchHost) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\t*m = append(*m, d.RemainingArgs()...)\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed host matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Provision sets up and validates m, including making it more efficient for large lists.\nfunc (m MatchHost) Provision(_ caddy.Context) error {\n\t// check for duplicates; they are nonsensical and reduce 
efficiency\n\t// (we could just remove them, but the user should know their config is erroneous)\n\tseen := make(map[string]int, len(m))\n\tfor i, host := range m {\n\t\tasciiHost, err := idna.ToASCII(host)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"converting hostname '%s' to ASCII: %v\", host, err)\n\t\t}\n\t\tnormalizedHost := strings.ToLower(asciiHost)\n\t\tif firstI, ok := seen[normalizedHost]; ok {\n\t\t\treturn fmt.Errorf(\"host at index %d is repeated at index %d: %s\", firstI, i, host)\n\t\t}\n\t\t// Normalize exact hosts for standardized comparison in large-list fastpath later on.\n\t\t// Keep wildcards/placeholders untouched.\n\t\tif m.fuzzy(asciiHost) {\n\t\t\tm[i] = asciiHost\n\t\t} else {\n\t\t\tm[i] = normalizedHost\n\t\t}\n\t\tseen[normalizedHost] = i\n\t}\n\n\tif m.large() {\n\t\t// sort the slice lexicographically, grouping \"fuzzy\" entries (wildcards and placeholders)\n\t\t// at the front of the list; this allows us to use binary search for exact matches, which\n\t\t// we have seen from experience is the most common kind of value in large lists; and any\n\t\t// other kinds of values (wildcards and placeholders) are grouped in front so the linear\n\t\t// search should find a match fairly quickly\n\t\tsort.Slice(m, func(i, j int) bool {\n\t\t\tiInexact, jInexact := m.fuzzy(m[i]), m.fuzzy(m[j])\n\t\t\tif iInexact && !jInexact {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\tif !iInexact && jInexact {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\treturn m[i] < m[j]\n\t\t})\n\t}\n\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchHost) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchHost) MatchWithError(r *http.Request) (bool, error) {\n\treqHost, _, err := net.SplitHostPort(r.Host)\n\tif err != nil {\n\t\t// OK; probably didn't have a port\n\t\treqHost = r.Host\n\n\t\t// make sure we strip the brackets from IPv6 addresses\n\t\treqHost = 
strings.TrimPrefix(reqHost, \"[\")\n\t\treqHost = strings.TrimSuffix(reqHost, \"]\")\n\t}\n\n\tif m.large() {\n\t\treqHostLower := strings.ToLower(reqHost)\n\t\t// fast path: locate exact match using binary search (about 100-1000x faster for large lists)\n\t\tpos := sort.Search(len(m), func(i int) bool {\n\t\t\tif m.fuzzy(m[i]) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\treturn m[i] >= reqHostLower\n\t\t})\n\t\tif pos < len(m) && m[pos] == reqHostLower {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\nouter:\n\tfor _, host := range m {\n\t\t// fast path: if matcher is large, we already know we don't have an exact\n\t\t// match, so we're only looking for fuzzy match now, which should be at the\n\t\t// front of the list; if we have reached a value that is not fuzzy, there\n\t\t// will be no match and we can short-circuit for efficiency\n\t\tif m.large() && !m.fuzzy(host) {\n\t\t\tbreak\n\t\t}\n\n\t\thost = repl.ReplaceAll(host, \"\")\n\t\tif strings.Contains(host, \"*\") {\n\t\t\tpatternParts := strings.Split(host, \".\")\n\t\t\tincomingParts := strings.Split(reqHost, \".\")\n\t\t\tif len(patternParts) != len(incomingParts) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfor i := range patternParts {\n\t\t\t\tif patternParts[i] == \"*\" {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tif !strings.EqualFold(patternParts[i], incomingParts[i]) {\n\t\t\t\t\tcontinue outer\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn true, nil\n\t\t} else if strings.EqualFold(reqHost, host) {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\n\treturn false, nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression host('localhost')\nfunc (MatchHost) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t\"host\",\n\t\t\"host_match_request_list\",\n\t\t[]*cel.Type{cel.ListType(cel.StringType)},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) 
{\n\t\t\trefStringList := stringSliceType\n\t\t\tstrList, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tmatcher := MatchHost(strList.([]string))\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n}\n\n// fuzzy returns true if the given hostname h is not a specific\n// hostname, e.g. has placeholders or wildcards.\nfunc (MatchHost) fuzzy(h string) bool { return strings.ContainsAny(h, \"{*\") }\n\n// large returns true if m is considered to be large. Optimizing\n// the matcher for smaller lists has diminishing returns.\n// See related benchmark function in test file to conduct experiments.\nfunc (m MatchHost) large() bool { return len(m) > 100 }\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchPath) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.path\",\n\t\tNew: func() caddy.Module { return new(MatchPath) },\n\t}\n}\n\n// Provision lower-cases the paths in m to ensure case-insensitive matching.\nfunc (m MatchPath) Provision(_ caddy.Context) error {\n\tfor i := range m {\n\t\tif m[i] == \"*\" && i > 0 {\n\t\t\t// will always match, so just put it first\n\t\t\tm[0] = m[i]\n\t\t\tbreak\n\t\t}\n\t\tm[i] = strings.ToLower(m[i])\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchPath) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchPath) MatchWithError(r *http.Request) (bool, error) {\n\t// Even though RFC 9110 says that path matching is case-sensitive\n\t// (https://www.rfc-editor.org/rfc/rfc9110.html#section-4.2.3),\n\t// we do case-insensitive matching to mitigate security issues\n\t// related to differences between operating systems, applications,\n\t// etc; if case-sensitive matching is needed, the regex matcher\n\t// can be used instead.\n\treqPath := strings.ToLower(r.URL.Path)\n\n\t// See #2917; 
Windows ignores trailing dots and spaces\n\t// when accessing files (sigh), potentially causing a\n\t// security risk (cry) if PHP files end up being served\n\t// as static files, exposing the source code, instead of\n\t// being matched by *.php to be treated as PHP scripts.\n\tif runtime.GOOS == \"windows\" { // issue #5613\n\t\treqPath = strings.TrimRight(reqPath, \". \")\n\t}\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tfor _, matchPattern := range m {\n\t\tmatchPattern = repl.ReplaceAll(matchPattern, \"\")\n\n\t\t// special case: whole path is wildcard; this is unnecessary\n\t\t// as it matches all requests, which is the same as no matcher\n\t\tif matchPattern == \"*\" {\n\t\t\treturn true, nil\n\t\t}\n\n\t\t// Clean the path, merge doubled slashes, etc.\n\t\t// This ensures maliciously crafted requests can't bypass\n\t\t// the path matcher. See #4407. Good security posture\n\t\t// requires that we should do all we can to reduce any\n\t\t// funny-looking paths into \"normalized\" forms such that\n\t\t// weird variants can't sneak by.\n\t\t//\n\t\t// How we clean the path depends on the kind of pattern:\n\t\t// we either merge slashes or we don't. If the pattern\n\t\t// has double slashes, we preserve them in the path.\n\t\t//\n\t\t// TODO: Despite the fact that the *vast* majority of path\n\t\t// matchers have only 1 pattern, a possible optimization is\n\t\t// to remember the cleaned form of the path for future\n\t\t// iterations; it's just that the way we clean depends on\n\t\t// the kind of pattern.\n\n\t\tmergeSlashes := !strings.Contains(matchPattern, \"//\")\n\n\t\t// if '%' appears in the match pattern, we interpret that to mean\n\t\t// the intent is to compare that part of the path in raw/escaped\n\t\t// space; i.e. 
\"%40\"==\"%40\", not \"@\", and \"%2F\"==\"%2F\", not \"/\"\n\t\tif strings.Contains(matchPattern, \"%\") {\n\t\t\treqPathForPattern := CleanPath(r.URL.EscapedPath(), mergeSlashes)\n\t\t\tif m.matchPatternWithEscapeSequence(reqPathForPattern, matchPattern) {\n\t\t\t\treturn true, nil\n\t\t\t}\n\n\t\t\t// doing prefix/suffix/substring matches doesn't make sense\n\t\t\tcontinue\n\t\t}\n\n\t\treqPathForPattern := CleanPath(reqPath, mergeSlashes)\n\n\t\t// for substring, prefix, and suffix matching, only perform those\n\t\t// special, fast matches if they are the only wildcards in the pattern;\n\t\t// otherwise we assume a globular match if any * appears in the middle\n\n\t\t// special case: first and last characters are wildcard,\n\t\t// treat it as a fast substring match\n\t\tif strings.Count(matchPattern, \"*\") == 2 &&\n\t\t\tstrings.HasPrefix(matchPattern, \"*\") &&\n\t\t\tstrings.HasSuffix(matchPattern, \"*\") {\n\t\t\tif strings.Contains(reqPathForPattern, matchPattern[1:len(matchPattern)-1]) {\n\t\t\t\treturn true, nil\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// only perform prefix/suffix match if it is the only wildcard...\n\t\t// I think that is more correct most of the time\n\t\tif strings.Count(matchPattern, \"*\") == 1 {\n\t\t\t// special case: first character is a wildcard,\n\t\t\t// treat it as a fast suffix match\n\t\t\tif strings.HasPrefix(matchPattern, \"*\") {\n\t\t\t\tif strings.HasSuffix(reqPathForPattern, matchPattern[1:]) {\n\t\t\t\t\treturn true, nil\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// special case: last character is a wildcard,\n\t\t\t// treat it as a fast prefix match\n\t\t\tif strings.HasSuffix(matchPattern, \"*\") {\n\t\t\t\tif strings.HasPrefix(reqPathForPattern, matchPattern[:len(matchPattern)-1]) {\n\t\t\t\t\treturn true, nil\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\t// at last, use globular matching, which also is exact matching\n\t\t// if there are no glob/wildcard chars; we ignore the error here\n\t\t// because 
we can't handle it anyway\n\t\tmatches, _ := path.Match(matchPattern, reqPathForPattern)\n\t\tif matches {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\treturn false, nil\n}\n\nfunc (MatchPath) matchPatternWithEscapeSequence(escapedPath, matchPath string) bool {\n\tescapedPath = strings.ToLower(escapedPath)\n\t// We would just compare the pattern against r.URL.Path,\n\t// but the pattern contains %, indicating that we should\n\t// compare at least some part of the path in raw/escaped\n\t// space, not normalized space; so we build the string we\n\t// will compare against by adding the normalized parts\n\t// of the path, then switching to the escaped parts where\n\t// the pattern hints to us wherever % is present.\n\tvar sb strings.Builder\n\n\t// iterate the pattern and escaped path in lock-step;\n\t// increment iPattern every time we consume a char from the pattern,\n\t// increment iPath every time we consume a char from the path;\n\t// iPattern and iPath are our cursors/iterator positions for each string\n\tvar iPattern, iPath int\n\tfor {\n\t\tif iPattern >= len(matchPath) || iPath >= len(escapedPath) {\n\t\t\tbreak\n\t\t}\n\t\t// get the next character from the request path\n\n\t\tpathCh := string(escapedPath[iPath])\n\t\tvar escapedPathCh string\n\n\t\t// normalize (decode) escape sequences\n\t\tif pathCh == \"%\" && len(escapedPath) >= iPath+3 {\n\t\t\t// hold onto this in case we find out the intent is to match in escaped space here;\n\t\t\t// we lowercase it even though technically the spec says: \"For consistency, URI\n\t\t\t// producers and normalizers should use uppercase hexadecimal digits for all percent-\n\t\t\t// encodings\" (RFC 3986 section 2.1) - we lowercased the matcher pattern earlier in\n\t\t\t// provisioning so we do the same here to gain case-insensitivity in equivalence;\n\t\t\t// besides, this string is never shown visibly\n\t\t\tescapedPathCh = strings.ToLower(escapedPath[iPath : iPath+3])\n\n\t\t\tvar err error\n\t\t\tpathCh, err = 
url.PathUnescape(escapedPathCh)\n\t\t\tif err != nil {\n\t\t\t\t// should be impossible unless EscapedPath() is giving us an invalid sequence!\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tiPath += 2 // escape sequence is 2 bytes longer than normal char\n\t\t}\n\n\t\t// now get the next character from the pattern\n\n\t\tnormalize := true\n\t\tswitch matchPath[iPattern] {\n\t\tcase '%':\n\t\t\t// escape sequence\n\n\t\t\t// if not a wildcard (\"%*\"), compare literally; consume next two bytes of pattern\n\t\t\tif len(matchPath) >= iPattern+3 && matchPath[iPattern+1] != '*' {\n\t\t\t\tsb.WriteString(escapedPathCh)\n\t\t\t\tiPath++\n\t\t\t\tiPattern += 2\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\t// escaped wildcard sequence; consume next byte only ('*')\n\t\t\tiPattern++\n\t\t\tnormalize = false\n\n\t\t\tfallthrough\n\t\tcase '*':\n\t\t\t// wildcard, so consume until next matching character\n\t\t\tremaining := escapedPath[iPath:]\n\t\t\tuntil := len(escapedPath) - iPath // go until end of string...\n\t\t\tif iPattern < len(matchPath)-1 {  // ...unless the * is not at the end\n\t\t\t\tnextCh := matchPath[iPattern+1]\n\t\t\t\tuntil = strings.IndexByte(remaining, nextCh)\n\t\t\t\tif until == -1 {\n\t\t\t\t\t// terminating char of wildcard span not found, so definitely no match\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif until == 0 {\n\t\t\t\t// empty span; nothing to add on this iteration\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tnext := remaining[:until]\n\t\t\tif normalize {\n\t\t\t\tvar err error\n\t\t\t\tnext, err = url.PathUnescape(next)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn false // should be impossible anyway\n\t\t\t\t}\n\t\t\t}\n\t\t\tsb.WriteString(next)\n\t\t\tiPath += until\n\t\tdefault:\n\t\t\tsb.WriteString(pathCh)\n\t\t\tiPath++\n\t\t}\n\n\t\tiPattern++\n\t}\n\n\t// we can now treat rawpath globs (%*) as regular globs (*)\n\tmatchPath = strings.ReplaceAll(matchPath, \"%*\", \"*\")\n\n\t// ignore error here because we can't handle it anyway\n\tmatches, _ := 
path.Match(matchPath, strings.ToLower(sb.String()))\n\treturn matches\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression path('*substring*', '*suffix')\nfunc (MatchPath) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t// name of the macro, this is the function name that users see when writing expressions.\n\t\t\"path\",\n\t\t// name of the function that the macro will be rewritten to call.\n\t\t\"path_match_request_list\",\n\t\t// internal data type of the MatchPath value.\n\t\t[]*cel.Type{cel.ListType(cel.StringType)},\n\t\t// function to convert a constant list of strings to a MatchPath instance.\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tstrList, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tmatcher := MatchPath(strList.([]string))\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchPath) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\t*m = append(*m, d.RemainingArgs()...)\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed path matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchPathRE) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.path_regexp\",\n\t\tNew: func() caddy.Module { return new(MatchPathRE) },\n\t}\n}\n\n// Match returns true if r matches m.\nfunc (m MatchPathRE) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchPathRE) MatchWithError(r *http.Request) (bool, error) {\n\trepl := 
r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// Clean the path, merges doubled slashes, etc.\n\t// This ensures maliciously crafted requests can't bypass\n\t// the path matcher. See #4407\n\tcleanedPath := cleanPath(r.URL.Path)\n\n\treturn m.MatchRegexp.Match(cleanedPath, repl), nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression path_regexp('^/bar')\nfunc (MatchPathRE) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\tunnamedPattern, err := CELMatcherImpl(\n\t\t\"path_regexp\",\n\t\t\"path_regexp_request_string\",\n\t\t[]*cel.Type{cel.StringType},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\tpattern := data.(types.String)\n\t\t\tmatcher := MatchPathRE{MatchRegexp{\n\t\t\t\tName:    ctx.Value(MatcherNameCtxKey).(string),\n\t\t\t\tPattern: string(pattern),\n\t\t\t}}\n\t\t\terr := matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnamedPattern, err := CELMatcherImpl(\n\t\t\"path_regexp\",\n\t\t\"path_regexp_request_string_string\",\n\t\t[]*cel.Type{cel.StringType, cel.StringType},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tparams, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tstrParams := params.([]string)\n\t\t\tname := strParams[0]\n\t\t\tif name == \"\" {\n\t\t\t\tname = ctx.Value(MatcherNameCtxKey).(string)\n\t\t\t}\n\t\t\tmatcher := MatchPathRE{MatchRegexp{\n\t\t\t\tName:    name,\n\t\t\t\tPattern: strParams[1],\n\t\t\t}}\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tenvOpts := append(unnamedPattern.CompileOptions(), namedPattern.CompileOptions()...)\n\tprgOpts := append(unnamedPattern.ProgramOptions(), namedPattern.ProgramOptions()...)\n\treturn 
NewMatcherCELLibrary(envOpts, prgOpts), nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchMethod) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.method\",\n\t\tNew: func() caddy.Module { return new(MatchMethod) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchMethod) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\t*m = append(*m, d.RemainingArgs()...)\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed method matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchMethod) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchMethod) MatchWithError(r *http.Request) (bool, error) {\n\treturn slices.Contains(m, r.Method), nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression method('PUT', 'POST')\nfunc (MatchMethod) CELLibrary(_ caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t\"method\",\n\t\t\"method_request_list\",\n\t\t[]*cel.Type{cel.ListType(cel.StringType)},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tstrList, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn MatchMethod(strList.([]string)), nil\n\t\t},\n\t)\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchQuery) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.query\",\n\t\tNew: func() caddy.Module { return new(MatchQuery) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchQuery) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif *m == nil 
{\n\t\t*m = make(map[string][]string)\n\t}\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tfor _, query := range d.RemainingArgs() {\n\t\t\tif query == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbefore, after, found := strings.Cut(query, \"=\")\n\t\t\tif !found {\n\t\t\t\treturn d.Errf(\"malformed query matcher token: %s; must be in param=val format\", d.Val())\n\t\t\t}\n\t\t\turl.Values(*m).Add(before, after)\n\t\t}\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed query matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m. An empty m matches an empty query string.\nfunc (m MatchQuery) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\n// An empty m matches an empty query string.\nfunc (m MatchQuery) MatchWithError(r *http.Request) (bool, error) {\n\t// If no query keys are configured, this only\n\t// matches an empty query string.\n\tif len(m) == 0 {\n\t\treturn len(r.URL.Query()) == 0, nil\n\t}\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// parse query string just once, for efficiency\n\tparsed, err := url.ParseQuery(r.URL.RawQuery)\n\tif err != nil {\n\t\t// Illegal query string. Likely bad escape sequence or unescaped literals.\n\t\t// Note that semicolons in query string have a controversial history. 
Summaries:\n\t\t// - https://github.com/golang/go/issues/50034\n\t\t// - https://github.com/golang/go/issues/25192\n\t\t// Despite the URL WHATWG spec mandating the use of & separators for query strings,\n\t\t// every URL parser implementation is different, and Filippo Valsorda rightly wrote:\n\t\t// \"Relying on parser alignment for security is doomed.\" Overall conclusion is that\n\t\t// splitting on & and rejecting ; in key=value pairs is safer than accepting raw ;.\n\t\t// We regard the Go team's decision as sound and thus reject malformed query strings.\n\t\treturn false, nil\n\t}\n\n\t// Count the amount of matched keys, to ensure we AND\n\t// between all configured query keys; all keys must\n\t// match at least one value.\n\tmatchedKeys := 0\n\tfor param, vals := range m {\n\t\tparam = repl.ReplaceAll(param, \"\")\n\t\tparamVal, found := parsed[param]\n\t\tif !found {\n\t\t\treturn false, nil\n\t\t}\n\t\tfor _, v := range vals {\n\t\t\tv = repl.ReplaceAll(v, \"\")\n\t\t\tif slices.Contains(paramVal, v) || v == \"*\" {\n\t\t\t\tmatchedKeys++\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\treturn matchedKeys == len(m), nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression query({'sort': 'asc'}) || query({'foo': ['*bar*', 'baz']})\nfunc (MatchQuery) CELLibrary(_ caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t\"query\",\n\t\t\"query_matcher_request_map\",\n\t\t[]*cel.Type{CELTypeJSON},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\tmapStrListStr, err := CELValueToMapStrList(data)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn MatchQuery(url.Values(mapStrListStr)), nil\n\t\t},\n\t)\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchHeader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.header\",\n\t\tNew: func() caddy.Module { return new(MatchHeader) 
},\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchHeader) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif *m == nil {\n\t\t*m = make(map[string][]string)\n\t}\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tvar field, val string\n\t\tif !d.Args(&field) {\n\t\t\treturn d.Errf(\"malformed header matcher: expected field\")\n\t\t}\n\n\t\tif strings.HasPrefix(field, \"!\") {\n\t\t\tif len(field) == 1 {\n\t\t\t\treturn d.Errf(\"malformed header matcher: must have field name following ! character\")\n\t\t\t}\n\n\t\t\tfield = field[1:]\n\t\t\theaders := *m\n\t\t\theaders[field] = nil\n\t\t\tm = &headers\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.Errf(\"malformed header matcher: null matching headers cannot have a field value\")\n\t\t\t}\n\t\t} else {\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.Errf(\"malformed header matcher: expected both field and value\")\n\t\t\t}\n\n\t\t\t// If multiple header matchers with the same header field are defined,\n\t\t\t// we want to add the existing to the list of headers (will be OR'ed)\n\t\t\tval = d.Val()\n\t\t\thttp.Header(*m).Add(field, val)\n\t\t}\n\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed header matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchHeader) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchHeader) MatchWithError(r *http.Request) (bool, error) {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\treturn matchHeaders(r.Header, http.Header(m), r.Host, r.TransferEncoding, repl), nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression header({'content-type': 'image/png'})\n//\texpression header({'foo': ['bar', 'baz']}) // match bar or baz\nfunc (MatchHeader) CELLibrary(_ 
caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t\"header\",\n\t\t\"header_matcher_request_map\",\n\t\t[]*cel.Type{CELTypeJSON},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\tmapStrListStr, err := CELValueToMapStrList(data)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn MatchHeader(http.Header(mapStrListStr)), nil\n\t\t},\n\t)\n}\n\n// getHeaderFieldVals returns the field values for the given fieldName from input.\n// The host parameter should be obtained from the http.Request.Host field, and the\n// transferEncoding from http.Request.TransferEncoding, since net/http removes them\n// from the header map.\nfunc getHeaderFieldVals(input http.Header, fieldName, host string, transferEncoding []string) []string {\n\tfieldName = textproto.CanonicalMIMEHeaderKey(fieldName)\n\tif fieldName == \"Host\" && host != \"\" {\n\t\treturn []string{host}\n\t}\n\tif fieldName == \"Transfer-Encoding\" && input[fieldName] == nil {\n\t\treturn transferEncoding\n\t}\n\treturn input[fieldName]\n}\n\n// matchHeaders returns true if input matches the criteria in against without regex.\n// The host parameter should be obtained from the http.Request.Host field since\n// net/http removes it from the header map.\nfunc matchHeaders(input, against http.Header, host string, transferEncoding []string, repl *caddy.Replacer) bool {\n\tfor field, allowedFieldVals := range against {\n\t\tactualFieldVals := getHeaderFieldVals(input, field, host, transferEncoding)\n\t\tif allowedFieldVals != nil && len(allowedFieldVals) == 0 && actualFieldVals != nil {\n\t\t\t// a non-nil but empty list of allowed values means\n\t\t\t// match if the header field exists at all\n\t\t\tcontinue\n\t\t}\n\t\tif allowedFieldVals == nil && actualFieldVals == nil {\n\t\t\t// a nil list means match if the header does not exist at all\n\t\t\tcontinue\n\t\t}\n\t\tvar match bool\n\tfieldVals:\n\t\tfor _, actualFieldVal := range actualFieldVals {\n\t\t\tfor _, 
allowedFieldVal := range allowedFieldVals {\n\t\t\t\tif repl != nil {\n\t\t\t\t\tallowedFieldVal = repl.ReplaceAll(allowedFieldVal, \"\")\n\t\t\t\t}\n\t\t\t\tswitch {\n\t\t\t\tcase allowedFieldVal == \"*\":\n\t\t\t\t\tmatch = true\n\t\t\t\tcase strings.HasPrefix(allowedFieldVal, \"*\") && strings.HasSuffix(allowedFieldVal, \"*\"):\n\t\t\t\t\tmatch = strings.Contains(actualFieldVal, allowedFieldVal[1:len(allowedFieldVal)-1])\n\t\t\t\tcase strings.HasPrefix(allowedFieldVal, \"*\"):\n\t\t\t\t\tmatch = strings.HasSuffix(actualFieldVal, allowedFieldVal[1:])\n\t\t\t\tcase strings.HasSuffix(allowedFieldVal, \"*\"):\n\t\t\t\t\tmatch = strings.HasPrefix(actualFieldVal, allowedFieldVal[:len(allowedFieldVal)-1])\n\t\t\t\tdefault:\n\t\t\t\t\tmatch = actualFieldVal == allowedFieldVal\n\t\t\t\t}\n\t\t\t\tif match {\n\t\t\t\t\tbreak fieldVals\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif !match {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchHeaderRE) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.header_regexp\",\n\t\tNew: func() caddy.Module { return new(MatchHeaderRE) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchHeaderRE) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif *m == nil {\n\t\t*m = make(map[string]*MatchRegexp)\n\t}\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tvar first, second, third string\n\t\tif !d.Args(&first, &second) {\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\tvar name, field, val string\n\t\tif d.Args(&third) {\n\t\t\tname = first\n\t\t\tfield = second\n\t\t\tval = third\n\t\t} else {\n\t\t\tfield = first\n\t\t\tval = second\n\t\t}\n\n\t\t// Default to the named matcher's name, if no regexp name is provided\n\t\tif name == \"\" {\n\t\t\tname = d.GetContextString(caddyfile.MatcherNameCtxKey)\n\t\t}\n\n\t\t// If there's already a pattern for this field\n\t\t// then we would end up 
overwriting the old one\n\t\tif (*m)[field] != nil {\n\t\t\treturn d.Errf(\"header_regexp matcher can only be used once per named matcher, per header field: %s\", field)\n\t\t}\n\n\t\t(*m)[field] = &MatchRegexp{Pattern: val, Name: name}\n\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed header_regexp matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchHeaderRE) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchHeaderRE) MatchWithError(r *http.Request) (bool, error) {\n\tfor field, rm := range m {\n\t\tactualFieldVals := getHeaderFieldVals(r.Header, field, r.Host, r.TransferEncoding)\n\t\tmatch := false\n\tfieldVal:\n\t\tfor _, actualFieldVal := range actualFieldVals {\n\t\t\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\t\t\tif rm.Match(actualFieldVal, repl) {\n\t\t\t\tmatch = true\n\t\t\t\tbreak fieldVal\n\t\t\t}\n\t\t}\n\t\tif !match {\n\t\t\treturn false, nil\n\t\t}\n\t}\n\treturn true, nil\n}\n\n// Provision compiles m's regular expressions.\nfunc (m MatchHeaderRE) Provision(ctx caddy.Context) error {\n\tfor _, rm := range m {\n\t\terr := rm.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// Validate validates m's regular expressions.\nfunc (m MatchHeaderRE) Validate() error {\n\tfor _, rm := range m {\n\t\terr := rm.Validate()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression header_regexp('foo', 'Field', 'fo+')\nfunc (MatchHeaderRE) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\tunnamedPattern, err := CELMatcherImpl(\n\t\t\"header_regexp\",\n\t\t\"header_regexp_request_string_string\",\n\t\t[]*cel.Type{cel.StringType, cel.StringType},\n\t\tfunc(data ref.Val) 
(RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tparams, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tstrParams := params.([]string)\n\t\t\tmatcher := MatchHeaderRE{}\n\t\t\tmatcher[strParams[0]] = &MatchRegexp{\n\t\t\t\tPattern: strParams[1],\n\t\t\t\tName:    ctx.Value(MatcherNameCtxKey).(string),\n\t\t\t}\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnamedPattern, err := CELMatcherImpl(\n\t\t\"header_regexp\",\n\t\t\"header_regexp_request_string_string_string\",\n\t\t[]*cel.Type{cel.StringType, cel.StringType, cel.StringType},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tparams, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tstrParams := params.([]string)\n\t\t\tname := strParams[0]\n\t\t\tif name == \"\" {\n\t\t\t\tname = ctx.Value(MatcherNameCtxKey).(string)\n\t\t\t}\n\t\t\tmatcher := MatchHeaderRE{}\n\t\t\tmatcher[strParams[1]] = &MatchRegexp{\n\t\t\t\tPattern: strParams[2],\n\t\t\t\tName:    name,\n\t\t\t}\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tenvOpts := append(unnamedPattern.CompileOptions(), namedPattern.CompileOptions()...)\n\tprgOpts := append(unnamedPattern.ProgramOptions(), namedPattern.ProgramOptions()...)\n\treturn NewMatcherCELLibrary(envOpts, prgOpts), nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchProtocol) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.protocol\",\n\t\tNew: func() caddy.Module { return new(MatchProtocol) },\n\t}\n}\n\n// Match returns true if r matches m.\nfunc (m MatchProtocol) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns 
true if r matches m.\nfunc (m MatchProtocol) MatchWithError(r *http.Request) (bool, error) {\n\tswitch string(m) {\n\tcase \"grpc\":\n\t\treturn strings.HasPrefix(r.Header.Get(\"content-type\"), \"application/grpc\"), nil\n\tcase \"https\":\n\t\treturn r.TLS != nil, nil\n\tcase \"http\":\n\t\treturn r.TLS == nil, nil\n\tcase \"http/1.0\":\n\t\treturn r.ProtoMajor == 1 && r.ProtoMinor == 0, nil\n\tcase \"http/1.0+\":\n\t\treturn r.ProtoAtLeast(1, 0), nil\n\tcase \"http/1.1\":\n\t\treturn r.ProtoMajor == 1 && r.ProtoMinor == 1, nil\n\tcase \"http/1.1+\":\n\t\treturn r.ProtoAtLeast(1, 1), nil\n\tcase \"http/2\":\n\t\treturn r.ProtoMajor == 2, nil\n\tcase \"http/2+\":\n\t\treturn r.ProtoAtLeast(2, 0), nil\n\tcase \"http/3\":\n\t\treturn r.ProtoMajor == 3, nil\n\tcase \"http/3+\":\n\t\treturn r.ProtoAtLeast(3, 0), nil\n\t}\n\treturn false, nil\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchProtocol) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tvar proto string\n\t\tif !d.Args(&proto) {\n\t\t\treturn d.Err(\"expected exactly one protocol\")\n\t\t}\n\t\t*m = MatchProtocol(proto)\n\t}\n\treturn nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression protocol('https')\nfunc (MatchProtocol) CELLibrary(_ caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t\"protocol\",\n\t\t\"protocol_request_string\",\n\t\t[]*cel.Type{cel.StringType},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\tprotocolStr, ok := data.(types.String)\n\t\t\tif !ok {\n\t\t\t\treturn nil, errors.New(\"protocol argument was not a string\")\n\t\t\t}\n\t\t\treturn MatchProtocol(strings.ToLower(string(protocolStr))), nil\n\t\t},\n\t)\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchTLS) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  
\"http.matchers.tls\",\n\t\tNew: func() caddy.Module { return new(MatchTLS) },\n\t}\n}\n\n// Match returns true if r matches m.\nfunc (m MatchTLS) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchTLS) MatchWithError(r *http.Request) (bool, error) {\n\tif r.TLS == nil {\n\t\treturn false, nil\n\t}\n\tif m.HandshakeComplete != nil {\n\t\tif (!*m.HandshakeComplete && r.TLS.HandshakeComplete) ||\n\t\t\t(*m.HandshakeComplete && !r.TLS.HandshakeComplete) {\n\t\t\treturn false, nil\n\t\t}\n\t}\n\treturn true, nil\n}\n\n// UnmarshalCaddyfile parses Caddyfile tokens for this matcher. Syntax:\n//\n// ... tls [early_data]\n//\n// EXPERIMENTAL SYNTAX: Subject to change.\nfunc (m *MatchTLS) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tif d.NextArg() {\n\t\t\tswitch d.Val() {\n\t\t\tcase \"early_data\":\n\t\t\t\tvar false bool\n\t\t\t\tm.HandshakeComplete = &false\n\t\t\tdefault:\n\t\t\t\treturn d.Errf(\"unrecognized option '%s'\", d.Val())\n\t\t\t}\n\t\t}\n\t\tif d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed tls matcher: blocks are not supported yet\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchNot) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.not\",\n\t\tNew: func() caddy.Module { return new(MatchNot) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchNot) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tmatcherSet, err := ParseCaddyfileNestedMatcherSet(d)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tm.MatcherSetsRaw = append(m.MatcherSetsRaw, matcherSet)\n\t}\n\treturn nil\n}\n\n// UnmarshalJSON satisfies json.Unmarshaler. 
It puts the JSON\n// bytes directly into m's MatcherSetsRaw field.\nfunc (m *MatchNot) UnmarshalJSON(data []byte) error {\n\treturn json.Unmarshal(data, &m.MatcherSetsRaw)\n}\n\n// MarshalJSON satisfies json.Marshaler by marshaling\n// m's raw matcher sets.\nfunc (m MatchNot) MarshalJSON() ([]byte, error) {\n\treturn json.Marshal(m.MatcherSetsRaw)\n}\n\n// Provision loads the matcher modules to be negated.\nfunc (m *MatchNot) Provision(ctx caddy.Context) error {\n\tmatcherSets, err := ctx.LoadModule(m, \"MatcherSetsRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading matcher sets: %v\", err)\n\t}\n\tfor _, modMap := range matcherSets.([]map[string]any) {\n\t\tvar ms MatcherSet\n\t\tfor _, modIface := range modMap {\n\t\t\tif mod, ok := modIface.(RequestMatcherWithError); ok {\n\t\t\t\tms = append(ms, mod)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif mod, ok := modIface.(RequestMatcher); ok {\n\t\t\t\tms = append(ms, mod)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"module is not a request matcher: %T\", modIface)\n\t\t}\n\t\tm.MatcherSets = append(m.MatcherSets, ms)\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m. Since this matcher negates\n// the embedded matchers, false is returned if any of its matcher\n// sets return true.\nfunc (m MatchNot) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m. Since this matcher\n// negates the embedded matchers, false is returned if any of its\n// matcher sets return true.\nfunc (m MatchNot) MatchWithError(r *http.Request) (bool, error) {\n\tfor _, ms := range m.MatcherSets {\n\t\tmatches, err := ms.MatchWithError(r)\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\t\tif matches {\n\t\t\treturn false, nil\n\t\t}\n\t}\n\treturn true, nil\n}\n\n// MatchRegexp is an embedable type for matching\n// using regular expressions. 
It adds placeholders\n// to the request's replacer.\ntype MatchRegexp struct {\n\t// A unique name for this regular expression. Optional,\n\t// but useful to prevent overwriting captures from other\n\t// regexp matchers.\n\tName string `json:\"name,omitempty\"`\n\n\t// The regular expression to evaluate, in RE2 syntax,\n\t// which is the same general syntax used by Go, Perl,\n\t// and Python. For details, see\n\t// [Go's regexp package](https://golang.org/pkg/regexp/).\n\t// Captures are accessible via placeholders. Unnamed\n\t// capture groups are exposed as their numeric, 1-based\n\t// index, while named capture groups are available by\n\t// the capture group name.\n\tPattern string `json:\"pattern\"`\n\n\tcompiled *regexp.Regexp\n}\n\n// Provision compiles the regular expression.\nfunc (mre *MatchRegexp) Provision(caddy.Context) error {\n\tre, err := regexp.Compile(mre.Pattern)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"compiling matcher regexp %s: %v\", mre.Pattern, err)\n\t}\n\tmre.compiled = re\n\treturn nil\n}\n\n// Validate ensures mre is set up correctly.\nfunc (mre *MatchRegexp) Validate() error {\n\tif mre.Name != \"\" && !wordRE.MatchString(mre.Name) {\n\t\treturn fmt.Errorf(\"invalid regexp name (must contain only word characters): %s\", mre.Name)\n\t}\n\treturn nil\n}\n\n// Match returns true if input matches the compiled regular\n// expression in mre. 
It sets values on the replacer repl\n// associated with capture groups, using the given scope\n// (namespace).\nfunc (mre *MatchRegexp) Match(input string, repl *caddy.Replacer) bool {\n\tmatches := mre.compiled.FindStringSubmatch(input)\n\tif matches == nil {\n\t\treturn false\n\t}\n\n\t// save all capture groups, first by index\n\tfor i, match := range matches {\n\t\tkeySuffix := \".\" + strconv.Itoa(i)\n\t\tif mre.Name != \"\" {\n\t\t\trepl.Set(regexpPlaceholderPrefix+\".\"+mre.Name+keySuffix, match)\n\t\t}\n\t\trepl.Set(regexpPlaceholderPrefix+keySuffix, match)\n\t}\n\n\t// then by name\n\tfor i, name := range mre.compiled.SubexpNames() {\n\t\t// skip the first element (the full match), and empty names\n\t\tif i == 0 || name == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tkeySuffix := \".\" + name\n\t\tif mre.Name != \"\" {\n\t\t\trepl.Set(regexpPlaceholderPrefix+\".\"+mre.Name+keySuffix, matches[i])\n\t\t}\n\t\trepl.Set(regexpPlaceholderPrefix+keySuffix, matches[i])\n\t}\n\n\treturn true\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (mre *MatchRegexp) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\t// If this is the second iteration of the loop\n\t\t// then there's more than one path_regexp matcher\n\t\t// and we would end up overwriting the old one\n\t\tif mre.Pattern != \"\" {\n\t\t\treturn d.Err(\"regular expression can only be used once per named matcher\")\n\t\t}\n\n\t\targs := d.RemainingArgs()\n\t\tswitch len(args) {\n\t\tcase 1:\n\t\t\tmre.Pattern = args[0]\n\t\tcase 2:\n\t\t\tmre.Name = args[0]\n\t\t\tmre.Pattern = args[1]\n\t\tdefault:\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\t// Default to the named matcher's name, if no regexp name is provided\n\t\tif mre.Name == \"\" {\n\t\t\tmre.Name = d.GetContextString(caddyfile.MatcherNameCtxKey)\n\t\t}\n\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed path_regexp matcher: blocks are not 
supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// ParseCaddyfileNestedMatcherSet parses the Caddyfile tokens for a nested\n// matcher set, and returns its raw module map value.\nfunc ParseCaddyfileNestedMatcherSet(d *caddyfile.Dispenser) (caddy.ModuleMap, error) {\n\tmatcherMap := make(map[string]any)\n\n\t// in case there are multiple instances of the same matcher, concatenate\n\t// their tokens (we expect that UnmarshalCaddyfile should be able to\n\t// handle more than one segment); otherwise, we'd overwrite other\n\t// instances of the matcher in this set\n\ttokensByMatcherName := make(map[string][]caddyfile.Token)\n\tfor nesting := d.Nesting(); d.NextArg() || d.NextBlock(nesting); {\n\t\tmatcherName := d.Val()\n\t\ttokensByMatcherName[matcherName] = append(tokensByMatcherName[matcherName], d.NextSegment()...)\n\t}\n\n\tfor matcherName, tokens := range tokensByMatcherName {\n\t\tmod, err := caddy.GetModule(\"http.matchers.\" + matcherName)\n\t\tif err != nil {\n\t\t\treturn nil, d.Errf(\"getting matcher module '%s': %v\", matcherName, err)\n\t\t}\n\t\tunm, ok := mod.New().(caddyfile.Unmarshaler)\n\t\tif !ok {\n\t\t\treturn nil, d.Errf(\"matcher module '%s' is not a Caddyfile unmarshaler\", matcherName)\n\t\t}\n\t\terr = unm.UnmarshalCaddyfile(caddyfile.NewDispenser(tokens))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif rm, ok := unm.(RequestMatcherWithError); ok {\n\t\t\tmatcherMap[matcherName] = rm\n\t\t\tcontinue\n\t\t}\n\t\tif rm, ok := unm.(RequestMatcher); ok {\n\t\t\tmatcherMap[matcherName] = rm\n\t\t\tcontinue\n\t\t}\n\t\treturn nil, fmt.Errorf(\"matcher module '%s' is not a request matcher\", matcherName)\n\t}\n\n\t// we should now have a functional matcher, but we also\n\t// need to be able to marshal as JSON, otherwise config\n\t// adaptation will be missing the matchers!\n\tmatcherSet := make(caddy.ModuleMap)\n\tfor name, matcher := range matcherMap {\n\t\tjsonBytes, err := json.Marshal(matcher)\n\t\tif err != nil {\n\t\t\treturn nil, 
fmt.Errorf(\"marshaling %T matcher: %v\", matcher, err)\n\t\t}\n\t\tmatcherSet[name] = jsonBytes\n\t}\n\n\treturn matcherSet, nil\n}\n\nvar wordRE = regexp.MustCompile(`\\w+`)\n\nconst regexpPlaceholderPrefix = \"http.regexp\"\n\n// MatcherErrorVarKey is the key used for the variable that\n// holds an optional error emitted from a request matcher,\n// to short-circuit the handler chain, since matchers cannot\n// return errors via the RequestMatcher interface.\n//\n// Deprecated: Matchers should implement RequestMatcherWithError\n// which can return an error directly, instead of smuggling it\n// through the vars map.\nconst MatcherErrorVarKey = \"matchers.error\"\n\n// Interface guards\nvar (\n\t_ RequestMatcherWithError = (*MatchHost)(nil)\n\t_ caddy.Provisioner       = (*MatchHost)(nil)\n\t_ RequestMatcherWithError = (*MatchPath)(nil)\n\t_ RequestMatcherWithError = (*MatchPathRE)(nil)\n\t_ caddy.Provisioner       = (*MatchPathRE)(nil)\n\t_ RequestMatcherWithError = (*MatchMethod)(nil)\n\t_ RequestMatcherWithError = (*MatchQuery)(nil)\n\t_ RequestMatcherWithError = (*MatchHeader)(nil)\n\t_ RequestMatcherWithError = (*MatchHeaderRE)(nil)\n\t_ caddy.Provisioner       = (*MatchHeaderRE)(nil)\n\t_ RequestMatcherWithError = (*MatchProtocol)(nil)\n\t_ RequestMatcherWithError = (*MatchNot)(nil)\n\t_ caddy.Provisioner       = (*MatchNot)(nil)\n\t_ caddy.Provisioner       = (*MatchRegexp)(nil)\n\n\t_ caddyfile.Unmarshaler = (*MatchHost)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchPath)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchPathRE)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchMethod)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchQuery)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchHeader)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchHeaderRE)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchProtocol)(nil)\n\t_ caddyfile.Unmarshaler = (*VarsMatcher)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchVarsRE)(nil)\n\n\t_ CELLibraryProducer = (*MatchHost)(nil)\n\t_ CELLibraryProducer = (*MatchPath)(nil)\n\t_ 
CELLibraryProducer = (*MatchPathRE)(nil)\n\t_ CELLibraryProducer = (*MatchMethod)(nil)\n\t_ CELLibraryProducer = (*MatchQuery)(nil)\n\t_ CELLibraryProducer = (*MatchHeader)(nil)\n\t_ CELLibraryProducer = (*MatchHeaderRE)(nil)\n\t_ CELLibraryProducer = (*MatchProtocol)(nil)\n\t_ CELLibraryProducer = (*VarsMatcher)(nil)\n\t_ CELLibraryProducer = (*MatchVarsRE)(nil)\n\n\t_ json.Marshaler   = (*MatchNot)(nil)\n\t_ json.Unmarshaler = (*MatchNot)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/matchers_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"net/url\"\n\t\"os\"\n\t\"runtime\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestHostMatcher(t *testing.T) {\n\terr := os.Setenv(\"GO_BENCHMARK_DOMAIN\", \"localhost\")\n\tif err != nil {\n\t\tt.Errorf(\"error while setting up environment: %v\", err)\n\t}\n\n\tfor i, tc := range []struct {\n\t\tmatch  MatchHost\n\t\tinput  string\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\tmatch:  MatchHost{},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"example.com\"},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"EXAMPLE.COM\"},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"example.com\"},\n\t\t\tinput:  \"EXAMPLE.COM\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"example.com\"},\n\t\t\tinput:  \"foo.example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"example.com\"},\n\t\t\tinput:  \"EXAMPLE.COM\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"foo.example.com\"},\n\t\t\tinput:  \"foo.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"foo.example.com\"},\n\t\t\tinput:  
\"bar.example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"éxàmplê.com\"},\n\t\t\tinput:  \"xn--xmpl-0na6cm.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.com\"},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.com\"},\n\t\t\tinput:  \"SUB.EXAMPLE.COM\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.com\"},\n\t\t\tinput:  \"foo.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.com\"},\n\t\t\tinput:  \"foo.bar.example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.com\", \"example.net\"},\n\t\t\tinput:  \"example.net\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"example.net\", \"*.example.com\"},\n\t\t\tinput:  \"foo.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.net\", \"*.*.example.com\"},\n\t\t\tinput:  \"foo.bar.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.net\", \"sub.*.example.com\"},\n\t\t\tinput:  \"sub.foo.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"*.example.net\", \"sub.*.example.com\"},\n\t\t\tinput:  \"sub.foo.example.net\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"www.*.*\"},\n\t\t\tinput:  \"www.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"example.com\"},\n\t\t\tinput:  \"example.com:5555\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"{env.GO_BENCHMARK_DOMAIN}\"},\n\t\t\tinput:  \"localhost\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHost{\"{env.GO_NONEXISTENT}\"},\n\t\t\tinput:  \"localhost\",\n\t\t\texpect: false,\n\t\t},\n\t} {\n\t\treq := &http.Request{Host: tc.input}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = 
req.WithContext(ctx)\n\n\t\tif err := tc.match.Provision(caddy.Context{}); err != nil {\n\t\t\tt.Errorf(\"Test %d %v: provisioning failed: %v\", i, tc.match, err)\n\t\t}\n\n\t\tactual, err := tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d %v: Expected %t, got %t for '%s'\", i, tc.match, tc.expect, actual, tc.input)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestPathMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tmatch        MatchPath // not URI-encoded because not parsing from a URI\n\t\tinput        string    // should be valid URI encoding (escaped) since it will become part of a request\n\t\texpect       bool\n\t\tprovisionErr bool\n\t}{\n\t\t{\n\t\t\tmatch:  MatchPath{},\n\t\t\tinput:  \"/\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/\"},\n\t\t\tinput:  \"/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar\"},\n\t\t\tinput:  \"/\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar\"},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar/\"},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar/\"},\n\t\t\tinput:  \"/foo/bar/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar/\", \"/other\"},\n\t\t\tinput:  \"/other/\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar/\", \"/other\"},\n\t\t\tinput:  \"/other\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"*.ext\"},\n\t\t\tinput:  \"/foo/bar.ext\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"*.php\"},\n\t\t\tinput:  \"/index.PHP\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"*.ext\"},\n\t\t\tinput:  \"/foo/bar.ext\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  
MatchPath{\"/foo/*/baz\"},\n\t\t\tinput:  \"/foo/bar/baz\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/*/baz/bam\"},\n\t\t\tinput:  \"/foo/bar/bam\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"*substring*\"},\n\t\t\tinput:  \"/foo/substring/bar.txt\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo\"},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo\"},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo\"},\n\t\t\tinput:  \"/FOO\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo*\"},\n\t\t\tinput:  \"/FOOOO\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar.txt\"},\n\t\t\tinput:  \"/foo/BAR.txt\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo*\"},\n\t\t\tinput:  \"//foo/bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo\"},\n\t\t\tinput:  \"//foo\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"//foo\"},\n\t\t\tinput:  \"/foo\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"//foo\"},\n\t\t\tinput:  \"//foo\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo//*\"},\n\t\t\tinput:  \"/foo//bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo//*\"},\n\t\t\tinput:  \"/foo/%2Fbar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/%2F*\"},\n\t\t\tinput:  \"/foo/%2Fbar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/%2F*\"},\n\t\t\tinput:  \"/foo//bar\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo//bar\"},\n\t\t\tinput:  \"/foo//bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/*//bar\"},\n\t\t\tinput:  \"/foo///bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/%*//bar\"},\n\t\t\tinput:  \"/foo///bar\",\n\t\t\texpect: 
true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/%*//bar\"},\n\t\t\tinput:  \"/foo//%2Fbar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo*\"},\n\t\t\tinput:  \"/%2F/foo\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"*\"},\n\t\t\tinput:  \"/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"*\"},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"**\"},\n\t\t\tinput:  \"/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"**\"},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: true,\n\t\t},\n\t\t// notice these next three test cases are the same normalized path but are written differently\n\t\t{\n\t\t\tmatch:  MatchPath{\"/%25@.txt\"},\n\t\t\tinput:  \"/%25@.txt\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/%25@.txt\"},\n\t\t\tinput:  \"/%25%40.txt\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/%25%40.txt\"},\n\t\t\tinput:  \"/%25%40.txt\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/bands/*/*\"},\n\t\t\tinput:  \"/bands/AC%2FDC/T.N.T\",\n\t\t\texpect: false, // because * operates in normalized space\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/bands/%*/%*\"},\n\t\t\tinput:  \"/bands/AC%2FDC/T.N.T\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/bands/%*/%*\"},\n\t\t\tinput:  \"/bands/AC/DC/T.N.T\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/bands/%*\"},\n\t\t\tinput:  \"/bands/AC/DC\",\n\t\t\texpect: false, // not a suffix match\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/bands/%*\"},\n\t\t\tinput:  \"/bands/AC%2FDC\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo%2fbar/baz\"},\n\t\t\tinput:  \"/foo%2Fbar/baz\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo%2fbar/baz\"},\n\t\t\tinput:  \"/foo/bar/baz\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/foo/bar/baz\"},\n\t\t\tinput:  
\"/foo%2fbar/baz\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/admin%2fpanel\"},\n\t\t\tinput:  \"/ADMIN%2fpanel\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPath{\"/admin%2fpa*el\"},\n\t\t\tinput:  \"/ADMIN%2fPaAzZLm123NEL\",\n\t\t\texpect: true,\n\t\t},\n\t} {\n\t\terr := tc.match.Provision(caddy.Context{})\n\t\tif err == nil && tc.provisionErr {\n\t\t\tt.Errorf(\"Test %d %v: Expected error provisioning, but there was no error\", i, tc.match)\n\t\t}\n\t\tif err != nil && !tc.provisionErr {\n\t\t\tt.Errorf(\"Test %d %v: Expected no error provisioning, but there was an error: %v\", i, tc.match, err)\n\t\t}\n\t\tif tc.provisionErr {\n\t\t\tcontinue // if it's not supposed to provision properly, pointless to test it\n\t\t}\n\n\t\tu, err := url.ParseRequestURI(tc.input)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d (%v): Invalid request URI (should be rejected by Go's HTTP server): %v\", i, tc.input, err)\n\t\t}\n\t\treq := &http.Request{URL: u}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\n\t\tactual, err := tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d %v: Expected %t, got %t for '%s'\", i, tc.match, tc.expect, actual, tc.input)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestPathMatcherWindows(t *testing.T) {\n\t// only Windows has this bug where it will ignore\n\t// trailing dots and spaces in a filename\n\tif runtime.GOOS != \"windows\" {\n\t\treturn\n\t}\n\n\treq := &http.Request{URL: &url.URL{Path: \"/index.php . . 
..\"}}\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\treq = req.WithContext(ctx)\n\n\tmatch := MatchPath{\"*.php\"}\n\tmatched, err := match.MatchWithError(req)\n\tif err != nil {\n\t\tt.Errorf(\"Expected no error, but got: %v\", err)\n\t}\n\tif !matched {\n\t\tt.Errorf(\"Expected to match; should ignore trailing dots and spaces\")\n\t}\n}\n\nfunc TestPathREMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tmatch      MatchPathRE\n\t\tinput      string\n\t\texpect     bool\n\t\texpectRepl map[string]string\n\t}{\n\t\t{\n\t\t\tmatch:  MatchPathRE{},\n\t\t\tinput:  \"/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"/\"}},\n\t\t\tinput:  \"/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/foo\"}},\n\t\t\tinput:  \"/foo\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/foo\"}},\n\t\t\tinput:  \"/foo/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/foo\"}},\n\t\t\tinput:  \"//foo\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/foo\"}},\n\t\t\tinput:  \"//foo/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/foo\"}},\n\t\t\tinput:  \"/%2F/foo/\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"/bar\"}},\n\t\t\tinput:  \"/foo/\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/bar\"}},\n\t\t\tinput:  \"/foo/bar\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:      MatchPathRE{MatchRegexp{Pattern: \"^/foo/(.*)/baz$\", Name: \"name\"}},\n\t\t\tinput:      \"/foo/bar/baz\",\n\t\t\texpect:     true,\n\t\t\texpectRepl: map[string]string{\"name.1\": \"bar\"},\n\t\t},\n\t\t{\n\t\t\tmatch:      MatchPathRE{MatchRegexp{Pattern: \"^/foo/(?P<myparam>.*)/baz$\", Name: 
\"name\"}},\n\t\t\tinput:      \"/foo/bar/baz\",\n\t\t\texpect:     true,\n\t\t\texpectRepl: map[string]string{\"name.myparam\": \"bar\"},\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/%@.txt\"}},\n\t\t\tinput:  \"/%25@.txt\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchPathRE{MatchRegexp{Pattern: \"^/%25@.txt\"}},\n\t\t\tinput:  \"/%25@.txt\",\n\t\t\texpect: false,\n\t\t},\n\t} {\n\t\t// compile the regexp and validate its name\n\t\terr := tc.match.Provision(caddy.Context{})\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: Provisioning: %v\", i, tc.match, err)\n\t\t\tcontinue\n\t\t}\n\t\terr = tc.match.Validate()\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: Validating: %v\", i, tc.match, err)\n\t\t\tcontinue\n\t\t}\n\n\t\t// set up the fake request and its Replacer\n\t\tu, err := url.ParseRequestURI(tc.input)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d: Bad input URI: %v\", i, err)\n\t\t}\n\t\treq := &http.Request{URL: u}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\t\taddHTTPVarsToReplacer(repl, req, httptest.NewRecorder())\n\n\t\tactual, err := tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d [%v]: Expected %t, got %t for input '%s'\",\n\t\t\t\ti, tc.match.Pattern, tc.expect, actual, tc.input)\n\t\t\tcontinue\n\t\t}\n\n\t\tfor key, expectVal := range tc.expectRepl {\n\t\t\tplaceholder := fmt.Sprintf(\"{http.regexp.%s}\", key)\n\t\t\tactualVal := repl.ReplaceAll(placeholder, \"<empty>\")\n\t\t\tif actualVal != expectVal {\n\t\t\t\tt.Errorf(\"Test %d [%v]: Expected placeholder {http.regexp.%s} to be '%s' but got '%s'\",\n\t\t\t\t\ti, tc.match.Pattern, key, expectVal, actualVal)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestHeaderMatcher(t *testing.T) {\n\trepl := 
caddy.NewReplacer()\n\trepl.Set(\"a\", \"foobar\")\n\n\tfor i, tc := range []struct {\n\t\tmatch  MatchHeader\n\t\tinput  http.Header // make sure these are canonical cased (std lib will do that in a real request)\n\t\thost   string\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field\": []string{\"foo\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"foo\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"bar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\tinput:  http.Header{\"Alakazam\": []string{\"kapow\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"kapow\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field\": []string{\"foo\", \"bar\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"kapow\", \"foo\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field1\": []string{\"foo\"}, \"Field2\": []string{\"bar\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"foo\"}, \"Field2\": []string{\"bar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"field1\": []string{\"foo\"}, \"field2\": []string{\"bar\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"foo\"}, \"Field2\": []string{\"bar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"field1\": []string{\"foo\"}, \"field2\": []string{\"bar\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"foo\"}, \"Field2\": []string{\"kapow\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"field1\": []string{\"*\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"foo\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"field1\": []string{\"*\"}},\n\t\t\tinput:  http.Header{\"Field2\": 
[]string{\"foo\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field1\": []string{\"foo*\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"foo\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field1\": []string{\"foo*\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"asdf\", \"foobar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Field1\": []string{\"*bar\"}},\n\t\t\tinput:  http.Header{\"Field1\": []string{\"asdf\", \"foobar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"host\": []string{\"localhost\"}},\n\t\t\tinput:  http.Header{},\n\t\t\thost:   \"localhost\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"host\": []string{\"localhost\"}},\n\t\t\tinput:  http.Header{},\n\t\t\thost:   \"caddyserver.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Must-Not-Exist\": nil},\n\t\t\tinput:  http.Header{},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Must-Not-Exist\": nil},\n\t\t\tinput:  http.Header{\"Must-Not-Exist\": []string{\"do not match\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Foo\": []string{\"{a}\"}},\n\t\t\tinput:  http.Header{\"Foo\": []string{\"foobar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Foo\": []string{\"{a}\"}},\n\t\t\tinput:  http.Header{\"Foo\": []string{\"asdf\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeader{\"Foo\": []string{\"{a}*\"}},\n\t\t\tinput:  http.Header{\"Foo\": []string{\"foobar-baz\"}},\n\t\t\texpect: true,\n\t\t},\n\t} {\n\t\treq := &http.Request{Header: tc.input, Host: tc.host}\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\n\t\tactual, err := tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d %v: Expected 
%t, got %t for '%s'\", i, tc.match, tc.expect, actual, tc.input)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestQueryMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tscenario string\n\t\tmatch    MatchQuery\n\t\tinput    string\n\t\texpect   bool\n\t}{\n\t\t{\n\t\t\tscenario: \"non-match against a specific value\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}},\n\t\t\tinput:    \"/\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against a specific value\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}},\n\t\t\tinput:    \"/?debug=1\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against a wildcard\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"*\"}},\n\t\t\tinput:    \"/?debug=something\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"non-match against a wildcard\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"*\"}},\n\t\t\tinput:    \"/?other=something\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against an empty value\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"\"}},\n\t\t\tinput:    \"/?debug\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"non-match against an empty value\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"\"}},\n\t\t\tinput:    \"/?someparam\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"empty matcher should match an empty query\",\n\t\t\tmatch:    MatchQuery{},\n\t\t\tinput:    \"/?\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"empty matcher should NOT match a non-empty query\",\n\t\t\tmatch:    MatchQuery{},\n\t\t\tinput:    \"/?foo=bar\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"non-empty matcher should NOT match an empty query\",\n\t\t\tmatch:    MatchQuery{\"\": nil},\n\t\t\tinput:    \"/?\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against a placeholder value\",\n\t\t\tmatch:    MatchQuery{\"debug\": 
[]string{\"{http.vars.debug}\"}},\n\t\t\tinput:    \"/?debug=1\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against a placeholder key\",\n\t\t\tmatch:    MatchQuery{\"{http.vars.key}\": []string{\"1\"}},\n\t\t\tinput:    \"/?somekey=1\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"do not match when not all query params are present\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}, \"foo\": []string{\"bar\"}},\n\t\t\tinput:    \"/?debug=1\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match when all query params are present\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}, \"foo\": []string{\"bar\"}},\n\t\t\tinput:    \"/?debug=1&foo=bar\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"do not match when the value of a query param does not match\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}, \"foo\": []string{\"bar\"}},\n\t\t\tinput:    \"/?debug=2&foo=bar\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"do not match when all the values of the query params do not match\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}, \"foo\": []string{\"bar\"}},\n\t\t\tinput:    \"/?debug=2&foo=baz\",\n\t\t\texpect:   false,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against two values for the same key\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"1\"}},\n\t\t\tinput:    \"/?debug=1&debug=2\",\n\t\t\texpect:   true,\n\t\t},\n\t\t{\n\t\t\tscenario: \"match against two matcher values for the same key\",\n\t\t\tmatch:    MatchQuery{\"debug\": []string{\"2\", \"1\"}},\n\t\t\tinput:    \"/?debug=2&debug=1\",\n\t\t\texpect:   true,\n\t\t},\n\t} {\n\n\t\tu, err := url.Parse(tc.input)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d (%s): parsing input URL: %v\", i, tc.scenario, err)\n\t\t}\n\n\t\treq := &http.Request{URL: u}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\trepl.Set(\"http.vars.debug\", \"1\")\n\t\trepl.Set(\"http.vars.key\", \"somekey\")\n\t\treq = req.WithContext(ctx)\n\t\tactual, err := 
tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d %v: Expected %t, got %t for '%s'\", i, tc.match, tc.expect, actual, tc.input)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc TestHeaderREMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tmatch      MatchHeaderRE\n\t\tinput      http.Header // make sure these are canonical cased (std lib will do that in a real request)\n\t\thost       string\n\t\texpect     bool\n\t\texpectRepl map[string]string\n\t}{\n\t\t{\n\t\t\tmatch:  MatchHeaderRE{\"Field\": &MatchRegexp{Pattern: \"foo\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"foo\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeaderRE{\"Field\": &MatchRegexp{Pattern: \"$foo^\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"foobar\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tmatch:      MatchHeaderRE{\"Field\": &MatchRegexp{Pattern: \"^foo(.*)$\", Name: \"name\"}},\n\t\t\tinput:      http.Header{\"Field\": []string{\"foobar\"}},\n\t\t\texpect:     true,\n\t\t\texpectRepl: map[string]string{\"name.1\": \"bar\"},\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeaderRE{\"Field\": &MatchRegexp{Pattern: \"^foo.*$\", Name: \"name\"}},\n\t\t\tinput:  http.Header{\"Field\": []string{\"barfoo\", \"foobar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeaderRE{\"host\": &MatchRegexp{Pattern: \"^localhost$\", Name: \"name\"}},\n\t\t\tinput:  http.Header{},\n\t\t\thost:   \"localhost\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tmatch:  MatchHeaderRE{\"host\": &MatchRegexp{Pattern: \"^local$\", Name: \"name\"}},\n\t\t\tinput:  http.Header{},\n\t\t\thost:   \"localhost\",\n\t\t\texpect: false,\n\t\t},\n\t} {\n\t\t// compile the regexp and validate its name\n\t\terr := tc.match.Provision(caddy.Context{})\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: Provisioning: %v\", i, tc.match, 
err)\n\t\t\tcontinue\n\t\t}\n\t\terr = tc.match.Validate()\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: Validating: %v\", i, tc.match, err)\n\t\t\tcontinue\n\t\t}\n\n\t\t// set up the fake request and its Replacer\n\t\treq := &http.Request{Header: tc.input, URL: new(url.URL), Host: tc.host}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\t\taddHTTPVarsToReplacer(repl, req, httptest.NewRecorder())\n\n\t\tactual, err := tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d [%v]: Expected %t, got %t for input '%s'\",\n\t\t\t\ti, tc.match, tc.expect, actual, tc.input)\n\t\t\tcontinue\n\t\t}\n\n\t\tfor key, expectVal := range tc.expectRepl {\n\t\t\tplaceholder := fmt.Sprintf(\"{http.regexp.%s}\", key)\n\t\t\tactualVal := repl.ReplaceAll(placeholder, \"<empty>\")\n\t\t\tif actualVal != expectVal {\n\t\t\t\tt.Errorf(\"Test %d [%v]: Expected placeholder {http.regexp.%s} to be '%s' but got '%s'\",\n\t\t\t\t\ti, tc.match, key, expectVal, actualVal)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc BenchmarkHeaderREMatcher(b *testing.B) {\n\ti := 0\n\tmatch := MatchHeaderRE{\"Field\": &MatchRegexp{Pattern: \"^foo(.*)$\", Name: \"name\"}}\n\tinput := http.Header{\"Field\": []string{\"foobar\"}}\n\tvar host string\n\terr := match.Provision(caddy.Context{})\n\tif err != nil {\n\t\tb.Errorf(\"Test %d %v: Provisioning: %v\", i, match, err)\n\t}\n\terr = match.Validate()\n\tif err != nil {\n\t\tb.Errorf(\"Test %d %v: Validating: %v\", i, match, err)\n\t}\n\n\t// set up the fake request and its Replacer\n\treq := &http.Request{Header: input, URL: new(url.URL), Host: host}\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\treq = req.WithContext(ctx)\n\taddHTTPVarsToReplacer(repl, req, 
httptest.NewRecorder())\n\tfor b.Loop() {\n\t\tmatch.MatchWithError(req)\n\t}\n}\n\nfunc TestVarREMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tdesc       string\n\t\tmatch      MatchVarsRE\n\t\tinput      VarsMiddleware\n\t\theaders    http.Header\n\t\texpect     bool\n\t\texpectRepl map[string]string\n\t}{\n\t\t{\n\t\t\tdesc:   \"match static value within var set by the VarsMiddleware succeeds\",\n\t\t\tmatch:  MatchVarsRE{\"Var1\": &MatchRegexp{Pattern: \"foo\"}},\n\t\t\tinput:  VarsMiddleware{\"Var1\": \"here is foo val\"},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tdesc:   \"value set by VarsMiddleware not satisfying regexp matcher fails to match\",\n\t\t\tmatch:  MatchVarsRE{\"Var1\": &MatchRegexp{Pattern: \"$foo^\"}},\n\t\t\tinput:  VarsMiddleware{\"Var1\": \"foobar\"},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tdesc:       \"successfully matched value is captured and its placeholder is added to replacer\",\n\t\t\tmatch:      MatchVarsRE{\"Var1\": &MatchRegexp{Pattern: \"^foo(.*)$\", Name: \"name\"}},\n\t\t\tinput:      VarsMiddleware{\"Var1\": \"foobar\"},\n\t\t\texpect:     true,\n\t\t\texpectRepl: map[string]string{\"name.1\": \"bar\"},\n\t\t},\n\t\t{\n\t\t\tdesc:   \"matching against a value of standard variables succeeds\",\n\t\t\tmatch:  MatchVarsRE{\"{http.request.method}\": &MatchRegexp{Pattern: \"^G.[tT]$\"}},\n\t\t\tinput:  VarsMiddleware{},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tdesc:   \"matching against value of var set by the VarsMiddleware and referenced by its placeholder succeeds\",\n\t\t\tmatch:  MatchVarsRE{\"{http.vars.Var1}\": &MatchRegexp{Pattern: \"[vV]ar[0-9]\"}},\n\t\t\tinput:  VarsMiddleware{\"Var1\": \"var1Value\"},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tdesc:       \"placeholder key value containing braces is not double-expanded\",\n\t\t\tmatch:      MatchVarsRE{\"{http.request.header.X-Input}\": &MatchRegexp{Pattern: \".+\", Name: \"val\"}},\n\t\t\tinput:      VarsMiddleware{},\n\t\t\theaders:    
http.Header{\"X-Input\": []string{\"{env.HOME}\"}},\n\t\t\texpect:     true,\n\t\t\texpectRepl: map[string]string{\"val.0\": \"{env.HOME}\"},\n\t\t},\n\t} {\n\t\tt.Run(tc.desc, func(t *testing.T) {\n\t\t\tt.Parallel()\n\t\t\t// compile the regexp and validate its name\n\t\t\terr := tc.match.Provision(caddy.Context{})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"Test %d %v: Provisioning: %v\", i, tc.match, err)\n\t\t\t\treturn\n\t\t\t}\n\t\t\terr = tc.match.Validate()\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"Test %d %v: Validating: %v\", i, tc.match, err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// set up the fake request and its Replacer\n\t\t\treq := &http.Request{URL: new(url.URL), Method: http.MethodGet, Header: tc.headers}\n\t\t\trepl := caddy.NewReplacer()\n\t\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\t\tctx = context.WithValue(ctx, VarsCtxKey, make(map[string]any))\n\t\t\treq = req.WithContext(ctx)\n\n\t\t\taddHTTPVarsToReplacer(repl, req, httptest.NewRecorder())\n\n\t\t\ttc.input.ServeHTTP(httptest.NewRecorder(), req, emptyHandler)\n\n\t\t\tactual, err := tc.match.MatchWithError(req)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t\t}\n\t\t\tif actual != tc.expect {\n\t\t\t\tt.Errorf(\"Test %d [%v]: Expected %t, got %t for input '%s'\",\n\t\t\t\t\ti, tc.match, tc.expect, actual, tc.input)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tfor key, expectVal := range tc.expectRepl {\n\t\t\t\tplaceholder := fmt.Sprintf(\"{http.regexp.%s}\", key)\n\t\t\t\tactualVal := repl.ReplaceAll(placeholder, \"<empty>\")\n\t\t\t\tif actualVal != expectVal {\n\t\t\t\t\tt.Errorf(\"Test %d [%v]: Expected placeholder {http.regexp.%s} to be '%s' but got '%s'\",\n\t\t\t\t\t\ti, tc.match, key, expectVal, actualVal)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestNotMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\thost, path string\n\t\tmatch      MatchNot\n\t\texpect     
bool\n\t}{\n\t\t{\n\t\t\thost: \"example.com\", path: \"/\",\n\t\t\tmatch:  MatchNot{},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/foo\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/foo\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/bar\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/foo\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/bar\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/foo\"},\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchHost{\"example.com\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/bar\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/bar\"},\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchHost{\"example.com\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/foo\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/bar\"},\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchHost{\"sub.example.com\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/foo\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/foo\"},\n\t\t\t\t\t\tMatchHost{\"example.com\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\thost: \"example.com\", path: \"/foo\",\n\t\t\tmatch: MatchNot{\n\t\t\t\tMatcherSets: []MatcherSet{\n\t\t\t\t\t{\n\t\t\t\t\t\tMatchPath{\"/bar\"},\n\t\t\t\t\t\tMatchHost{\"example.com\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpect: true,\n\t\t},\n\t} 
{\n\t\treq := &http.Request{Host: tc.host, URL: &url.URL{Path: tc.path}}\n\t\trepl := caddy.NewReplacer()\n\t\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t\treq = req.WithContext(ctx)\n\n\t\tactual, err := tc.match.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d %v: matching failed: %v\", i, tc.match, err)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d %+v: Expected %t, got %t for: host=%s path=%s\", i, tc.match, tc.expect, actual, tc.host, tc.path)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc BenchmarkLargeHostMatcher(b *testing.B) {\n\t// this benchmark simulates a large host matcher (thousands of entries) where each\n\t// value is an exact hostname (not a placeholder or wildcard) - compare the results\n\t// of this with and without the binary search (comment out the various fast path\n\t// sections in Match) to conduct experiments\n\n\tconst n = 10000\n\tlastHost := fmt.Sprintf(\"%d.example.com\", n-1)\n\treq := &http.Request{Host: lastHost}\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\treq = req.WithContext(ctx)\n\n\tmatcher := make(MatchHost, n)\n\tfor i := 0; i < n; i++ {\n\t\tmatcher[i] = fmt.Sprintf(\"%d.example.com\", i)\n\t}\n\terr := matcher.Provision(caddy.Context{})\n\tif err != nil {\n\t\tb.Fatal(err)\n\t}\n\n\tfor b.Loop() {\n\t\tmatcher.MatchWithError(req)\n\t}\n}\n\nfunc BenchmarkHostMatcherWithoutPlaceholder(b *testing.B) {\n\treq := &http.Request{Host: \"localhost\"}\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\treq = req.WithContext(ctx)\n\n\tmatch := MatchHost{\"localhost\"}\n\n\tfor b.Loop() {\n\t\tmatch.MatchWithError(req)\n\t}\n}\n\nfunc BenchmarkHostMatcherWithPlaceholder(b *testing.B) {\n\terr := os.Setenv(\"GO_BENCHMARK_DOMAIN\", \"localhost\")\n\tif err != nil {\n\t\tb.Errorf(\"error while setting up environment: %v\", err)\n\t}\n\n\treq := &http.Request{Host: 
\"localhost\"}\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\treq = req.WithContext(ctx)\n\tmatch := MatchHost{\"{env.GO_BENCHMARK_DOMAIN}\"}\n\n\tfor b.Loop() {\n\t\tmatch.MatchWithError(req)\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/metrics.go",
"content": "package caddyhttp\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/prometheus/client_golang/prometheus\"\n\t\"github.com/prometheus/client_golang/prometheus/promauto\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/internal/metrics\"\n)\n\n// Metrics configures metrics observations.\n// EXPERIMENTAL and subject to change or removal.\n//\n// Example configuration:\n//\n//\t{\n//\t\t\"apps\": {\n//\t\t\t\"http\": {\n//\t\t\t\t\"metrics\": {\n//\t\t\t\t\t\"per_host\": true,\n//\t\t\t\t\t\"observe_catchall_hosts\": false\n//\t\t\t\t},\n//\t\t\t\t\"servers\": {\n//\t\t\t\t\t\"srv0\": {\n//\t\t\t\t\t\t\"routes\": [{\n//\t\t\t\t\t\t\t\"match\": [{\"host\": [\"example.com\", \"www.example.com\"]}],\n//\t\t\t\t\t\t\t\"handle\": [{\"handler\": \"static_response\", \"body\": \"Hello\"}]\n//\t\t\t\t\t\t}]\n//\t\t\t\t\t}\n//\t\t\t\t}\n//\t\t\t}\n//\t\t}\n//\t}\n//\n// In this configuration:\n// - Requests to example.com and www.example.com get individual host labels\n// - All other hosts (e.g., attacker.com) are aggregated under the \"_other\" label\n// - This prevents unlimited cardinality from arbitrary Host headers\ntype Metrics struct {\n\t// Enable per-host metrics. Enabling this option may\n\t// incur high memory consumption, depending on the number of hosts\n\t// managed by Caddy.\n\t//\n\t// CARDINALITY PROTECTION: To prevent unbounded cardinality attacks,\n\t// only explicitly configured hosts (via host matchers) are allowed\n\t// by default. Other hosts are aggregated under the \"_other\" label.\n\t// See ObserveCatchallHosts to change this behavior.\n\tPerHost bool `json:\"per_host,omitempty\"`\n\n\t// Allow metrics for catch-all hosts (hosts without explicit configuration).\n\t// When false (default), only hosts explicitly configured via host matchers\n\t// will get individual metrics labels. 
All other hosts will be aggregated\n\t// under the \"_other\" label to prevent cardinality explosion.\n\t//\n\t// This is automatically enabled for HTTPS servers (since certificates provide\n\t// some protection against unbounded cardinality), but disabled for HTTP servers\n\t// by default to prevent cardinality attacks from arbitrary Host headers.\n\t//\n\t// Set to true to allow all hosts to get individual metrics (NOT RECOMMENDED\n\t// for production environments exposed to the internet).\n\tObserveCatchallHosts bool `json:\"observe_catchall_hosts,omitempty\"`\n\n\tinit           sync.Once\n\thttpMetrics    *httpMetrics\n\tallowedHosts   map[string]struct{}\n\thasHTTPSServer bool\n}\n\ntype httpMetrics struct {\n\trequestInFlight  *prometheus.GaugeVec\n\trequestCount     *prometheus.CounterVec\n\trequestErrors    *prometheus.CounterVec\n\trequestDuration  *prometheus.HistogramVec\n\trequestSize      *prometheus.HistogramVec\n\tresponseSize     *prometheus.HistogramVec\n\tresponseDuration *prometheus.HistogramVec\n}\n\nfunc initHTTPMetrics(ctx caddy.Context, metrics *Metrics) {\n\tconst ns, sub = \"caddy\", \"http\"\n\tregistry := ctx.GetMetricsRegistry()\n\tbasicLabels := []string{\"server\", \"handler\"}\n\tif metrics.PerHost {\n\t\tbasicLabels = append(basicLabels, \"host\")\n\t}\n\tmetrics.httpMetrics.requestInFlight = promauto.With(registry).NewGaugeVec(prometheus.GaugeOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"requests_in_flight\",\n\t\tHelp:      \"Number of requests currently handled by this server.\",\n\t}, basicLabels)\n\tmetrics.httpMetrics.requestErrors = promauto.With(registry).NewCounterVec(prometheus.CounterOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"request_errors_total\",\n\t\tHelp:      \"Number of requests resulting in middleware errors.\",\n\t}, basicLabels)\n\tmetrics.httpMetrics.requestCount = promauto.With(registry).NewCounterVec(prometheus.CounterOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: 
sub,\n\t\tName:      \"requests_total\",\n\t\tHelp:      \"Counter of HTTP(S) requests made.\",\n\t}, basicLabels)\n\n\t// TODO: allow these to be customized in the config\n\tdurationBuckets := prometheus.DefBuckets\n\tsizeBuckets := prometheus.ExponentialBuckets(256, 4, 8)\n\n\thttpLabels := []string{\"server\", \"handler\", \"code\", \"method\"}\n\tif metrics.PerHost {\n\t\thttpLabels = append(httpLabels, \"host\")\n\t}\n\tmetrics.httpMetrics.requestDuration = promauto.With(registry).NewHistogramVec(prometheus.HistogramOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"request_duration_seconds\",\n\t\tHelp:      \"Histogram of round-trip request durations.\",\n\t\tBuckets:   durationBuckets,\n\t}, httpLabels)\n\tmetrics.httpMetrics.requestSize = promauto.With(registry).NewHistogramVec(prometheus.HistogramOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"request_size_bytes\",\n\t\tHelp:      \"Total size of the request. Includes body\",\n\t\tBuckets:   sizeBuckets,\n\t}, httpLabels)\n\tmetrics.httpMetrics.responseSize = promauto.With(registry).NewHistogramVec(prometheus.HistogramOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"response_size_bytes\",\n\t\tHelp:      \"Size of the returned response.\",\n\t\tBuckets:   sizeBuckets,\n\t}, httpLabels)\n\tmetrics.httpMetrics.responseDuration = promauto.With(registry).NewHistogramVec(prometheus.HistogramOpts{\n\t\tNamespace: ns,\n\t\tSubsystem: sub,\n\t\tName:      \"response_duration_seconds\",\n\t\tHelp:      \"Histogram of times to first byte in response bodies.\",\n\t\tBuckets:   durationBuckets,\n\t}, httpLabels)\n}\n\n// scanConfigForHosts scans the HTTP app configuration to build a set of allowed hosts\n// for metrics collection, similar to how auto-HTTPS scans for domain names.\nfunc (m *Metrics) scanConfigForHosts(app *App) {\n\tif !m.PerHost {\n\t\treturn\n\t}\n\n\tm.allowedHosts = make(map[string]struct{})\n\tm.hasHTTPSServer = false\n\n\tfor _, srv := range 
app.Servers {\n\t\t// Check if this server has TLS enabled\n\t\tserverHasTLS := len(srv.TLSConnPolicies) > 0\n\t\tif serverHasTLS {\n\t\t\tm.hasHTTPSServer = true\n\t\t}\n\n\t\t// Collect hosts from route matchers\n\t\tfor _, route := range srv.Routes {\n\t\t\tfor _, matcherSet := range route.MatcherSets {\n\t\t\t\tfor _, matcher := range matcherSet {\n\t\t\t\t\tif hm, ok := matcher.(*MatchHost); ok {\n\t\t\t\t\t\tfor _, host := range *hm {\n\t\t\t\t\t\t\t// Only allow non-fuzzy hosts to prevent unbounded cardinality\n\t\t\t\t\t\t\tif !hm.fuzzy(host) {\n\t\t\t\t\t\t\t\tm.allowedHosts[strings.ToLower(host)] = struct{}{}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n// shouldAllowHostMetrics determines if metrics should be collected for the given host.\n// This implements the cardinality protection by only allowing metrics for:\n// 1. Explicitly configured hosts\n// 2. Catch-all requests on HTTPS servers (if ObserveCatchallHosts is true or auto-enabled)\n// 3. 
Catch-all requests on HTTP servers only if explicitly allowed\nfunc (m *Metrics) shouldAllowHostMetrics(host string, isHTTPS bool) bool {\n\tif !m.PerHost {\n\t\treturn true // host won't be used in labels anyway\n\t}\n\n\tnormalizedHost := strings.ToLower(host)\n\n\t// Always allow explicitly configured hosts\n\tif _, exists := m.allowedHosts[normalizedHost]; exists {\n\t\treturn true\n\t}\n\n\t// For catch-all requests (not in allowed hosts)\n\tallowCatchAll := m.ObserveCatchallHosts || (isHTTPS && m.hasHTTPSServer)\n\treturn allowCatchAll\n}\n\n// serverNameFromContext extracts the current server name from the context.\n// Returns \"UNKNOWN\" if none is available (should probably never happen).\nfunc serverNameFromContext(ctx context.Context) string {\n\tsrv, ok := ctx.Value(ServerCtxKey).(*Server)\n\tif !ok || srv == nil || srv.name == \"\" {\n\t\treturn \"UNKNOWN\"\n\t}\n\treturn srv.name\n}\n\n// metricsInstrumentedRoute wraps a compiled route Handler with metrics\n// instrumentation. 
It wraps the entire compiled route chain once,\n// collecting metrics only once per route match.\ntype metricsInstrumentedRoute struct {\n\thandler string\n\tnext    Handler\n\tmetrics *Metrics\n}\n\nfunc newMetricsInstrumentedRoute(ctx caddy.Context, handler string, next Handler, m *Metrics) *metricsInstrumentedRoute {\n\tm.init.Do(func() {\n\t\tinitHTTPMetrics(ctx, m)\n\t})\n\n\treturn &metricsInstrumentedRoute{handler: handler, next: next, metrics: m}\n}\n\nfunc (h *metricsInstrumentedRoute) ServeHTTP(w http.ResponseWriter, r *http.Request) error {\n\tserver := serverNameFromContext(r.Context())\n\tlabels := prometheus.Labels{\"server\": server, \"handler\": h.handler}\n\tmethod := metrics.SanitizeMethod(r.Method)\n\t// the \"code\" value is set later, but initialized here to eliminate the possibility\n\t// of a panic\n\tstatusLabels := prometheus.Labels{\"server\": server, \"handler\": h.handler, \"method\": method, \"code\": \"\"}\n\n\t// Determine if this is an HTTPS request\n\tisHTTPS := r.TLS != nil\n\n\tif h.metrics.PerHost {\n\t\t// Apply cardinality protection for host metrics\n\t\tif h.metrics.shouldAllowHostMetrics(r.Host, isHTTPS) {\n\t\t\tlabels[\"host\"] = strings.ToLower(r.Host)\n\t\t\tstatusLabels[\"host\"] = strings.ToLower(r.Host)\n\t\t} else {\n\t\t\t// Use a catch-all label for unallowed hosts to prevent cardinality explosion\n\t\t\tlabels[\"host\"] = \"_other\"\n\t\t\tstatusLabels[\"host\"] = \"_other\"\n\t\t}\n\t}\n\n\tinFlight := h.metrics.httpMetrics.requestInFlight.With(labels)\n\tinFlight.Inc()\n\tdefer inFlight.Dec()\n\n\tstart := time.Now()\n\n\t// This is a _bit_ of a hack - it depends on the ShouldBufferFunc always\n\t// being called when the headers are written.\n\t// Effectively the same behaviour as promhttp.InstrumentHandlerTimeToWriteHeader.\n\twriteHeaderRecorder := ShouldBufferFunc(func(status int, header http.Header) bool {\n\t\tstatusLabels[\"code\"] = metrics.SanitizeCode(status)\n\t\tttfb := 
time.Since(start).Seconds()\n\t\th.metrics.httpMetrics.responseDuration.With(statusLabels).Observe(ttfb)\n\t\treturn false\n\t})\n\twrec := NewResponseRecorder(w, nil, writeHeaderRecorder)\n\terr := h.next.ServeHTTP(wrec, r)\n\tdur := time.Since(start).Seconds()\n\th.metrics.httpMetrics.requestCount.With(labels).Inc()\n\n\tobserveRequest := func(status int) {\n\t\t// If the code hasn't been set yet, and we didn't encounter an error, we're\n\t\t// probably falling through with an empty handler.\n\t\tif statusLabels[\"code\"] == \"\" {\n\t\t\t// we still sanitize it, even though it's likely to be 0. A 200 is\n\t\t\t// returned on fallthrough so we want to reflect that.\n\t\t\tstatusLabels[\"code\"] = metrics.SanitizeCode(status)\n\t\t}\n\n\t\th.metrics.httpMetrics.requestDuration.With(statusLabels).Observe(dur)\n\t\th.metrics.httpMetrics.requestSize.With(statusLabels).Observe(float64(computeApproximateRequestSize(r)))\n\t\th.metrics.httpMetrics.responseSize.With(statusLabels).Observe(float64(wrec.Size()))\n\t}\n\n\tif err != nil {\n\t\tvar handlerErr HandlerError\n\t\tif errors.As(err, &handlerErr) {\n\t\t\tobserveRequest(handlerErr.StatusCode)\n\t\t}\n\n\t\th.metrics.httpMetrics.requestErrors.With(labels).Inc()\n\n\t\treturn err\n\t}\n\n\tobserveRequest(wrec.Status())\n\n\treturn nil\n}\n\n// taken from https://github.com/prometheus/client_golang/blob/6007b2b5cae01203111de55f753e76d8dac1f529/prometheus/promhttp/instrument_server.go#L298\nfunc computeApproximateRequestSize(r *http.Request) int {\n\ts := 0\n\tif r.URL != nil {\n\t\ts += len(r.URL.String())\n\t}\n\n\ts += len(r.Method)\n\ts += len(r.Proto)\n\tfor name, values := range r.Header {\n\t\ts += len(name)\n\t\tfor _, value := range values {\n\t\t\ts += len(value)\n\t\t}\n\t}\n\ts += len(r.Host)\n\n\t// N.B. r.Form and r.MultipartForm are assumed to be included in r.URL.\n\n\tif r.ContentLength != -1 {\n\t\ts += int(r.ContentLength)\n\t}\n\treturn s\n}\n"
  },
  {
    "path": "modules/caddyhttp/metrics_test.go",
    "content": "package caddyhttp\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/prometheus/client_golang/prometheus/testutil\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestServerNameFromContext(t *testing.T) {\n\tctx := context.Background()\n\texpected := \"UNKNOWN\"\n\tif actual := serverNameFromContext(ctx); actual != expected {\n\t\tt.Errorf(\"Not equal: expected %q, but got %q\", expected, actual)\n\t}\n\n\tin := \"foo\"\n\tctx = context.WithValue(ctx, ServerCtxKey, &Server{name: in})\n\tif actual := serverNameFromContext(ctx); actual != in {\n\t\tt.Errorf(\"Not equal: expected %q, but got %q\", in, actual)\n\t}\n}\n\nfunc TestMetricsInstrumentedHandler(t *testing.T) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tmetrics := &Metrics{\n\t\tinit:        sync.Once{},\n\t\thttpMetrics: &httpMetrics{},\n\t}\n\thandlerErr := errors.New(\"oh noes\")\n\tresponse := []byte(\"hello world!\")\n\th := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\tif actual := testutil.ToFloat64(metrics.httpMetrics.requestInFlight); actual != 1.0 {\n\t\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", 1.0, actual)\n\t\t}\n\t\tif handlerErr == nil {\n\t\t\tw.Write(response)\n\t\t}\n\t\treturn handlerErr\n\t})\n\n\tih := newMetricsInstrumentedRoute(ctx, \"bar\", h, metrics)\n\n\tr := httptest.NewRequest(\"GET\", \"/\", nil)\n\tw := httptest.NewRecorder()\n\n\tif actual := ih.ServeHTTP(w, r); actual != handlerErr {\n\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", handlerErr, actual)\n\t}\n\tif actual := testutil.ToFloat64(metrics.httpMetrics.requestInFlight); actual != 0.0 {\n\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", 0.0, actual)\n\t}\n\n\thandlerErr = nil\n\tif err := ih.ServeHTTP(w, r); err != nil {\n\t\tt.Errorf(\"Received unexpected error: %v\", err)\n\t}\n\n\t// an empty handler - no 
errors, no header written\n\temptyHandler := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\treturn nil\n\t})\n\tih = newMetricsInstrumentedRoute(ctx, \"empty\", emptyHandler, metrics)\n\tr = httptest.NewRequest(\"GET\", \"/\", nil)\n\tw = httptest.NewRecorder()\n\n\tif err := ih.ServeHTTP(w, r); err != nil {\n\t\tt.Errorf(\"Received unexpected error: %v\", err)\n\t}\n\tif actual := w.Result().StatusCode; actual != 200 {\n\t\tt.Errorf(\"Not same: expected status code %#v, but got %#v\", 200, actual)\n\t}\n\tif actual := w.Result().Header; len(actual) != 0 {\n\t\tt.Errorf(\"Not empty: expected headers to be empty, but got %#v\", actual)\n\t}\n\n\t// handler returning an error with an HTTP status\n\terrHandler := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\treturn Error(http.StatusTooManyRequests, nil)\n\t})\n\n\tih = newMetricsInstrumentedRoute(ctx, \"foo\", errHandler, metrics)\n\n\tr = httptest.NewRequest(\"GET\", \"/\", nil)\n\tw = httptest.NewRecorder()\n\n\tif err := ih.ServeHTTP(w, r); err == nil {\n\t\tt.Errorf(\"expected error to be propagated\")\n\t}\n\n\texpected := `\n\t# HELP caddy_http_request_duration_seconds Histogram of round-trip request durations.\n\t# TYPE caddy_http_request_duration_seconds histogram\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.005\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.01\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.025\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.05\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.1\"} 
1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.25\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"0.5\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"1\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"2.5\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"5\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"10\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_request_duration_seconds_count{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\"} 1\n\t# HELP caddy_http_request_size_bytes Total size of the request. 
Includes body\n\t# TYPE caddy_http_request_size_bytes histogram\n\tcaddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n    caddy_http_request_size_bytes_sum{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\"} 23\n    caddy_http_request_size_bytes_count{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n    
caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n    caddy_http_request_size_bytes_sum{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\"} 23\n    caddy_http_request_size_bytes_count{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_request_size_bytes_sum{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\"} 
23\n\tcaddy_http_request_size_bytes_count{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\"} 1\n\t# HELP caddy_http_response_size_bytes Size of the returned response.\n\t# TYPE caddy_http_response_size_bytes histogram\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_response_size_bytes_sum{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\"} 12\n\tcaddy_http_response_size_bytes_count{code=\"200\",handler=\"bar\",method=\"GET\",server=\"UNKNOWN\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 
1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_response_size_bytes_sum{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\"} 0\n\tcaddy_http_response_size_bytes_count{code=\"200\",handler=\"empty\",method=\"GET\",server=\"UNKNOWN\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 
1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_response_size_bytes_sum{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\"} 0\n\tcaddy_http_response_size_bytes_count{code=\"429\",handler=\"foo\",method=\"GET\",server=\"UNKNOWN\"} 1\n\t# HELP caddy_http_request_errors_total Number of requests resulting in middleware errors.\n\t# TYPE caddy_http_request_errors_total counter\n\tcaddy_http_request_errors_total{handler=\"bar\",server=\"UNKNOWN\"} 1\n\tcaddy_http_request_errors_total{handler=\"foo\",server=\"UNKNOWN\"} 1\n\t`\n\tif err := testutil.GatherAndCompare(ctx.GetMetricsRegistry(), strings.NewReader(expected),\n\t\t\"caddy_http_request_size_bytes\",\n\t\t\"caddy_http_response_size_bytes\",\n\t\t// caddy_http_request_duration_seconds_sum will vary based on how long the test took to run,\n\t\t// so we check just the _bucket and _count metrics\n\t\t\"caddy_http_request_duration_seconds_bucket\",\n\t\t\"caddy_http_request_duration_seconds_count\",\n\t\t\"caddy_http_request_errors_total\",\n\t); err != nil {\n\t\tt.Errorf(\"received unexpected error: %s\", err)\n\t}\n}\n\nfunc TestMetricsInstrumentedHandlerPerHost(t *testing.T) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tmetrics := &Metrics{\n\t\tPerHost:              true,\n\t\tObserveCatchallHosts: true, // Allow all hosts for testing\n\t\tinit:                 sync.Once{},\n\t\thttpMetrics:          &httpMetrics{},\n\t\tallowedHosts:         make(map[string]struct{}),\n\t}\n\thandlerErr := errors.New(\"oh noes\")\n\tresponse := []byte(\"hello world!\")\n\th := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\tif actual := testutil.ToFloat64(metrics.httpMetrics.requestInFlight); actual != 1.0 {\n\t\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", 1.0, actual)\n\t\t}\n\t\tif handlerErr == nil {\n\t\t\tw.Write(response)\n\t\t}\n\t\treturn 
handlerErr\n\t})\n\n\tih := newMetricsInstrumentedRoute(ctx, \"bar\", h, metrics)\n\n\tr := httptest.NewRequest(\"GET\", \"/\", nil)\n\tw := httptest.NewRecorder()\n\n\tif actual := ih.ServeHTTP(w, r); actual != handlerErr {\n\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", handlerErr, actual)\n\t}\n\tif actual := testutil.ToFloat64(metrics.httpMetrics.requestInFlight); actual != 0.0 {\n\t\tt.Errorf(\"Not same: expected %#v, but got %#v\", 0.0, actual)\n\t}\n\n\thandlerErr = nil\n\tif err := ih.ServeHTTP(w, r); err != nil {\n\t\tt.Errorf(\"Received unexpected error: %v\", err)\n\t}\n\n\t// an empty handler - no errors, no header written\n\temptyHandler := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\treturn nil\n\t})\n\tih = newMetricsInstrumentedRoute(ctx, \"empty\", emptyHandler, metrics)\n\tr = httptest.NewRequest(\"GET\", \"/\", nil)\n\tw = httptest.NewRecorder()\n\n\tif err := ih.ServeHTTP(w, r); err != nil {\n\t\tt.Errorf(\"Received unexpected error: %v\", err)\n\t}\n\tif actual := w.Result().StatusCode; actual != 200 {\n\t\tt.Errorf(\"Not same: expected status code %#v, but got %#v\", 200, actual)\n\t}\n\tif actual := w.Result().Header; len(actual) != 0 {\n\t\tt.Errorf(\"Not empty: expected headers to be empty, but got %#v\", actual)\n\t}\n\n\t// handler returning an error with an HTTP status\n\terrHandler := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\treturn Error(http.StatusTooManyRequests, nil)\n\t})\n\n\tih = newMetricsInstrumentedRoute(ctx, \"foo\", errHandler, metrics)\n\n\tr = httptest.NewRequest(\"GET\", \"/\", nil)\n\tw = httptest.NewRecorder()\n\n\tif err := ih.ServeHTTP(w, r); err == nil {\n\t\tt.Errorf(\"expected error to be propagated\")\n\t}\n\n\texpected := `\n\t# HELP caddy_http_request_duration_seconds Histogram of round-trip request durations.\n\t# TYPE caddy_http_request_duration_seconds 
histogram\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.005\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.01\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.025\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.05\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.1\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.25\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"0.5\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"2.5\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"5\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"10\"} 1\n\tcaddy_http_request_duration_seconds_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_request_duration_seconds_count{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n\t# HELP caddy_http_request_size_bytes Total size of the request. 
Includes body\n\t# TYPE caddy_http_request_size_bytes histogram\n\tcaddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n    caddy_http_request_size_bytes_sum{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 23\n    caddy_http_request_size_bytes_count{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n    
caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n    caddy_http_request_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n    caddy_http_request_size_bytes_sum{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 23\n    caddy_http_request_size_bytes_count{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 
1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_request_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_request_size_bytes_sum{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 23\n\tcaddy_http_request_size_bytes_count{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n\t# HELP caddy_http_response_size_bytes Size of the returned response.\n\t# TYPE caddy_http_response_size_bytes histogram\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 
1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_response_size_bytes_sum{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 12\n\tcaddy_http_response_size_bytes_count{code=\"200\",handler=\"bar\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_response_size_bytes_sum{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 
0\n\tcaddy_http_response_size_bytes_count{code=\"200\",handler=\"empty\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"256\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1024\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4096\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"16384\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"65536\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"262144\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"1.048576e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"4.194304e+06\"} 1\n\tcaddy_http_response_size_bytes_bucket{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\",le=\"+Inf\"} 1\n\tcaddy_http_response_size_bytes_sum{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 0\n\tcaddy_http_response_size_bytes_count{code=\"429\",handler=\"foo\",host=\"example.com\",method=\"GET\",server=\"UNKNOWN\"} 1\n\t# HELP caddy_http_request_errors_total Number of requests resulting in middleware errors.\n\t# TYPE caddy_http_request_errors_total counter\n\tcaddy_http_request_errors_total{handler=\"bar\",host=\"example.com\",server=\"UNKNOWN\"} 1\n\tcaddy_http_request_errors_total{handler=\"foo\",host=\"example.com\",server=\"UNKNOWN\"} 
1\n\t`\n\tif err := testutil.GatherAndCompare(ctx.GetMetricsRegistry(), strings.NewReader(expected),\n\t\t\"caddy_http_request_size_bytes\",\n\t\t\"caddy_http_response_size_bytes\",\n\t\t// caddy_http_request_duration_seconds_sum will vary based on how long the test took to run,\n\t\t// so we check just the _bucket and _count metrics\n\t\t\"caddy_http_request_duration_seconds_bucket\",\n\t\t\"caddy_http_request_duration_seconds_count\",\n\t\t\"caddy_http_request_errors_total\",\n\t); err != nil {\n\t\tt.Errorf(\"received unexpected error: %s\", err)\n\t}\n}\n\nfunc TestMetricsCardinalityProtection(t *testing.T) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\n\t// Test 1: Without AllowCatchAllHosts, arbitrary hosts should be mapped to \"_other\"\n\tmetrics := &Metrics{\n\t\tPerHost:              true,\n\t\tObserveCatchallHosts: false, // Default - should map unknown hosts to \"_other\"\n\t\tinit:                 sync.Once{},\n\t\thttpMetrics:          &httpMetrics{},\n\t\tallowedHosts:         make(map[string]struct{}),\n\t}\n\n\t// Add one allowed host\n\tmetrics.allowedHosts[\"allowed.com\"] = struct{}{}\n\n\th := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\tw.Write([]byte(\"hello\"))\n\t\treturn nil\n\t})\n\n\tih := newMetricsInstrumentedRoute(ctx, \"test\", h, metrics)\n\n\t// Test request to allowed host\n\tr1 := httptest.NewRequest(\"GET\", \"http://allowed.com/\", nil)\n\tr1.Host = \"allowed.com\"\n\tw1 := httptest.NewRecorder()\n\tih.ServeHTTP(w1, r1)\n\n\t// Test request to unknown host (should be mapped to \"_other\")\n\tr2 := httptest.NewRequest(\"GET\", \"http://attacker.com/\", nil)\n\tr2.Host = \"attacker.com\"\n\tw2 := httptest.NewRecorder()\n\tih.ServeHTTP(w2, r2)\n\n\t// Test request to another unknown host (should also be mapped to \"_other\")\n\tr3 := httptest.NewRequest(\"GET\", \"http://evil.com/\", nil)\n\tr3.Host = \"evil.com\"\n\tw3 := httptest.NewRecorder()\n\tih.ServeHTTP(w3, 
r3)\n\n\t// Check that metrics contain:\n\t// - One entry for \"allowed.com\"\n\t// - One entry for \"_other\" (aggregating attacker.com and evil.com)\n\texpected := `\n\t# HELP caddy_http_requests_total Counter of HTTP(S) requests made.\n\t# TYPE caddy_http_requests_total counter\n\tcaddy_http_requests_total{handler=\"test\",host=\"_other\",server=\"UNKNOWN\"} 2\n\tcaddy_http_requests_total{handler=\"test\",host=\"allowed.com\",server=\"UNKNOWN\"} 1\n\t`\n\n\tif err := testutil.GatherAndCompare(ctx.GetMetricsRegistry(), strings.NewReader(expected),\n\t\t\"caddy_http_requests_total\",\n\t); err != nil {\n\t\tt.Errorf(\"Cardinality protection test failed: %s\", err)\n\t}\n}\n\nfunc TestMetricsHTTPSCatchAll(t *testing.T) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\n\t// Test that HTTPS requests allow catch-all even when AllowCatchAllHosts is false\n\tmetrics := &Metrics{\n\t\tPerHost:              true,\n\t\tObserveCatchallHosts: false,\n\t\thasHTTPSServer:       true, // Simulate having HTTPS servers\n\t\tinit:                 sync.Once{},\n\t\thttpMetrics:          &httpMetrics{},\n\t\tallowedHosts:         make(map[string]struct{}), // Empty - no explicitly allowed hosts\n\t}\n\n\th := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\tw.Write([]byte(\"hello\"))\n\t\treturn nil\n\t})\n\n\tih := newMetricsInstrumentedRoute(ctx, \"test\", h, metrics)\n\n\t// Test HTTPS request (should be allowed even though not in allowedHosts)\n\tr1 := httptest.NewRequest(\"GET\", \"https://unknown.com/\", nil)\n\tr1.Host = \"unknown.com\"\n\tr1.TLS = &tls.ConnectionState{} // Mark as TLS/HTTPS\n\tw1 := httptest.NewRecorder()\n\tih.ServeHTTP(w1, r1)\n\n\t// Test HTTP request (should be mapped to \"_other\")\n\tr2 := httptest.NewRequest(\"GET\", \"http://unknown.com/\", nil)\n\tr2.Host = \"unknown.com\"\n\t// No TLS field = HTTP request\n\tw2 := httptest.NewRecorder()\n\tih.ServeHTTP(w2, r2)\n\n\t// Check that HTTPS request gets 
real host, HTTP gets \"_other\"\n\texpected := `\n\t# HELP caddy_http_requests_total Counter of HTTP(S) requests made.\n\t# TYPE caddy_http_requests_total counter\n\tcaddy_http_requests_total{handler=\"test\",host=\"_other\",server=\"UNKNOWN\"} 1\n\tcaddy_http_requests_total{handler=\"test\",host=\"unknown.com\",server=\"UNKNOWN\"} 1\n\t`\n\n\tif err := testutil.GatherAndCompare(ctx.GetMetricsRegistry(), strings.NewReader(expected),\n\t\t\"caddy_http_requests_total\",\n\t); err != nil {\n\t\tt.Errorf(\"HTTPS catch-all test failed: %s\", err)\n\t}\n}\n\nfunc TestMetricsInstrumentedRoute(t *testing.T) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tm := &Metrics{\n\t\tinit:        sync.Once{},\n\t\thttpMetrics: &httpMetrics{},\n\t}\n\n\thandlerErr := errors.New(\"oh noes\")\n\tresponse := []byte(\"hello world!\")\n\tinnerHandler := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\tif actual := testutil.ToFloat64(m.httpMetrics.requestInFlight); actual != 1.0 {\n\t\t\tt.Errorf(\"Expected requestInFlight to be 1.0, got %v\", actual)\n\t\t}\n\t\tif handlerErr == nil {\n\t\t\tw.Write(response)\n\t\t}\n\t\treturn handlerErr\n\t})\n\n\tih := newMetricsInstrumentedRoute(ctx, \"test_handler\", innerHandler, m)\n\n\tr := httptest.NewRequest(\"GET\", \"/\", nil)\n\tw := httptest.NewRecorder()\n\n\t// Test with error\n\tif actual := ih.ServeHTTP(w, r); actual != handlerErr {\n\t\tt.Errorf(\"Expected error %v, got %v\", handlerErr, actual)\n\t}\n\tif actual := testutil.ToFloat64(m.httpMetrics.requestInFlight); actual != 0.0 {\n\t\tt.Errorf(\"Expected requestInFlight to be 0.0 after request, got %v\", actual)\n\t}\n\tif actual := testutil.ToFloat64(m.httpMetrics.requestErrors); actual != 1.0 {\n\t\tt.Errorf(\"Expected requestErrors to be 1.0, got %v\", actual)\n\t}\n\n\t// Test without error\n\thandlerErr = nil\n\tw = httptest.NewRecorder()\n\tif err := ih.ServeHTTP(w, r); err != nil {\n\t\tt.Errorf(\"Unexpected error: %v\", 
err)\n\t}\n}\n\nfunc BenchmarkMetricsInstrumentedRoute(b *testing.B) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tm := &Metrics{\n\t\tinit:        sync.Once{},\n\t\thttpMetrics: &httpMetrics{},\n\t}\n\n\tnoopHandler := HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\tw.Write([]byte(\"ok\"))\n\t\treturn nil\n\t})\n\n\tih := newMetricsInstrumentedRoute(ctx, \"bench_handler\", noopHandler, m)\n\n\tr := httptest.NewRequest(\"GET\", \"/\", nil)\n\tw := httptest.NewRecorder()\n\n\tb.ResetTimer()\n\tb.ReportAllocs()\n\tfor i := 0; i < b.N; i++ {\n\t\tih.ServeHTTP(w, r)\n\t}\n}\n\n// BenchmarkSingleRouteMetrics simulates the new behavior where metrics\n// are collected once for the entire route.\nfunc BenchmarkSingleRouteMetrics(b *testing.B) {\n\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tm := &Metrics{\n\t\tinit:        sync.Once{},\n\t\thttpMetrics: &httpMetrics{},\n\t}\n\n\t// Build a chain of 5 plain middleware handlers (no per-handler metrics)\n\tvar next Handler = HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\treturn nil\n\t})\n\tfor i := 0; i < 5; i++ {\n\t\tcapturedNext := next\n\t\tnext = HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\t\treturn capturedNext.ServeHTTP(w, r)\n\t\t})\n\t}\n\n\t// Wrap the entire chain with a single route-level metrics handler\n\tih := newMetricsInstrumentedRoute(ctx, \"handler\", next, m)\n\n\tr := httptest.NewRequest(\"GET\", \"/\", nil)\n\tw := httptest.NewRecorder()\n\n\tb.ResetTimer()\n\tb.ReportAllocs()\n\tfor i := 0; i < b.N; i++ {\n\t\tih.ServeHTTP(w, r)\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/proxyprotocol/listenerwrapper.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage proxyprotocol\n\nimport (\n\t\"net\"\n\t\"net/netip\"\n\t\"time\"\n\n\tgoproxy \"github.com/pires/go-proxyproto\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// ListenerWrapper provides PROXY protocol support to Caddy by implementing\n// the caddy.ListenerWrapper interface. If a connection is received via Unix\n// socket, it's trusted. Otherwise, it's checked against the Allow/Deny lists,\n// then it's handled by the FallbackPolicy.\n//\n// It must be loaded before the `tls` listener because the PROXY protocol\n// encapsulates the TLS data.\n//\n// Credit goes to https://github.com/mastercactapus/caddy2-proxyprotocol for having\n// initially implemented this as a plugin.\ntype ListenerWrapper struct {\n\t// Timeout specifies an optional maximum time for\n\t// the PROXY header to be received.\n\t// If zero, timeout is disabled. 
Default is 5s.\n\tTimeout caddy.Duration `json:\"timeout,omitempty\"`\n\n\t// Allow is an optional list of CIDR ranges to\n\t// allow/require PROXY headers from.\n\tAllow []string `json:\"allow,omitempty\"`\n\tallow []netip.Prefix\n\n\t// Deny is an optional list of CIDR ranges to\n\t// deny PROXY headers from.\n\tDeny []string `json:\"deny,omitempty\"`\n\tdeny []netip.Prefix\n\n\t// FallbackPolicy specifies the policy to use if the downstream\n\t// IP address is not in the Allow list nor is in the Deny list.\n\t//\n\t// NOTE: The generated docs which describe the value of this\n\t// field are wrong because of how this type unmarshals JSON in a\n\t// custom way. The field expects a string, not a number.\n\t//\n\t// Accepted values are: IGNORE, USE, REJECT, REQUIRE, SKIP\n\t//\n\t// - IGNORE: address from PROXY header, but accept connection\n\t//\n\t// - USE: address from PROXY header\n\t//\n\t// - REJECT: connection when PROXY header is sent\n\t//   Note: even though the first read on the connection returns an error if\n\t//   a PROXY header is present, subsequent reads do not. It is the task of\n\t//   the code using the connection to handle that case properly.\n\t//\n\t// - REQUIRE: connection to send PROXY header, reject if not present\n\t//   Note: even though the first read on the connection returns an error if\n\t//   a PROXY header is not present, subsequent reads do not. 
It is the task\n\t//   of the code using the connection to handle that case properly.\n\t//\n\t// - SKIP: accepts a connection without requiring the PROXY header.\n\t//   Note: an example usage can be found in the SkipProxyHeaderForCIDR\n\t//   function.\n\t//\n\t// Default: IGNORE\n\t//\n\t// Policy definitions are here: https://pkg.go.dev/github.com/pires/go-proxyproto@v0.7.0#Policy\n\tFallbackPolicy Policy `json:\"fallback_policy,omitempty\"`\n\n\tpolicy goproxy.ConnPolicyFunc\n}\n\n// Provision sets up the listener wrapper.\nfunc (pp *ListenerWrapper) Provision(ctx caddy.Context) error {\n\tfor _, cidr := range pp.Allow {\n\t\tipnet, err := netip.ParsePrefix(cidr)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tpp.allow = append(pp.allow, ipnet)\n\t}\n\tfor _, cidr := range pp.Deny {\n\t\tipnet, err := netip.ParsePrefix(cidr)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tpp.deny = append(pp.deny, ipnet)\n\t}\n\n\tpp.policy = func(options goproxy.ConnPolicyOptions) (goproxy.Policy, error) {\n\t\t// trust unix sockets\n\t\tif network := options.Upstream.Network(); caddy.IsUnixNetwork(network) || caddy.IsFdNetwork(network) {\n\t\t\treturn goproxy.USE, nil\n\t\t}\n\t\tret := pp.FallbackPolicy\n\t\thost, _, err := net.SplitHostPort(options.Upstream.String())\n\t\tif err != nil {\n\t\t\treturn goproxy.REJECT, err\n\t\t}\n\n\t\tip, err := netip.ParseAddr(host)\n\t\tif err != nil {\n\t\t\treturn goproxy.REJECT, err\n\t\t}\n\t\tfor _, ipnet := range pp.deny {\n\t\t\tif ipnet.Contains(ip) {\n\t\t\t\treturn goproxy.REJECT, nil\n\t\t\t}\n\t\t}\n\t\tfor _, ipnet := range pp.allow {\n\t\t\tif ipnet.Contains(ip) {\n\t\t\t\tret = PolicyUSE\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\treturn policyToGoProxyPolicy[ret], nil\n\t}\n\treturn nil\n}\n\n// WrapListener adds PROXY protocol support to the listener.\nfunc (pp *ListenerWrapper) WrapListener(l net.Listener) net.Listener {\n\tpl := &goproxy.Listener{\n\t\tListener:          l,\n\t\tReadHeaderTimeout: 
time.Duration(pp.Timeout),\n\t}\n\tpl.ConnPolicy = pp.policy\n\treturn pl\n}\n"
  },
  {
    "path": "modules/caddyhttp/proxyprotocol/module.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage proxyprotocol\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(ListenerWrapper{})\n}\n\nfunc (ListenerWrapper) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.listeners.proxy_protocol\",\n\t\tNew: func() caddy.Module { return new(ListenerWrapper) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the ListenerWrapper from Caddyfile tokens. 
Syntax:\n//\n//\tproxy_protocol {\n//\t\ttimeout <duration>\n//\t\tallow <IPs...>\n//\t\tdeny <IPs...>\n//\t\tfallback_policy <policy>\n//\t}\nfunc (w *ListenerWrapper) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume wrapper name\n\n\t// No same-line options are supported\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing proxy_protocol timeout duration: %v\", err)\n\t\t\t}\n\t\t\tw.Timeout = caddy.Duration(dur)\n\n\t\tcase \"allow\":\n\t\t\tw.Allow = append(w.Allow, d.RemainingArgs()...)\n\t\tcase \"deny\":\n\t\t\tw.Deny = append(w.Deny, d.RemainingArgs()...)\n\t\tcase \"fallback_policy\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tp, err := parsePolicy(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.WrapErr(err)\n\t\t\t}\n\t\t\tw.FallbackPolicy = p\n\t\tdefault:\n\t\t\treturn d.ArgErr()\n\t\t}\n\t}\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner     = (*ListenerWrapper)(nil)\n\t_ caddy.Module          = (*ListenerWrapper)(nil)\n\t_ caddy.ListenerWrapper = (*ListenerWrapper)(nil)\n\t_ caddyfile.Unmarshaler = (*ListenerWrapper)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/proxyprotocol/policy.go",
    "content": "package proxyprotocol\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\tgoproxy \"github.com/pires/go-proxyproto\"\n)\n\ntype Policy int\n\n// as defined in: https://pkg.go.dev/github.com/pires/go-proxyproto@v0.7.0#Policy\nconst (\n\t// IGNORE address from PROXY header, but accept connection\n\tPolicyIGNORE Policy = iota\n\t// USE address from PROXY header\n\tPolicyUSE\n\t// REJECT connection when PROXY header is sent\n\t// Note: even though the first read on the connection returns an error if\n\t// a PROXY header is present, subsequent reads do not. It is the task of\n\t// the code using the connection to handle that case properly.\n\tPolicyREJECT\n\t// REQUIRE connection to send PROXY header, reject if not present\n\t// Note: even though the first read on the connection returns an error if\n\t// a PROXY header is not present, subsequent reads do not. It is the task\n\t// of the code using the connection to handle that case properly.\n\tPolicyREQUIRE\n\t// SKIP accepts a connection without requiring the PROXY header\n\t// Note: an example usage can be found in the SkipProxyHeaderForCIDR\n\t// function.\n\tPolicySKIP\n)\n\nvar policyToGoProxyPolicy = map[Policy]goproxy.Policy{\n\tPolicyUSE:     goproxy.USE,\n\tPolicyIGNORE:  goproxy.IGNORE,\n\tPolicyREJECT:  goproxy.REJECT,\n\tPolicyREQUIRE: goproxy.REQUIRE,\n\tPolicySKIP:    goproxy.SKIP,\n}\n\nvar policyMap = map[Policy]string{\n\tPolicyUSE:     \"USE\",\n\tPolicyIGNORE:  \"IGNORE\",\n\tPolicyREJECT:  \"REJECT\",\n\tPolicyREQUIRE: \"REQUIRE\",\n\tPolicySKIP:    \"SKIP\",\n}\n\nvar policyMapRev = map[string]Policy{\n\t\"USE\":     PolicyUSE,\n\t\"IGNORE\":  PolicyIGNORE,\n\t\"REJECT\":  PolicyREJECT,\n\t\"REQUIRE\": PolicyREQUIRE,\n\t\"SKIP\":    PolicySKIP,\n}\n\n// MarshalText implements the text marshaller method.\nfunc (x Policy) MarshalText() ([]byte, error) {\n\treturn []byte(policyMap[x]), nil\n}\n\n// UnmarshalText implements the text unmarshaller method.\nfunc (x *Policy) 
UnmarshalText(text []byte) error {\n\tname := string(text)\n\ttmp, err := parsePolicy(name)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*x = tmp\n\treturn nil\n}\n\nfunc parsePolicy(name string) (Policy, error) {\n\tif x, ok := policyMapRev[strings.ToUpper(name)]; ok {\n\t\treturn x, nil\n\t}\n\treturn Policy(0), fmt.Errorf(\"%s is %w\", name, errInvalidPolicy)\n}\n\nvar errInvalidPolicy = errors.New(\"invalid policy\")\n"
  },
  {
    "path": "modules/caddyhttp/push/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage push\n\nimport (\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"push\", parseCaddyfile)\n}\n\n// parseCaddyfile sets up the push handler. Syntax:\n//\n//\tpush [<matcher>] [<resource>] {\n//\t    [GET|HEAD] <resource>\n//\t    headers {\n//\t        [+]<field> [<value|regexp> [<replacement>]]\n//\t        -<field>\n//\t    }\n//\t}\n//\n// A single resource can be specified inline without opening a\n// block for the most common/simple case. Or, a block can be\n// opened and multiple resources can be specified, one per\n// line, optionally preceded by the method. 
The headers\n// subdirective can be used to customize the headers that\n// are set on each (synthetic) push request, using the same\n// syntax as the 'header' directive for request headers.\n// Placeholders are accepted in resource and header field\n// name and value and replacement tokens.\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\n\thandler := new(Handler)\n\n\t// inline resources\n\tif h.NextArg() {\n\t\thandler.Resources = append(handler.Resources, Resource{Target: h.Val()})\n\t}\n\n\t// optional block\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"headers\":\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tfor nesting := h.Nesting(); h.NextBlock(nesting); {\n\t\t\t\tvar err error\n\n\t\t\t\t// include current token, which we treat as an argument here\n\t\t\t\t// nolint:prealloc\n\t\t\t\targs := []string{h.Val()}\n\t\t\t\targs = append(args, h.RemainingArgs()...)\n\n\t\t\t\tif handler.Headers == nil {\n\t\t\t\t\thandler.Headers = new(HeaderConfig)\n\t\t\t\t}\n\n\t\t\t\tswitch len(args) {\n\t\t\t\tcase 1:\n\t\t\t\t\terr = headers.CaddyfileHeaderOp(&handler.Headers.HeaderOps, args[0], \"\", nil)\n\t\t\t\tcase 2:\n\t\t\t\t\terr = headers.CaddyfileHeaderOp(&handler.Headers.HeaderOps, args[0], args[1], nil)\n\t\t\t\tcase 3:\n\t\t\t\t\terr = headers.CaddyfileHeaderOp(&handler.Headers.HeaderOps, args[0], args[1], &args[2])\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, h.ArgErr()\n\t\t\t\t}\n\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, h.Err(err.Error())\n\t\t\t\t}\n\t\t\t}\n\n\t\tcase \"GET\", \"HEAD\":\n\t\t\tmethod := h.Val()\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\ttarget := h.Val()\n\t\t\thandler.Resources = append(handler.Resources, Resource{\n\t\t\t\tMethod: method,\n\t\t\t\tTarget: target,\n\t\t\t})\n\n\t\tdefault:\n\t\t\thandler.Resources = append(handler.Resources, Resource{Target: h.Val()})\n\t\t}\n\t}\n\treturn 
handler, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/push/handler.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage push\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Handler{})\n}\n\n// Handler is a middleware for HTTP/2 server push. Note that\n// HTTP/2 server push has been deprecated by some clients and\n// its use is discouraged unless you can accurately predict\n// which resources actually need to be pushed to the client;\n// it can be difficult to know what the client already has\n// cached. Pushing unnecessary resources results in worse\n// performance. 
Consider using HTTP 103 Early Hints instead.\n//\n// This handler supports pushing from Link headers; in other\n// words, if the eventual response has Link headers, this\n// handler will push the resources indicated by those headers,\n// even without specifying any resources in its config.\ntype Handler struct {\n\t// The resources to push.\n\tResources []Resource `json:\"resources,omitempty\"`\n\n\t// Headers to modify for the push requests.\n\tHeaders *HeaderConfig `json:\"headers,omitempty\"`\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Handler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.push\",\n\t\tNew: func() caddy.Module { return new(Handler) },\n\t}\n}\n\n// Provision sets up h.\nfunc (h *Handler) Provision(ctx caddy.Context) error {\n\th.logger = ctx.Logger()\n\tif h.Headers != nil {\n\t\terr := h.Headers.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning header operations: %v\", err)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (h Handler) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tpusher, ok := w.(http.Pusher)\n\tif !ok {\n\t\treturn next.ServeHTTP(w, r)\n\t}\n\n\t// short-circuit recursive pushes\n\tif _, ok := r.Header[pushHeader]; ok {\n\t\treturn next.ServeHTTP(w, r)\n\t}\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tserver := r.Context().Value(caddyhttp.ServerCtxKey).(*caddyhttp.Server)\n\tshouldLogCredentials := server.Logs != nil && server.Logs.ShouldLogCredentials\n\n\t// create header for push requests\n\thdr := h.initializePushHeaders(r, repl)\n\n\t// push first!\n\tfor _, resource := range h.Resources {\n\t\tif c := h.logger.Check(zapcore.DebugLevel, \"pushing resource\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"uri\", r.RequestURI),\n\t\t\t\tzap.String(\"push_method\", resource.Method),\n\t\t\t\tzap.String(\"push_target\", 
resource.Target),\n\t\t\t\tzap.Object(\"push_headers\", caddyhttp.LoggableHTTPHeader{\n\t\t\t\t\tHeader:               hdr,\n\t\t\t\t\tShouldLogCredentials: shouldLogCredentials,\n\t\t\t\t}),\n\t\t\t)\n\t\t}\n\t\terr := pusher.Push(repl.ReplaceAll(resource.Target, \".\"), &http.PushOptions{\n\t\t\tMethod: resource.Method,\n\t\t\tHeader: hdr,\n\t\t})\n\t\tif err != nil {\n\t\t\t// usually this means either that push is not\n\t\t\t// supported or concurrent streams are full\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// wrap the response writer so that we can initiate push of any resources\n\t// described in Link header fields before the response is written\n\tlp := linkPusher{\n\t\tResponseWriterWrapper: &caddyhttp.ResponseWriterWrapper{ResponseWriter: w},\n\t\thandler:               h,\n\t\tpusher:                pusher,\n\t\theader:                hdr,\n\t\trequest:               r,\n\t}\n\n\t// serve only after pushing!\n\tif err := next.ServeHTTP(lp, r); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (h Handler) initializePushHeaders(r *http.Request, repl *caddy.Replacer) http.Header {\n\thdr := make(http.Header)\n\n\t// prevent recursive pushes\n\thdr.Set(pushHeader, \"1\")\n\n\t// set initial header fields; since exactly how headers should\n\t// be implemented for server push is not well-understood, we\n\t// are being conservative for now like httpd is:\n\t// https://httpd.apache.org/docs/2.4/en/howto/http2.html#push\n\t// we only copy some well-known, safe headers that are likely\n\t// crucial when requesting certain kinds of content\n\tfor _, fieldName := range safeHeaders {\n\t\tif vals, ok := r.Header[fieldName]; ok {\n\t\t\thdr[fieldName] = vals\n\t\t}\n\t}\n\n\t// user can customize the push request headers\n\tif h.Headers != nil {\n\t\th.Headers.ApplyTo(hdr, repl)\n\t}\n\n\treturn hdr\n}\n\n// servePreloadLinks parses Link headers from upstream and pushes\n// resources described by them. 
If a resource has the \"nopush\"\n// attribute or describes an external entity (meaning, the resource\n// URI includes a scheme), it will not be pushed.\nfunc (h Handler) servePreloadLinks(pusher http.Pusher, hdr http.Header, resources []string) {\n\tfor _, resource := range resources {\n\t\tfor _, resource := range parseLinkHeader(resource) {\n\t\t\tif _, ok := resource.params[\"nopush\"]; ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif isRemoteResource(resource.uri) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\terr := pusher.Push(resource.uri, &http.PushOptions{\n\t\t\t\tHeader: hdr,\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n}\n\n// Resource represents a request for a resource to push.\ntype Resource struct {\n\t// Method is the request method, which must be GET or HEAD.\n\t// Default is GET.\n\tMethod string `json:\"method,omitempty\"`\n\n\t// Target is the path to the resource being pushed.\n\tTarget string `json:\"target,omitempty\"`\n}\n\n// HeaderConfig configures headers for synthetic push requests.\ntype HeaderConfig struct {\n\theaders.HeaderOps\n}\n\n// linkPusher is a http.ResponseWriter that intercepts\n// the WriteHeader() call to ensure that any resources\n// described by Link response headers get pushed before\n// the response is allowed to be written.\ntype linkPusher struct {\n\t*caddyhttp.ResponseWriterWrapper\n\thandler Handler\n\tpusher  http.Pusher\n\theader  http.Header\n\trequest *http.Request\n}\n\nfunc (lp linkPusher) WriteHeader(statusCode int) {\n\tif links, ok := lp.ResponseWriter.Header()[\"Link\"]; ok {\n\t\t// only initiate these pushes if it hasn't been done yet\n\t\tif val := caddyhttp.GetVar(lp.request.Context(), pushedLink); val == nil {\n\t\t\tif c := lp.handler.logger.Check(zapcore.DebugLevel, \"pushing Link resources\"); c != nil {\n\t\t\t\tc.Write(zap.Strings(\"linked\", links))\n\t\t\t}\n\t\t\tcaddyhttp.SetVar(lp.request.Context(), pushedLink, true)\n\t\t\tlp.handler.servePreloadLinks(lp.pusher, lp.header, 
links)\n\t\t}\n\t}\n\tlp.ResponseWriter.WriteHeader(statusCode)\n}\n\n// isRemoteResource returns true if resource starts with\n// a scheme or is a protocol-relative URI.\nfunc isRemoteResource(resource string) bool {\n\treturn strings.HasPrefix(resource, \"//\") ||\n\t\tstrings.HasPrefix(resource, \"http://\") ||\n\t\tstrings.HasPrefix(resource, \"https://\")\n}\n\n// safeHeaders is a list of header fields that are\n// safe to copy to push requests implicitly. It is\n// assumed that requests for certain kinds of content\n// would fail without these fields present.\nvar safeHeaders = []string{\n\t\"Accept-Encoding\",\n\t\"Accept-Language\",\n\t\"Accept\",\n\t\"Cache-Control\",\n\t\"User-Agent\",\n}\n\n// pushHeader is a header field that gets added to push requests\n// in order to avoid recursive/infinite pushes.\nconst pushHeader = \"Caddy-Push\"\n\n// pushedLink is the key for the variable on the request\n// context that we use to remember whether we have already\n// pushed resources from Link headers yet; otherwise, if\n// multiple push handlers are invoked, it would repeat the\n// pushing of Link headers.\nconst pushedLink = \"http.handlers.push.pushed_link\"\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Handler)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Handler)(nil)\n\t_ http.ResponseWriter         = (*linkPusher)(nil)\n\t_ http.Pusher                 = (*linkPusher)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/push/link.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage push\n\nimport (\n\t\"strings\"\n)\n\n// linkResource contains the results of a parsed Link header.\ntype linkResource struct {\n\turi    string\n\tparams map[string]string\n}\n\n// parseLinkHeader is responsible for parsing Link header\n// and returning list of found resources.\n//\n// Accepted formats are:\n//\n//\tLink: <resource>; as=script\n//\tLink: <resource>; as=script,<resource>; as=style\n//\tLink: <resource>;<resource2>\n//\n// where <resource> begins with a forward slash (/).\nfunc parseLinkHeader(header string) []linkResource {\n\tresources := []linkResource{}\n\n\tif header == \"\" {\n\t\treturn resources\n\t}\n\n\tfor link := range strings.SplitSeq(header, comma) {\n\t\tl := linkResource{params: make(map[string]string)}\n\n\t\tli, ri := strings.Index(link, \"<\"), strings.Index(link, \">\")\n\t\tif li == -1 || ri == -1 {\n\t\t\tcontinue\n\t\t}\n\n\t\tl.uri = strings.TrimSpace(link[li+1 : ri])\n\n\t\tfor param := range strings.SplitSeq(strings.TrimSpace(link[ri+1:]), semicolon) {\n\t\t\tbefore, after, isCut := strings.Cut(strings.TrimSpace(param), equal)\n\t\t\tkey := strings.TrimSpace(before)\n\t\t\tif key == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif isCut {\n\t\t\t\tl.params[key] = strings.TrimSpace(after)\n\t\t\t} else {\n\t\t\t\tl.params[key] = key\n\t\t\t}\n\t\t}\n\n\t\tresources = append(resources, 
l)\n\t}\n\n\treturn resources\n}\n\nconst (\n\tcomma     = \",\"\n\tsemicolon = \";\"\n\tequal     = \"=\"\n)\n"
  },
  {
    "path": "modules/caddyhttp/push/link_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//\thttp://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\npackage push\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n)\n\nfunc TestParseLinkHeader(t *testing.T) {\n\ttestCases := []struct {\n\t\theader            string\n\t\texpectedResources []linkResource\n\t}{\n\t\t{\n\t\t\theader:            \"</resource>; as=script\",\n\t\t\texpectedResources: []linkResource{{uri: \"/resource\", params: map[string]string{\"as\": \"script\"}}},\n\t\t},\n\t\t{\n\t\t\theader:            \"</resource>\",\n\t\t\texpectedResources: []linkResource{{uri: \"/resource\", params: map[string]string{}}},\n\t\t},\n\t\t{\n\t\t\theader:            \"</resource>; nopush\",\n\t\t\texpectedResources: []linkResource{{uri: \"/resource\", params: map[string]string{\"nopush\": \"nopush\"}}},\n\t\t},\n\t\t{\n\t\t\theader:            \"</resource>;nopush;rel=next\",\n\t\t\texpectedResources: []linkResource{{uri: \"/resource\", params: map[string]string{\"nopush\": \"nopush\", \"rel\": \"next\"}}},\n\t\t},\n\t\t{\n\t\t\theader: \"</resource>;nopush;rel=next,</resource2>;nopush\",\n\t\t\texpectedResources: []linkResource{\n\t\t\t\t{uri: \"/resource\", params: map[string]string{\"nopush\": \"nopush\", \"rel\": \"next\"}},\n\t\t\t\t{uri: \"/resource2\", params: map[string]string{\"nopush\": \"nopush\"}},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\theader: \"</resource>,</resource2>\",\n\t\t\texpectedResources: []linkResource{\n\t\t\t\t{uri: 
\"/resource\", params: map[string]string{}},\n\t\t\t\t{uri: \"/resource2\", params: map[string]string{}},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\theader:            \"malformed\",\n\t\t\texpectedResources: []linkResource{},\n\t\t},\n\t\t{\n\t\t\theader:            \"<malformed\",\n\t\t\texpectedResources: []linkResource{},\n\t\t},\n\t\t{\n\t\t\theader:            \",\",\n\t\t\texpectedResources: []linkResource{},\n\t\t},\n\t\t{\n\t\t\theader:            \";\",\n\t\t\texpectedResources: []linkResource{},\n\t\t},\n\t\t{\n\t\t\theader:            \"</resource> ; \",\n\t\t\texpectedResources: []linkResource{{uri: \"/resource\", params: map[string]string{}}},\n\t\t},\n\t}\n\n\tfor i, test := range testCases {\n\t\tactualResources := parseLinkHeader(test.header)\n\t\tif !reflect.DeepEqual(actualResources, test.expectedResources) {\n\t\t\tt.Errorf(\"Test %d (header: %s) - expected resources %v, got %v\",\n\t\t\t\ti, test.header, test.expectedResources, actualResources)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/replacer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/ecdsa\"\n\t\"crypto/ed25519\"\n\t\"crypto/rsa\"\n\t\"crypto/sha256\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/asn1\"\n\t\"encoding/base64\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/netip\"\n\t\"net/textproto\"\n\t\"net/url\"\n\t\"path\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\n// NewTestReplacer creates a replacer for an http.Request\n// for use in tests that are not in this package\nfunc NewTestReplacer(req *http.Request) *caddy.Replacer {\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\t*req = *req.WithContext(ctx)\n\taddHTTPVarsToReplacer(repl, req, nil)\n\treturn repl\n}\n\nfunc addHTTPVarsToReplacer(repl *caddy.Replacer, req *http.Request, w http.ResponseWriter) {\n\tSetVar(req.Context(), \"start_time\", time.Now())\n\tSetVar(req.Context(), \"uuid\", new(requestID))\n\n\thttpVars := func(key string) (any, bool) {\n\t\tif req != nil {\n\t\t\t// query string parameters\n\t\t\tif strings.HasPrefix(key, reqURIQueryReplPrefix) {\n\t\t\t\tvals := req.URL.Query()[key[len(reqURIQueryReplPrefix):]]\n\t\t\t\t// 
always return true, since the query param might\n\t\t\t\t// be present only in some requests\n\t\t\t\treturn strings.Join(vals, \",\"), true\n\t\t\t}\n\n\t\t\t// request header fields\n\t\t\tif strings.HasPrefix(key, reqHeaderReplPrefix) {\n\t\t\t\tfield := key[len(reqHeaderReplPrefix):]\n\t\t\t\tvals := req.Header[textproto.CanonicalMIMEHeaderKey(field)]\n\t\t\t\t// always return true, since the header field might\n\t\t\t\t// be present only in some requests\n\t\t\t\treturn strings.Join(vals, \",\"), true\n\t\t\t}\n\n\t\t\t// cookies\n\t\t\tif strings.HasPrefix(key, reqCookieReplPrefix) {\n\t\t\t\tname := key[len(reqCookieReplPrefix):]\n\t\t\t\tfor _, cookie := range req.Cookies() {\n\t\t\t\t\tif strings.EqualFold(name, cookie.Name) {\n\t\t\t\t\t\t// always return true, since the cookie might\n\t\t\t\t\t\t// be present only in some requests\n\t\t\t\t\t\treturn cookie.Value, true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// http.request.tls.*\n\t\t\tif strings.HasPrefix(key, reqTLSReplPrefix) {\n\t\t\t\treturn getReqTLSReplacement(req, key)\n\t\t\t}\n\n\t\t\tswitch key {\n\t\t\tcase \"http.request.method\":\n\t\t\t\treturn req.Method, true\n\t\t\tcase \"http.request.scheme\":\n\t\t\t\tif req.TLS != nil {\n\t\t\t\t\treturn \"https\", true\n\t\t\t\t}\n\t\t\t\treturn \"http\", true\n\t\t\tcase \"http.request.proto\":\n\t\t\t\treturn req.Proto, true\n\t\t\tcase \"http.request.host\":\n\t\t\t\thost, _, err := net.SplitHostPort(req.Host)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn req.Host, true // OK; there probably was no port\n\t\t\t\t}\n\t\t\t\treturn host, true\n\t\t\tcase \"http.request.port\":\n\t\t\t\t_, port, _ := net.SplitHostPort(req.Host)\n\t\t\t\tif portNum, err := strconv.Atoi(port); err == nil {\n\t\t\t\t\treturn portNum, true\n\t\t\t\t}\n\t\t\t\treturn port, true\n\t\t\tcase \"http.request.hostport\":\n\t\t\t\treturn req.Host, true\n\t\t\tcase \"http.request.local\":\n\t\t\t\tlocalAddr, _ := 
req.Context().Value(http.LocalAddrContextKey).(net.Addr)\n\t\t\t\treturn localAddr.String(), true\n\t\t\tcase \"http.request.local.host\":\n\t\t\t\tlocalAddr, _ := req.Context().Value(http.LocalAddrContextKey).(net.Addr)\n\t\t\t\thost, _, err := net.SplitHostPort(localAddr.String())\n\t\t\t\tif err != nil {\n\t\t\t\t\t// localAddr is host:port for tcp and udp sockets and /unix/socket.path\n\t\t\t\t\t// for unix sockets. net.SplitHostPort only operates on tcp and udp sockets,\n\t\t\t\t\t// not unix sockets and will fail with the latter.\n\t\t\t\t\t// We assume when net.SplitHostPort fails, localAddr is a unix socket and thus\n\t\t\t\t\t// already \"split\" and safe to return.\n\t\t\t\t\treturn localAddr, true\n\t\t\t\t}\n\t\t\t\treturn host, true\n\t\t\tcase \"http.request.local.port\":\n\t\t\t\tlocalAddr, _ := req.Context().Value(http.LocalAddrContextKey).(net.Addr)\n\t\t\t\t_, port, _ := net.SplitHostPort(localAddr.String())\n\t\t\t\tif portNum, err := strconv.Atoi(port); err == nil {\n\t\t\t\t\treturn portNum, true\n\t\t\t\t}\n\t\t\t\treturn port, true\n\t\t\tcase \"http.request.remote\":\n\t\t\t\tif req.TLS != nil && !req.TLS.HandshakeComplete {\n\t\t\t\t\t// without a complete handshake (QUIC \"early data\") we can't trust the remote IP address to not be spoofed\n\t\t\t\t\treturn nil, true\n\t\t\t\t}\n\t\t\t\treturn req.RemoteAddr, true\n\t\t\tcase \"http.request.remote.host\":\n\t\t\t\tif req.TLS != nil && !req.TLS.HandshakeComplete {\n\t\t\t\t\t// without a complete handshake (QUIC \"early data\") we can't trust the remote IP address to not be spoofed\n\t\t\t\t\treturn nil, true\n\t\t\t\t}\n\t\t\t\thost, _, err := net.SplitHostPort(req.RemoteAddr)\n\t\t\t\tif err != nil {\n\t\t\t\t\t// req.RemoteAddr is host:port for tcp and udp sockets and /unix/socket.path\n\t\t\t\t\t// for unix sockets. 
net.SplitHostPort only operates on tcp and udp sockets,\n\t\t\t\t\t// not unix sockets and will fail with the latter.\n\t\t\t\t\t// We assume when net.SplitHostPort fails, req.RemoteAddr is a unix socket\n\t\t\t\t\t// and thus already \"split\" and safe to return.\n\t\t\t\t\treturn req.RemoteAddr, true\n\t\t\t\t}\n\t\t\t\treturn host, true\n\t\t\tcase \"http.request.remote.port\":\n\t\t\t\t_, port, _ := net.SplitHostPort(req.RemoteAddr)\n\t\t\t\tif portNum, err := strconv.Atoi(port); err == nil {\n\t\t\t\t\treturn portNum, true\n\t\t\t\t}\n\t\t\t\treturn port, true\n\n\t\t\t// current URI, including any internal rewrites\n\t\t\tcase \"http.request.uri\":\n\t\t\t\treturn req.URL.RequestURI(), true\n\t\t\tcase \"http.request.uri_escaped\":\n\t\t\t\treturn url.QueryEscape(req.URL.RequestURI()), true\n\t\t\tcase \"http.request.uri.path\":\n\t\t\t\treturn req.URL.Path, true\n\t\t\tcase \"http.request.uri.path_escaped\":\n\t\t\t\treturn url.QueryEscape(req.URL.Path), true\n\t\t\tcase \"http.request.uri.path.file\":\n\t\t\t\t_, file := path.Split(req.URL.Path)\n\t\t\t\treturn file, true\n\t\t\tcase \"http.request.uri.path.dir\":\n\t\t\t\tdir, _ := path.Split(req.URL.Path)\n\t\t\t\treturn dir, true\n\t\t\tcase \"http.request.uri.path.file.base\":\n\t\t\t\treturn strings.TrimSuffix(path.Base(req.URL.Path), path.Ext(req.URL.Path)), true\n\t\t\tcase \"http.request.uri.path.file.ext\":\n\t\t\t\treturn path.Ext(req.URL.Path), true\n\t\t\tcase \"http.request.uri.query\":\n\t\t\t\treturn req.URL.RawQuery, true\n\t\t\tcase \"http.request.uri.query_escaped\":\n\t\t\t\treturn url.QueryEscape(req.URL.RawQuery), true\n\t\t\tcase \"http.request.uri.prefixed_query\":\n\t\t\t\tif req.URL.RawQuery == \"\" {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\treturn \"?\" + req.URL.RawQuery, true\n\t\t\tcase \"http.request.duration\":\n\t\t\t\tstart := GetVar(req.Context(), \"start_time\").(time.Time)\n\t\t\t\treturn time.Since(start), true\n\t\t\tcase 
\"http.request.duration_ms\":\n\t\t\t\tstart := GetVar(req.Context(), \"start_time\").(time.Time)\n\t\t\t\treturn time.Since(start).Seconds() * 1e3, true // multiply seconds to preserve decimal (see #4666)\n\n\t\t\tcase \"http.request.uuid\":\n\t\t\t\t// fetch the UUID for this request\n\t\t\t\tid := GetVar(req.Context(), \"uuid\").(*requestID)\n\n\t\t\t\t// set it to this request's access log\n\t\t\t\textra := req.Context().Value(ExtraLogFieldsCtxKey).(*ExtraLogFields)\n\t\t\t\textra.Set(zap.String(\"uuid\", id.String()))\n\n\t\t\t\treturn id.String(), true\n\n\t\t\tcase \"http.request.body\":\n\t\t\t\tif req.Body == nil {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\t// normally net/http will close the body for us, but since we\n\t\t\t\t// are replacing it with a fake one, we have to ensure we close\n\t\t\t\t// the real body ourselves when we're done\n\t\t\t\tdefer req.Body.Close()\n\t\t\t\t// read the request body into a buffer (can't pool because we\n\t\t\t\t// don't know its lifetime and would have to make a copy anyway)\n\t\t\t\tbuf := new(bytes.Buffer)\n\t\t\t\t_, _ = io.Copy(buf, req.Body) // can't handle error, so just ignore it\n\t\t\t\treq.Body = io.NopCloser(buf)  // replace real body with buffered data\n\t\t\t\treturn buf.String(), true\n\n\t\t\tcase \"http.request.body_base64\":\n\t\t\t\tif req.Body == nil {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\t// normally net/http will close the body for us, but since we\n\t\t\t\t// are replacing it with a fake one, we have to ensure we close\n\t\t\t\t// the real body ourselves when we're done\n\t\t\t\tdefer req.Body.Close()\n\t\t\t\t// read the request body into a buffer (can't pool because we\n\t\t\t\t// don't know its lifetime and would have to make a copy anyway)\n\t\t\t\tbuf := new(bytes.Buffer)\n\t\t\t\t_, _ = io.Copy(buf, req.Body) // can't handle error, so just ignore it\n\t\t\t\treq.Body = io.NopCloser(buf)  // replace real body with buffered data\n\t\t\t\treturn 
base64.StdEncoding.EncodeToString(buf.Bytes()), true\n\n\t\t\t// original request, before any internal changes\n\t\t\tcase \"http.request.orig_method\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\treturn or.Method, true\n\t\t\tcase \"http.request.orig_uri\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\treturn or.RequestURI, true\n\t\t\tcase \"http.request.orig_uri.path\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\treturn or.URL.Path, true\n\t\t\tcase \"http.request.orig_uri.path.file\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\t_, file := path.Split(or.URL.Path)\n\t\t\t\treturn file, true\n\t\t\tcase \"http.request.orig_uri.path.dir\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\tdir, _ := path.Split(or.URL.Path)\n\t\t\t\treturn dir, true\n\t\t\tcase \"http.request.orig_uri.query\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\treturn or.URL.RawQuery, true\n\t\t\tcase \"http.request.orig_uri.prefixed_query\":\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\tif or.URL.RawQuery == \"\" {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\treturn \"?\" + or.URL.RawQuery, true\n\t\t\t}\n\n\t\t\t// remote IP range/prefix (e.g. 
keep top 24 bits of 1.2.3.4  => \"1.2.3.0/24\")\n\t\t\t// syntax: \"/V4,V6\" where V4 = IPv4 bits, and V6 = IPv6 bits; if no comma, then same bit length used for both\n\t\t\t// (EXPERIMENTAL)\n\t\t\tif strings.HasPrefix(key, \"http.request.remote.host/\") {\n\t\t\t\thost, _, err := net.SplitHostPort(req.RemoteAddr)\n\t\t\t\tif err != nil {\n\t\t\t\t\thost = req.RemoteAddr // assume no port, I guess?\n\t\t\t\t}\n\t\t\t\taddr, err := netip.ParseAddr(host)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn host, true // not an IP address\n\t\t\t\t}\n\t\t\t\t// extract the bits from the end of the placeholder (start after \"/\") then split on \",\"\n\t\t\t\tbitsBoth := key[strings.Index(key, \"/\")+1:]\n\t\t\t\tipv4BitsStr, ipv6BitsStr, cutOK := strings.Cut(bitsBoth, \",\")\n\t\t\t\tbitsStr := ipv4BitsStr\n\t\t\t\tif addr.Is6() && cutOK {\n\t\t\t\t\tbitsStr = ipv6BitsStr\n\t\t\t\t}\n\t\t\t\t// convert to integer then compute prefix\n\t\t\t\tbits, err := strconv.Atoi(bitsStr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\tprefix, err := addr.Prefix(bits)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\treturn prefix.String(), true\n\t\t\t}\n\n\t\t\t// hostname labels (case insensitive, so normalize to lowercase)\n\t\t\tif strings.HasPrefix(key, reqHostLabelsReplPrefix) {\n\t\t\t\tidxStr := key[len(reqHostLabelsReplPrefix):]\n\t\t\t\tidx, err := strconv.Atoi(idxStr)\n\t\t\t\tif err != nil || idx < 0 {\n\t\t\t\t\treturn \"\", false\n\t\t\t\t}\n\t\t\t\treqHost, _, err := net.SplitHostPort(req.Host)\n\t\t\t\tif err != nil {\n\t\t\t\t\treqHost = req.Host // OK; assume there was no port\n\t\t\t\t}\n\t\t\t\thostLabels := strings.Split(reqHost, \".\")\n\t\t\t\tif idx >= len(hostLabels) {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\treturn strings.ToLower(hostLabels[len(hostLabels)-idx-1]), true\n\t\t\t}\n\n\t\t\t// path parts\n\t\t\tif strings.HasPrefix(key, reqURIPathReplPrefix) {\n\t\t\t\tidxStr := 
key[len(reqURIPathReplPrefix):]\n\t\t\t\tidx, err := strconv.Atoi(idxStr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", false\n\t\t\t\t}\n\t\t\t\tpathParts := strings.Split(req.URL.Path, \"/\")\n\t\t\t\tif len(pathParts) > 0 && pathParts[0] == \"\" {\n\t\t\t\t\tpathParts = pathParts[1:]\n\t\t\t\t}\n\t\t\t\tif idx < 0 {\n\t\t\t\t\treturn \"\", false\n\t\t\t\t}\n\t\t\t\tif idx >= len(pathParts) {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\treturn pathParts[idx], true\n\t\t\t}\n\n\t\t\t// orig uri path parts\n\t\t\tif strings.HasPrefix(key, reqOrigURIPathReplPrefix) {\n\t\t\t\tidxStr := key[len(reqOrigURIPathReplPrefix):]\n\t\t\t\tidx, err := strconv.Atoi(idxStr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", false\n\t\t\t\t}\n\t\t\t\tor, _ := req.Context().Value(OriginalRequestCtxKey).(http.Request)\n\t\t\t\tpathParts := strings.Split(or.URL.Path, \"/\")\n\t\t\t\tif len(pathParts) > 0 && pathParts[0] == \"\" {\n\t\t\t\t\tpathParts = pathParts[1:]\n\t\t\t\t}\n\t\t\t\tif idx < 0 {\n\t\t\t\t\treturn \"\", false\n\t\t\t\t}\n\t\t\t\tif idx >= len(pathParts) {\n\t\t\t\t\treturn \"\", true\n\t\t\t\t}\n\t\t\t\treturn pathParts[idx], true\n\t\t\t}\n\n\t\t\t// middleware variables\n\t\t\tif strings.HasPrefix(key, varsReplPrefix) {\n\t\t\t\tvarName := key[len(varsReplPrefix):]\n\t\t\t\traw := GetVar(req.Context(), varName)\n\t\t\t\t// variables can be dynamic, so always return true\n\t\t\t\t// even when it may not be set; treat as empty then\n\t\t\t\treturn raw, true\n\t\t\t}\n\t\t}\n\n\t\tif w != nil {\n\t\t\t// response header fields\n\t\t\tif strings.HasPrefix(key, respHeaderReplPrefix) {\n\t\t\t\tfield := key[len(respHeaderReplPrefix):]\n\t\t\t\tvals := w.Header()[textproto.CanonicalMIMEHeaderKey(field)]\n\t\t\t\t// always return true, since the header field might\n\t\t\t\t// be present only in some responses\n\t\t\t\treturn strings.Join(vals, \",\"), true\n\t\t\t}\n\t\t}\n\n\t\tswitch key {\n\t\tcase \"http.shutting_down\":\n\t\t\tserver := 
req.Context().Value(ServerCtxKey).(*Server)\n\t\t\tserver.shutdownAtMu.RLock()\n\t\t\tdefer server.shutdownAtMu.RUnlock()\n\t\t\treturn !server.shutdownAt.IsZero(), true\n\t\tcase \"http.time_until_shutdown\":\n\t\t\tserver := req.Context().Value(ServerCtxKey).(*Server)\n\t\t\tserver.shutdownAtMu.RLock()\n\t\t\tdefer server.shutdownAtMu.RUnlock()\n\t\t\tif server.shutdownAt.IsZero() {\n\t\t\t\treturn nil, true\n\t\t\t}\n\t\t\treturn time.Until(server.shutdownAt), true\n\t\t}\n\n\t\treturn nil, false\n\t}\n\n\trepl.Map(httpVars)\n}\n\nfunc getReqTLSReplacement(req *http.Request, key string) (any, bool) {\n\tif req == nil || req.TLS == nil {\n\t\treturn nil, false\n\t}\n\n\tif len(key) < len(reqTLSReplPrefix) {\n\t\treturn nil, false\n\t}\n\n\tfield := strings.ToLower(key[len(reqTLSReplPrefix):])\n\n\tif strings.HasPrefix(field, \"client.\") {\n\t\tcert := getTLSPeerCert(req.TLS)\n\t\tif cert == nil {\n\t\t\t// Instead of returning (nil, false) here, we set it to a dummy\n\t\t\t// value to fix #7530. This way, even if there is no client cert,\n\t\t\t// evaluating placeholders with ReplaceKnown() will still remove\n\t\t\t// the placeholder, which would be expected. It is not expected\n\t\t\t// for the placeholder to sometimes get removed based on whether\n\t\t\t// the client presented a cert. 
We also do not return true here\n\t\t\t// because we probably should remain accurate about whether a\n\t\t\t// placeholder is, in fact, known or not.\n\t\t\t// (This allocation may be slightly inefficient.)\n\t\t\tcert = new(x509.Certificate)\n\t\t}\n\n\t\t// subject alternate names (SANs)\n\t\tif strings.HasPrefix(field, \"client.san.\") {\n\t\t\tfield = field[len(\"client.san.\"):]\n\t\t\tvar fieldName string\n\t\t\tvar fieldValue any\n\t\t\tswitch {\n\t\t\tcase strings.HasPrefix(field, \"dns_names\"):\n\t\t\t\tfieldName = \"dns_names\"\n\t\t\t\tfieldValue = cert.DNSNames\n\t\t\tcase strings.HasPrefix(field, \"emails\"):\n\t\t\t\tfieldName = \"emails\"\n\t\t\t\tfieldValue = cert.EmailAddresses\n\t\t\tcase strings.HasPrefix(field, \"ips\"):\n\t\t\t\tfieldName = \"ips\"\n\t\t\t\tfieldValue = cert.IPAddresses\n\t\t\tcase strings.HasPrefix(field, \"uris\"):\n\t\t\t\tfieldName = \"uris\"\n\t\t\t\tfieldValue = cert.URIs\n\t\t\tdefault:\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\tfield = field[len(fieldName):]\n\n\t\t\t// if no index was specified, return the whole list\n\t\t\tif field == \"\" {\n\t\t\t\treturn fieldValue, true\n\t\t\t}\n\t\t\tif len(field) < 2 || field[0] != '.' {\n\t\t\t\treturn nil, false\n\t\t\t}\n\t\t\tfield = field[1:] // trim '.' 
between field name and index\n\n\t\t\t// get the numeric index\n\t\t\tidx, err := strconv.Atoi(field)\n\t\t\tif err != nil || idx < 0 {\n\t\t\t\treturn nil, false\n\t\t\t}\n\n\t\t\t// access the indexed element and return it\n\t\t\tswitch v := fieldValue.(type) {\n\t\t\tcase []string:\n\t\t\t\tif idx >= len(v) {\n\t\t\t\t\treturn nil, true\n\t\t\t\t}\n\t\t\t\treturn v[idx], true\n\t\t\tcase []net.IP:\n\t\t\t\tif idx >= len(v) {\n\t\t\t\t\treturn nil, true\n\t\t\t\t}\n\t\t\t\treturn v[idx], true\n\t\t\tcase []*url.URL:\n\t\t\t\tif idx >= len(v) {\n\t\t\t\t\treturn nil, true\n\t\t\t\t}\n\t\t\t\treturn v[idx], true\n\t\t\t}\n\t\t}\n\n\t\tswitch field {\n\t\tcase \"client.fingerprint\":\n\t\t\treturn fmt.Sprintf(\"%x\", sha256.Sum256(cert.Raw)), true\n\t\tcase \"client.public_key\", \"client.public_key_sha256\":\n\t\t\tif cert.PublicKey == nil {\n\t\t\t\treturn nil, true\n\t\t\t}\n\t\t\tpubKeyBytes, err := marshalPublicKey(cert.PublicKey)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, true\n\t\t\t}\n\t\t\tif strings.HasSuffix(field, \"_sha256\") {\n\t\t\t\treturn fmt.Sprintf(\"%x\", sha256.Sum256(pubKeyBytes)), true\n\t\t\t}\n\t\t\treturn fmt.Sprintf(\"%x\", pubKeyBytes), true\n\t\tcase \"client.issuer\":\n\t\t\treturn cert.Issuer, true\n\t\tcase \"client.serial\":\n\t\t\treturn cert.SerialNumber, true\n\t\tcase \"client.subject\":\n\t\t\treturn cert.Subject, true\n\t\tcase \"client.certificate_pem\":\n\t\t\tblock := pem.Block{Type: \"CERTIFICATE\", Bytes: cert.Raw}\n\t\t\treturn pem.EncodeToMemory(&block), true\n\t\tcase \"client.certificate_der_base64\":\n\t\t\treturn base64.StdEncoding.EncodeToString(cert.Raw), true\n\t\tdefault:\n\t\t\treturn nil, false\n\t\t}\n\t}\n\n\tswitch field {\n\tcase \"version\":\n\t\treturn caddytls.ProtocolName(req.TLS.Version), true\n\tcase \"cipher_suite\":\n\t\treturn tls.CipherSuiteName(req.TLS.CipherSuite), true\n\tcase \"resumed\":\n\t\treturn req.TLS.DidResume, true\n\tcase \"proto\":\n\t\treturn req.TLS.NegotiatedProtocol, 
true\n\tcase \"proto_mutual\":\n\t\t// req.TLS.NegotiatedProtocolIsMutual is deprecated - it's always true.\n\t\treturn true, true\n\tcase \"server_name\":\n\t\treturn req.TLS.ServerName, true\n\tcase \"ech\":\n\t\treturn req.TLS.ECHAccepted, true\n\t}\n\treturn nil, false\n}\n\n// marshalPublicKey returns the byte encoding of pubKey.\nfunc marshalPublicKey(pubKey any) ([]byte, error) {\n\tswitch key := pubKey.(type) {\n\tcase *rsa.PublicKey:\n\t\treturn asn1.Marshal(key)\n\tcase *ecdsa.PublicKey:\n\t\te, err := key.ECDH()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn e.Bytes(), nil\n\tcase ed25519.PublicKey:\n\t\treturn key, nil\n\t}\n\treturn nil, fmt.Errorf(\"unrecognized public key type: %T\", pubKey)\n}\n\n// getTLSPeerCert retrieves the first peer certificate from a TLS session.\n// Returns nil if no peer cert is in use.\nfunc getTLSPeerCert(cs *tls.ConnectionState) *x509.Certificate {\n\tif len(cs.PeerCertificates) == 0 {\n\t\treturn nil\n\t}\n\treturn cs.PeerCertificates[0]\n}\n\ntype requestID struct {\n\tvalue string\n}\n\n// String lazily generates a UUID string, or returns the cached value if present\nfunc (rid *requestID) String() string {\n\tif rid.value == \"\" {\n\t\tif id, err := uuid.NewRandom(); err == nil {\n\t\t\trid.value = id.String()\n\t\t}\n\t}\n\treturn rid.value\n}\n\nconst (\n\treqCookieReplPrefix      = \"http.request.cookie.\"\n\treqHeaderReplPrefix      = \"http.request.header.\"\n\treqHostLabelsReplPrefix  = \"http.request.host.labels.\"\n\treqTLSReplPrefix         = \"http.request.tls.\"\n\treqURIPathReplPrefix     = \"http.request.uri.path.\"\n\treqURIQueryReplPrefix    = \"http.request.uri.query.\"\n\trespHeaderReplPrefix     = \"http.response.header.\"\n\tvarsReplPrefix           = \"http.vars.\"\n\treqOrigURIPathReplPrefix = \"http.request.orig_uri.path.\"\n)\n"
  },
  {
    "path": "modules/caddyhttp/replacer_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestHTTPVarReplacement(t *testing.T) {\n\treq, _ := http.NewRequest(http.MethodGet, \"/foo/bar.tar.gz?a=1&b=2\", nil)\n\trepl := caddy.NewReplacer()\n\tlocalAddr, _ := net.ResolveTCPAddr(\"tcp\", \"192.168.159.1:80\")\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\tctx = context.WithValue(ctx, http.LocalAddrContextKey, localAddr)\n\treq = req.WithContext(ctx)\n\treq.Host = \"example.com:80\"\n\treq.RemoteAddr = \"192.168.159.32:1234\"\n\n\tclientCert := []byte(`-----BEGIN 
CERTIFICATE-----\nMIIB9jCCAV+gAwIBAgIBAjANBgkqhkiG9w0BAQsFADAYMRYwFAYDVQQDDA1DYWRk\neSBUZXN0IENBMB4XDTE4MDcyNDIxMzUwNVoXDTI4MDcyMTIxMzUwNVowHTEbMBkG\nA1UEAwwSY2xpZW50LmxvY2FsZG9tYWluMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB\niQKBgQDFDEpzF0ew68teT3xDzcUxVFaTII+jXH1ftHXxxP4BEYBU4q90qzeKFneF\nz83I0nC0WAQ45ZwHfhLMYHFzHPdxr6+jkvKPASf0J2v2HDJuTM1bHBbik5Ls5eq+\nfVZDP8o/VHKSBKxNs8Goc2NTsr5b07QTIpkRStQK+RJALk4x9QIDAQABo0swSTAJ\nBgNVHRMEAjAAMAsGA1UdDwQEAwIHgDAaBgNVHREEEzARgglsb2NhbGhvc3SHBH8A\nAAEwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDQYJKoZIhvcNAQELBQADgYEANSjz2Sk+\neqp31wM9il1n+guTNyxJd+FzVAH+hCZE5K+tCgVDdVFUlDEHHbS/wqb2PSIoouLV\n3Q9fgDkiUod+uIK0IynzIKvw+Cjg+3nx6NQ0IM0zo8c7v398RzB4apbXKZyeeqUH\n9fNwfEi+OoXR6s+upSKobCmLGLGi9Na5s5g=\n-----END CERTIFICATE-----`)\n\n\tblock, _ := pem.Decode(clientCert)\n\tif block == nil {\n\t\tt.Fatalf(\"failed to decode PEM certificate\")\n\t}\n\n\tcert, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to decode PEM certificate: %v\", err)\n\t}\n\n\treq.TLS = &tls.ConnectionState{\n\t\tVersion:                    tls.VersionTLS13,\n\t\tHandshakeComplete:          true,\n\t\tServerName:                 \"example.com\",\n\t\tCipherSuite:                tls.TLS_AES_256_GCM_SHA384,\n\t\tPeerCertificates:           []*x509.Certificate{cert},\n\t\tNegotiatedProtocol:         \"h2\",\n\t\tNegotiatedProtocolIsMutual: true,\n\t}\n\n\tres := httptest.NewRecorder()\n\taddHTTPVarsToReplacer(repl, req, res)\n\n\tfor i, tc := range []struct {\n\t\tget    string\n\t\texpect string\n\t}{\n\t\t{\n\t\t\tget:    \"http.request.scheme\",\n\t\t\texpect: \"https\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.method\",\n\t\t\texpect: http.MethodGet,\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.host\",\n\t\t\texpect: \"example.com\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.port\",\n\t\t\texpect: \"80\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.hostport\",\n\t\t\texpect: \"example.com:80\",\n\t\t},\n\t\t{\n\t\t\tget:    
\"http.request.local.host\",\n\t\t\texpect: \"192.168.159.1\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.local.port\",\n\t\t\texpect: \"80\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.local\",\n\t\t\texpect: \"192.168.159.1:80\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.remote.host\",\n\t\t\texpect: \"192.168.159.32\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.remote.host/24\",\n\t\t\texpect: \"192.168.159.0/24\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.remote.host/24,32\",\n\t\t\texpect: \"192.168.159.0/24\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.remote.host/999\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.remote.port\",\n\t\t\texpect: \"1234\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.host.labels.0\",\n\t\t\texpect: \"com\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.host.labels.1\",\n\t\t\texpect: \"example\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.host.labels.2\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri\",\n\t\t\texpect: \"/foo/bar.tar.gz?a=1&b=2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri_escaped\",\n\t\t\texpect: \"%2Ffoo%2Fbar.tar.gz%3Fa%3D1%26b%3D2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.path\",\n\t\t\texpect: \"/foo/bar.tar.gz\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.path_escaped\",\n\t\t\texpect: \"%2Ffoo%2Fbar.tar.gz\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.path.file\",\n\t\t\texpect: \"bar.tar.gz\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.path.file.base\",\n\t\t\texpect: \"bar.tar\",\n\t\t},\n\t\t{\n\t\t\t// not ideal, but also most correct, given that files can have dots (example: index.<SHA>.html) TODO: maybe this isn't right..\n\t\t\tget:    \"http.request.uri.path.file.ext\",\n\t\t\texpect: \".gz\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.query\",\n\t\t\texpect: \"a=1&b=2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.query_escaped\",\n\t\t\texpect: \"a%3D1%26b%3D2\",\n\t\t},\n\t\t{\n\t\t\tget:   
 \"http.request.uri.query.a\",\n\t\t\texpect: \"1\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.query.b\",\n\t\t\texpect: \"2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.uri.prefixed_query\",\n\t\t\texpect: \"?a=1&b=2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.cipher_suite\",\n\t\t\texpect: \"TLS_AES_256_GCM_SHA384\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.proto\",\n\t\t\texpect: \"h2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.proto_mutual\",\n\t\t\texpect: \"true\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.resumed\",\n\t\t\texpect: \"false\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.server_name\",\n\t\t\texpect: \"example.com\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.version\",\n\t\t\texpect: \"tls1.3\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.fingerprint\",\n\t\t\texpect: \"9f57b7b497cceacc5459b76ac1c3afedbc12b300e728071f55f84168ff0f7702\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.issuer\",\n\t\t\texpect: \"CN=Caddy Test CA\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.serial\",\n\t\t\texpect: \"2\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.subject\",\n\t\t\texpect: \"CN=client.localdomain\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.san.dns_names\",\n\t\t\texpect: \"[localhost]\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.san.dns_names.0\",\n\t\t\texpect: \"localhost\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.san.dns_names.1\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.san.ips\",\n\t\t\texpect: \"[127.0.0.1]\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.san.ips.0\",\n\t\t\texpect: \"127.0.0.1\",\n\t\t},\n\t\t{\n\t\t\tget:    \"http.request.tls.client.certificate_pem\",\n\t\t\texpect: string(clientCert) + \"\\n\", // returned value comes with a newline appended to it\n\t\t},\n\t} {\n\t\tactual, got := repl.GetString(tc.get)\n\t\tif !got 
{\n\t\t\tt.Errorf(\"Test %d: Expected to recognize the placeholder name, but didn't\", i)\n\t\t}\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %s to be '%s' but got '%s'\",\n\t\t\t\ti, tc.get, tc.expect, actual)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/requestbody/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage requestbody\n\nimport (\n\t\"time\"\n\n\t\"github.com/dustin/go-humanize\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"request_body\", parseCaddyfile)\n}\n\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\n\trb := new(RequestBody)\n\n\t// configuration should be in a block\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"max_size\":\n\t\t\tvar sizeStr string\n\t\t\tif !h.AllArgs(&sizeStr) {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tsize, err := humanize.ParseBytes(sizeStr)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"parsing max_size: %v\", err)\n\t\t\t}\n\t\t\trb.MaxSize = int64(size)\n\n\t\tcase \"read_timeout\":\n\t\t\tvar timeoutStr string\n\t\t\tif !h.AllArgs(&timeoutStr) {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\ttimeout, err := time.ParseDuration(timeoutStr)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"parsing read_timeout: %v\", err)\n\t\t\t}\n\t\t\trb.ReadTimeout = timeout\n\n\t\tcase \"write_timeout\":\n\t\t\tvar timeoutStr string\n\t\t\tif !h.AllArgs(&timeoutStr) {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\ttimeout, err := 
time.ParseDuration(timeoutStr)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"parsing write_timeout: %v\", err)\n\t\t\t}\n\t\t\trb.WriteTimeout = timeout\n\n\t\tcase \"set\":\n\t\t\tvar setStr string\n\t\t\tif !h.AllArgs(&setStr) {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\trb.Set = setStr\n\t\tdefault:\n\t\t\treturn nil, h.Errf(\"unrecognized request_body subdirective '%s'\", h.Val())\n\t\t}\n\t}\n\n\treturn rb, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/requestbody/requestbody.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage requestbody\n\nimport (\n\t\"errors\"\n\t\"io\"\n\t\"net/http\"\n\t\"strings\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(RequestBody{})\n}\n\n// RequestBody is a middleware for manipulating the request body.\ntype RequestBody struct {\n\t// The maximum number of bytes to allow reading from the body by a later handler.\n\t// If more bytes are read, an error with HTTP status 413 is returned.\n\tMaxSize int64 `json:\"max_size,omitempty\"`\n\n\t// EXPERIMENTAL. Subject to change/removal.\n\tReadTimeout time.Duration `json:\"read_timeout,omitempty\"`\n\n\t// EXPERIMENTAL. Subject to change/removal.\n\tWriteTimeout time.Duration `json:\"write_timeout,omitempty\"`\n\n\t// This field permit to replace body on the fly\n\t// EXPERIMENTAL. 
Subject to change/removal.\n\tSet string `json:\"set,omitempty\"`\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (RequestBody) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.request_body\",\n\t\tNew: func() caddy.Module { return new(RequestBody) },\n\t}\n}\n\nfunc (rb *RequestBody) Provision(ctx caddy.Context) error {\n\trb.logger = ctx.Logger()\n\treturn nil\n}\n\nfunc (rb RequestBody) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tif rb.Set != \"\" {\n\t\tif r.Body != nil {\n\t\t\terr := r.Body.Close()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\t\treplacedBody := repl.ReplaceAll(rb.Set, \"\")\n\t\tr.Body = io.NopCloser(strings.NewReader(replacedBody))\n\t\tr.ContentLength = int64(len(replacedBody))\n\t}\n\tif r.Body == nil {\n\t\treturn next.ServeHTTP(w, r)\n\t}\n\tif rb.MaxSize > 0 {\n\t\tr.Body = errorWrapper{http.MaxBytesReader(w, r.Body, rb.MaxSize)}\n\t}\n\tif rb.ReadTimeout > 0 || rb.WriteTimeout > 0 {\n\t\t//nolint:bodyclose\n\t\trc := http.NewResponseController(w)\n\t\tif rb.ReadTimeout > 0 {\n\t\t\tif err := rc.SetReadDeadline(time.Now().Add(rb.ReadTimeout)); err != nil {\n\t\t\t\tif c := rb.logger.Check(zapcore.ErrorLevel, \"could not set read deadline\"); c != nil {\n\t\t\t\t\tc.Write(zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif rb.WriteTimeout > 0 {\n\t\t\tif err := rc.SetWriteDeadline(time.Now().Add(rb.WriteTimeout)); err != nil {\n\t\t\t\tif c := rb.logger.Check(zapcore.ErrorLevel, \"could not set write deadline\"); c != nil {\n\t\t\t\t\tc.Write(zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn next.ServeHTTP(w, r)\n}\n\n// errorWrapper wraps errors that are returned from Read()\n// so that they can be associated with a proper status code.\ntype errorWrapper struct {\n\tio.ReadCloser\n}\n\nfunc (ew errorWrapper) Read(p []byte) (n 
int, err error) {\n\tn, err = ew.ReadCloser.Read(p)\n\tvar mbe *http.MaxBytesError\n\tif errors.As(err, &mbe) {\n\t\terr = caddyhttp.Error(http.StatusRequestEntityTooLarge, err)\n\t}\n\treturn n, err\n}\n\n// Interface guard\nvar _ caddyhttp.MiddlewareHandler = (*RequestBody)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/responsematchers.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"net/http\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\n// ResponseMatcher is a type which can determine if an\n// HTTP response matches some criteria.\ntype ResponseMatcher struct {\n\t// If set, one of these status codes would be required.\n\t// A one-digit status can be used to represent all codes\n\t// in that class (e.g. 
3 for all 3xx codes).\n\tStatusCode []int `json:\"status_code,omitempty\"`\n\n\t// If set, each header specified must be one of the\n\t// specified values, with the same logic used by the\n\t// [request header matcher](/docs/json/apps/http/servers/routes/match/header/).\n\tHeaders http.Header `json:\"headers,omitempty\"`\n}\n\n// Match returns true if the given statusCode and hdr match rm.\nfunc (rm ResponseMatcher) Match(statusCode int, hdr http.Header) bool {\n\tif !rm.matchStatusCode(statusCode) {\n\t\treturn false\n\t}\n\treturn matchHeaders(hdr, rm.Headers, \"\", []string{}, nil)\n}\n\nfunc (rm ResponseMatcher) matchStatusCode(statusCode int) bool {\n\tif rm.StatusCode == nil {\n\t\treturn true\n\t}\n\tfor _, code := range rm.StatusCode {\n\t\tif StatusCodeMatches(statusCode, code) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// ParseNamedResponseMatcher parses the tokens of a named response matcher.\n//\n//\t@name {\n//\t    header <field> [<value>]\n//\t    status <code...>\n//\t}\n//\n// Or, single line syntax:\n//\n//\t@name [header <field> [<value>]] | [status <code...>]\nfunc ParseNamedResponseMatcher(d *caddyfile.Dispenser, matchers map[string]ResponseMatcher) error {\n\td.Next() // consume matcher name\n\tdefinitionName := d.Val()\n\n\tif _, ok := matchers[definitionName]; ok {\n\t\treturn d.Errf(\"matcher is defined more than once: %s\", definitionName)\n\t}\n\n\tmatcher := ResponseMatcher{}\n\tfor nesting := d.Nesting(); d.NextArg() || d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"header\":\n\t\t\tif matcher.Headers == nil {\n\t\t\t\tmatcher.Headers = http.Header{}\n\t\t\t}\n\n\t\t\t// reuse the header request matcher's unmarshaler\n\t\t\theaderMatcher := MatchHeader(matcher.Headers)\n\t\t\terr := headerMatcher.UnmarshalCaddyfile(d.NewFromNextSegment())\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tmatcher.Headers = http.Header(headerMatcher)\n\t\tcase \"status\":\n\t\t\tif matcher.StatusCode == nil 
{\n\t\t\t\tmatcher.StatusCode = []int{}\n\t\t\t}\n\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tfor _, arg := range args {\n\t\t\t\tif len(arg) == 3 && strings.HasSuffix(arg, \"xx\") {\n\t\t\t\t\targ = arg[:1]\n\t\t\t\t}\n\t\t\t\tstatusNum, err := strconv.Atoi(arg)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"bad status value '%s': %v\", arg, err)\n\t\t\t\t}\n\t\t\t\tmatcher.StatusCode = append(matcher.StatusCode, statusNum)\n\t\t\t}\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized response matcher %s\", d.Val())\n\t\t}\n\t}\n\tmatchers[definitionName] = matcher\n\treturn nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/responsematchers_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"net/http\"\n\t\"testing\"\n)\n\nfunc TestResponseMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\trequire ResponseMatcher\n\t\tstatus  int\n\t\thdr     http.Header // make sure these are canonical cased (std lib will do that in a real request)\n\t\texpect  bool\n\t}{\n\t\t{\n\t\t\trequire: ResponseMatcher{},\n\t\t\tstatus:  200,\n\t\t\texpect:  true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{200},\n\t\t\t},\n\t\t\tstatus: 200,\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{2},\n\t\t\t},\n\t\t\tstatus: 200,\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{201},\n\t\t\t},\n\t\t\tstatus: 200,\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{2},\n\t\t\t},\n\t\t\tstatus: 301,\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{3},\n\t\t\t},\n\t\t\tstatus: 301,\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{3},\n\t\t\t},\n\t\t\tstatus: 399,\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{3},\n\t\t\t},\n\t\t\tstatus: 400,\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\trequire: 
ResponseMatcher{\n\t\t\t\tStatusCode: []int{3, 4},\n\t\t\t},\n\t\t\tstatus: 400,\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tStatusCode: []int{3, 401},\n\t\t\t},\n\t\t\tstatus: 401,\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo\": []string{\"bar\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"bar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo2\": []string{\"bar\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"bar\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo\": []string{\"bar\", \"baz\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"baz\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo\":  []string{\"bar\"},\n\t\t\t\t\t\"Foo2\": []string{\"baz\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"baz\"}},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo\":  []string{\"bar\"},\n\t\t\t\t\t\"Foo2\": []string{\"baz\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"bar\"}, \"Foo2\": []string{\"baz\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo\": []string{\"foo*\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"foobar\"}},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\trequire: ResponseMatcher{\n\t\t\t\tHeaders: http.Header{\n\t\t\t\t\t\"Foo\": []string{\"foo*\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\thdr:    http.Header{\"Foo\": []string{\"foobar\"}},\n\t\t\texpect: true,\n\t\t},\n\t} {\n\t\tactual := tc.require.Match(tc.status, tc.hdr)\n\t\tif actual != tc.expect 
{\n\t\t\tt.Errorf(\"Test %d %v: Expected %t, got %t for HTTP %d %v\", i, tc.require, tc.expect, actual, tc.status, tc.hdr)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/responsewriter.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n)\n\n// ResponseWriterWrapper wraps an underlying ResponseWriter and\n// promotes its Pusher method as well. To use this type, embed\n// a pointer to it within your own struct type that implements\n// the http.ResponseWriter interface, then call methods on the\n// embedded value.\ntype ResponseWriterWrapper struct {\n\thttp.ResponseWriter\n}\n\n// Push implements http.Pusher. It simply calls the underlying\n// ResponseWriter's Push method if there is one, or returns\n// ErrNotImplemented otherwise.\nfunc (rww *ResponseWriterWrapper) Push(target string, opts *http.PushOptions) error {\n\tif pusher, ok := rww.ResponseWriter.(http.Pusher); ok {\n\t\treturn pusher.Push(target, opts)\n\t}\n\treturn ErrNotImplemented\n}\n\n// ReadFrom implements io.ReaderFrom. 
It tries to use io.ReaderFrom if available,\n// then falls back to io.Copy.\n// see: https://github.com/caddyserver/caddy/issues/6546\nfunc (rww *ResponseWriterWrapper) ReadFrom(r io.Reader) (n int64, err error) {\n\tif rf, ok := rww.ResponseWriter.(io.ReaderFrom); ok {\n\t\treturn rf.ReadFrom(r)\n\t}\n\treturn io.Copy(rww.ResponseWriter, r)\n}\n\n// Unwrap returns the underlying ResponseWriter, necessary for\n// http.ResponseController to work correctly.\nfunc (rww *ResponseWriterWrapper) Unwrap() http.ResponseWriter {\n\treturn rww.ResponseWriter\n}\n\n// ErrNotImplemented is returned when an underlying\n// ResponseWriter does not implement the required method.\nvar ErrNotImplemented = fmt.Errorf(\"method not implemented\")\n\ntype responseRecorder struct {\n\t*ResponseWriterWrapper\n\tstatusCode   int\n\tbuf          *bytes.Buffer\n\tshouldBuffer ShouldBufferFunc\n\tsize         int\n\twroteHeader  bool\n\tstream       bool\n\n\treadSize *int\n}\n\n// NewResponseRecorder returns a new ResponseRecorder that can be\n// used instead of a standard http.ResponseWriter. The recorder is\n// useful for middlewares which need to buffer a response and\n// potentially process its entire body before actually writing the\n// response to the underlying writer. Of course, buffering the entire\n// body has a memory overhead, but sometimes there is no way to avoid\n// buffering the whole response, hence the existence of this type.\n// Still, if at all practical, handlers should strive to stream\n// responses by wrapping Write and WriteHeader methods instead of\n// buffering whole response bodies.\n//\n// Buffering is actually optional. The shouldBuffer function will\n// be called just before the headers are written. If it returns\n// true, the headers and body will be buffered by this recorder\n// and not written to the underlying writer; if false, the headers\n// will be written immediately and the body will be streamed out\n// directly to the underlying writer. 
If shouldBuffer is nil,\n// the response will never be buffered and will always be streamed\n// directly to the writer.\n//\n// You can know if shouldBuffer returned true by calling Buffered().\n//\n// The provided buffer buf should be obtained from a pool for best\n// performance (see the sync.Pool type).\n//\n// Proper usage of a recorder looks like this:\n//\n//\trec := caddyhttp.NewResponseRecorder(w, buf, shouldBuffer)\n//\terr := next.ServeHTTP(rec, req)\n//\tif err != nil {\n//\t    return err\n//\t}\n//\tif !rec.Buffered() {\n//\t    return nil\n//\t}\n//\t// process the buffered response here\n//\n// The header map is not buffered; i.e. the ResponseRecorder's Header()\n// method returns the same header map of the underlying ResponseWriter.\n// This is a crucial design decision to allow HTTP trailers to be\n// flushed properly (https://github.com/caddyserver/caddy/issues/3236).\n//\n// Once you are ready to write the response, there are two ways you can\n// do it. The easier way is to have the recorder do it:\n//\n//\trec.WriteResponse()\n//\n// This writes the recorded response headers as well as the buffered body.\n// Or, you may wish to do it yourself, especially if you manipulated the\n// buffered body. 
First you will need to write the headers with the\n// recorded status code, then write the body (this example writes the\n// recorder's body buffer, but you might have your own body to write\n// instead):\n//\n//\tw.WriteHeader(rec.Status())\n//\tio.Copy(w, rec.Buffer())\n//\n// As a special case, 1xx responses are not buffered nor recorded\n// because they are not the final response; they are passed through\n// directly to the underlying ResponseWriter.\nfunc NewResponseRecorder(w http.ResponseWriter, buf *bytes.Buffer, shouldBuffer ShouldBufferFunc) ResponseRecorder {\n\treturn &responseRecorder{\n\t\tResponseWriterWrapper: &ResponseWriterWrapper{ResponseWriter: w},\n\t\tbuf:                   buf,\n\t\tshouldBuffer:          shouldBuffer,\n\t}\n}\n\n// WriteHeader writes the headers with statusCode to the wrapped\n// ResponseWriter unless the response is to be buffered instead.\n// 1xx responses are never buffered.\nfunc (rr *responseRecorder) WriteHeader(statusCode int) {\n\tif rr.wroteHeader {\n\t\treturn\n\t}\n\n\t// save statusCode always, in case HTTP middleware upgrades websocket\n\t// connections by manually setting headers and writing status 101\n\trr.statusCode = statusCode\n\n\t// decide whether we should buffer the response\n\tif rr.shouldBuffer == nil {\n\t\trr.stream = true\n\t} else {\n\t\trr.stream = !rr.shouldBuffer(rr.statusCode, rr.ResponseWriterWrapper.Header())\n\t}\n\n\t// 1xx responses aren't final; just informational\n\tif statusCode < 100 || statusCode > 199 {\n\t\trr.wroteHeader = true\n\t}\n\n\t// if informational or not buffered, immediately write header\n\tif rr.stream || (100 <= statusCode && statusCode <= 199) {\n\t\trr.ResponseWriterWrapper.WriteHeader(statusCode)\n\t}\n}\n\nfunc (rr *responseRecorder) Write(data []byte) (int, error) {\n\trr.WriteHeader(http.StatusOK)\n\tvar n int\n\tvar err error\n\tif rr.stream {\n\t\tn, err = rr.ResponseWriterWrapper.Write(data)\n\t} else {\n\t\tn, err = rr.buf.Write(data)\n\t}\n\n\trr.size += 
n\n\treturn n, err\n}\n\nfunc (rr *responseRecorder) ReadFrom(r io.Reader) (int64, error) {\n\trr.WriteHeader(http.StatusOK)\n\tvar n int64\n\tvar err error\n\tif rr.stream {\n\t\tn, err = rr.ResponseWriterWrapper.ReadFrom(r)\n\t} else {\n\t\tn, err = rr.buf.ReadFrom(r)\n\t}\n\n\trr.size += int(n)\n\treturn n, err\n}\n\n// Status returns the status code that was written, if any.\nfunc (rr *responseRecorder) Status() int {\n\treturn rr.statusCode\n}\n\n// Size returns the number of bytes written,\n// not including the response headers.\nfunc (rr *responseRecorder) Size() int {\n\treturn rr.size\n}\n\n// Buffer returns the body buffer that rr was created with.\n// You should still have your original pointer, though.\nfunc (rr *responseRecorder) Buffer() *bytes.Buffer {\n\treturn rr.buf\n}\n\n// Buffered returns whether rr has decided to buffer the response.\nfunc (rr *responseRecorder) Buffered() bool {\n\treturn !rr.stream\n}\n\nfunc (rr *responseRecorder) WriteResponse() error {\n\tif rr.statusCode == 0 {\n\t\t// could happen if no handlers actually wrote anything,\n\t\t// and this prevents a panic; status must be > 0\n\t\trr.WriteHeader(http.StatusOK)\n\t}\n\tif rr.stream {\n\t\treturn nil\n\t}\n\trr.ResponseWriterWrapper.WriteHeader(rr.statusCode)\n\t_, err := io.Copy(rr.ResponseWriterWrapper, rr.buf)\n\treturn err\n}\n\n// FlushError will suppress actual flushing if the response is buffered. 
See:\n// https://github.com/caddyserver/caddy/issues/6144\nfunc (rr *responseRecorder) FlushError() error {\n\tif rr.stream {\n\t\t//nolint:bodyclose\n\t\treturn http.NewResponseController(rr.ResponseWriterWrapper).Flush()\n\t}\n\treturn nil\n}\n\n// Private interface so it can only be used in this package\n// #TODO: maybe export it later\nfunc (rr *responseRecorder) setReadSize(size *int) {\n\trr.readSize = size\n}\n\nfunc (rr *responseRecorder) Hijack() (net.Conn, *bufio.ReadWriter, error) {\n\t//nolint:bodyclose\n\tconn, brw, err := http.NewResponseController(rr.ResponseWriterWrapper).Hijack()\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\t// Per the http documentation, the returned bufio.Writer is empty, but the bufio.Reader may not be\n\tconn = &hijackedConn{conn, rr}\n\tbrw.Writer.Reset(conn)\n\n\tbuffered := brw.Reader.Buffered()\n\tif buffered != 0 {\n\t\tconn.(*hijackedConn).updateReadSize(buffered)\n\t\tdata, _ := brw.Peek(buffered)\n\t\tbrw.Reader.Reset(io.MultiReader(bytes.NewReader(data), conn))\n\t\t// peek to make buffered data appear, as Reset will make it 0\n\t\t_, _ = brw.Peek(buffered)\n\t} else {\n\t\tbrw.Reader.Reset(conn)\n\t}\n\treturn conn, brw, nil\n}\n\n// used to track the size of hijacked response writers\ntype hijackedConn struct {\n\tnet.Conn\n\trr *responseRecorder\n}\n\nfunc (hc *hijackedConn) updateReadSize(n int) {\n\tif hc.rr.readSize != nil {\n\t\t*hc.rr.readSize += n\n\t}\n}\n\nfunc (hc *hijackedConn) Read(p []byte) (int, error) {\n\tn, err := hc.Conn.Read(p)\n\thc.updateReadSize(n)\n\treturn n, err\n}\n\nfunc (hc *hijackedConn) WriteTo(w io.Writer) (int64, error) {\n\tn, err := io.Copy(w, hc.Conn)\n\thc.updateReadSize(int(n))\n\treturn n, err\n}\n\nfunc (hc *hijackedConn) Write(p []byte) (int, error) {\n\tn, err := hc.Conn.Write(p)\n\thc.rr.size += n\n\treturn n, err\n}\n\nfunc (hc *hijackedConn) ReadFrom(r io.Reader) (int64, error) {\n\tn, err := io.Copy(hc.Conn, r)\n\thc.rr.size += int(n)\n\treturn n, err\n}\n\n// ResponseRecorder is 
a http.ResponseWriter that records\n// responses instead of writing them to the client. See\n// docs for NewResponseRecorder for proper usage.\ntype ResponseRecorder interface {\n\thttp.ResponseWriter\n\tStatus() int\n\tBuffer() *bytes.Buffer\n\tBuffered() bool\n\tSize() int\n\tWriteResponse() error\n}\n\n// ShouldBufferFunc is a function that returns true if the\n// response should be buffered, given the pending HTTP status\n// code and response headers.\ntype ShouldBufferFunc func(status int, header http.Header) bool\n\n// Interface guards\nvar (\n\t_ http.ResponseWriter = (*ResponseWriterWrapper)(nil)\n\t_ ResponseRecorder    = (*responseRecorder)(nil)\n\n\t// Implementing ReaderFrom can be such a significant\n\t// optimization that it should probably be required!\n\t// see PR #5022 (25%-50% speedup)\n\t_ io.ReaderFrom = (*ResponseWriterWrapper)(nil)\n\t_ io.ReaderFrom = (*responseRecorder)(nil)\n\t_ io.ReaderFrom = (*hijackedConn)(nil)\n\n\t_ io.WriterTo = (*hijackedConn)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/responsewriter_test.go",
    "content": "package caddyhttp\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"net/http\"\n\t\"strings\"\n\t\"testing\"\n)\n\ntype responseWriterSpy interface {\n\thttp.ResponseWriter\n\tWritten() string\n\tCalledReadFrom() bool\n}\n\nvar (\n\t_ responseWriterSpy = (*baseRespWriter)(nil)\n\t_ responseWriterSpy = (*readFromRespWriter)(nil)\n)\n\n// a barebones http.ResponseWriter mock\ntype baseRespWriter []byte\n\nfunc (brw *baseRespWriter) Write(d []byte) (int, error) {\n\t*brw = append(*brw, d...)\n\treturn len(d), nil\n}\nfunc (brw *baseRespWriter) Header() http.Header        { return nil }\nfunc (brw *baseRespWriter) WriteHeader(statusCode int) {}\nfunc (brw *baseRespWriter) Written() string            { return string(*brw) }\nfunc (brw *baseRespWriter) CalledReadFrom() bool       { return false }\n\n// an http.ResponseWriter mock that supports ReadFrom\ntype readFromRespWriter struct {\n\tbaseRespWriter\n\tcalled bool\n}\n\nfunc (rf *readFromRespWriter) ReadFrom(r io.Reader) (int64, error) {\n\trf.called = true\n\treturn io.Copy(&rf.baseRespWriter, r)\n}\n\nfunc (rf *readFromRespWriter) CalledReadFrom() bool { return rf.called }\n\nfunc TestResponseWriterWrapperReadFrom(t *testing.T) {\n\ttests := map[string]struct {\n\t\tresponseWriter responseWriterSpy\n\t\twantReadFrom   bool\n\t}{\n\t\t\"no ReadFrom\": {\n\t\t\tresponseWriter: &baseRespWriter{},\n\t\t\twantReadFrom:   false,\n\t\t},\n\t\t\"has ReadFrom\": {\n\t\t\tresponseWriter: &readFromRespWriter{},\n\t\t\twantReadFrom:   true,\n\t\t},\n\t}\n\tfor name, tt := range tests {\n\t\tt.Run(name, func(t *testing.T) {\n\t\t\t// what we expect middlewares to do:\n\t\t\ttype myWrapper struct {\n\t\t\t\t*ResponseWriterWrapper\n\t\t\t}\n\n\t\t\twrapped := myWrapper{\n\t\t\t\tResponseWriterWrapper: &ResponseWriterWrapper{ResponseWriter: tt.responseWriter},\n\t\t\t}\n\n\t\t\tconst srcData = \"boo!\"\n\t\t\t// hides everything but Read, since strings.Reader implements WriteTo it would\n\t\t\t// take precedence over our 
ReadFrom.\n\t\t\tsrc := struct{ io.Reader }{strings.NewReader(srcData)}\n\n\t\t\tif _, err := io.Copy(wrapped, src); err != nil {\n\t\t\t\tt.Errorf(\"%s: Copy() err = %v\", name, err)\n\t\t\t}\n\n\t\t\tif got := tt.responseWriter.Written(); got != srcData {\n\t\t\t\tt.Errorf(\"%s: data = %q, want %q\", name, got, srcData)\n\t\t\t}\n\n\t\t\tif tt.responseWriter.CalledReadFrom() != tt.wantReadFrom {\n\t\t\t\tif tt.wantReadFrom {\n\t\t\t\t\tt.Errorf(\"%s: ReadFrom() should have been called\", name)\n\t\t\t\t} else {\n\t\t\t\t\tt.Errorf(\"%s: ReadFrom() should not have been called\", name)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestResponseWriterWrapperUnwrap(t *testing.T) {\n\tw := &ResponseWriterWrapper{&baseRespWriter{}}\n\n\tif _, ok := w.Unwrap().(*baseRespWriter); !ok {\n\t\tt.Errorf(\"Unwrap() doesn't return the underlying ResponseWriter\")\n\t}\n}\n\nfunc TestResponseRecorderReadFrom(t *testing.T) {\n\ttests := map[string]struct {\n\t\tresponseWriter responseWriterSpy\n\t\tshouldBuffer   bool\n\t\twantReadFrom   bool\n\t}{\n\t\t\"buffered plain\": {\n\t\t\tresponseWriter: &baseRespWriter{},\n\t\t\tshouldBuffer:   true,\n\t\t\twantReadFrom:   false,\n\t\t},\n\t\t\"streamed plain\": {\n\t\t\tresponseWriter: &baseRespWriter{},\n\t\t\tshouldBuffer:   false,\n\t\t\twantReadFrom:   false,\n\t\t},\n\t\t\"buffered ReadFrom\": {\n\t\t\tresponseWriter: &readFromRespWriter{},\n\t\t\tshouldBuffer:   true,\n\t\t\twantReadFrom:   false,\n\t\t},\n\t\t\"streamed ReadFrom\": {\n\t\t\tresponseWriter: &readFromRespWriter{},\n\t\t\tshouldBuffer:   false,\n\t\t\twantReadFrom:   true,\n\t\t},\n\t}\n\tfor name, tt := range tests {\n\t\tt.Run(name, func(t *testing.T) {\n\t\t\tvar buf bytes.Buffer\n\n\t\t\trr := NewResponseRecorder(tt.responseWriter, &buf, func(status int, header http.Header) bool {\n\t\t\t\treturn tt.shouldBuffer\n\t\t\t})\n\n\t\t\tconst srcData = \"boo!\"\n\t\t\t// hides everything but Read, since strings.Reader implements WriteTo it would\n\t\t\t// take 
precedence over our ReadFrom.\n\t\t\tsrc := struct{ io.Reader }{strings.NewReader(srcData)}\n\n\t\t\tif _, err := io.Copy(rr, src); err != nil {\n\t\t\t\tt.Errorf(\"Copy() err = %v\", err)\n\t\t\t}\n\n\t\t\twantStreamed := srcData\n\t\t\twantBuffered := \"\"\n\t\t\tif tt.shouldBuffer {\n\t\t\t\twantStreamed = \"\"\n\t\t\t\twantBuffered = srcData\n\t\t\t}\n\n\t\t\tif got := tt.responseWriter.Written(); got != wantStreamed {\n\t\t\t\tt.Errorf(\"streamed data = %q, want %q\", got, wantStreamed)\n\t\t\t}\n\t\t\tif got := buf.String(); got != wantBuffered {\n\t\t\t\tt.Errorf(\"buffered data = %q, want %q\", got, wantBuffered)\n\t\t\t}\n\n\t\t\tif tt.responseWriter.CalledReadFrom() != tt.wantReadFrom {\n\t\t\t\tif tt.wantReadFrom {\n\t\t\t\t\tt.Errorf(\"ReadFrom() should have been called\")\n\t\t\t\t} else {\n\t\t\t\t\tt.Errorf(\"ReadFrom() should not have been called\")\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/addresses.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"net/url\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\ntype parsedAddr struct {\n\tnetwork, scheme, host, port string\n\tvalid                       bool\n}\n\nfunc (p parsedAddr) dialAddr() string {\n\tif !p.valid {\n\t\treturn \"\"\n\t}\n\t// for simplest possible config, we only need to include\n\t// the network portion if the user specified one\n\tif p.network != \"\" {\n\t\treturn caddy.JoinNetworkAddress(p.network, p.host, p.port)\n\t}\n\n\t// if the host is a placeholder, then we don't want to join with an empty port,\n\t// because that would just append an extra ':' at the end of the address.\n\tif p.port == \"\" && strings.Contains(p.host, \"{\") {\n\t\treturn p.host\n\t}\n\treturn net.JoinHostPort(p.host, p.port)\n}\n\nfunc (p parsedAddr) rangedPort() bool {\n\treturn strings.Contains(p.port, \"-\")\n}\n\nfunc (p parsedAddr) replaceablePort() bool {\n\treturn strings.Contains(p.port, \"{\") && strings.Contains(p.port, \"}\")\n}\n\nfunc (p parsedAddr) isUnix() bool {\n\treturn caddy.IsUnixNetwork(p.network)\n}\n\n// parseUpstreamDialAddress parses configuration inputs for\n// the dial address, including support for a scheme in front\n// as a shortcut for the port number, and a network type,\n// for example 'unix' to dial a unix socket.\nfunc 
parseUpstreamDialAddress(upstreamAddr string) (parsedAddr, error) {\n\tvar network, scheme, host, port string\n\n\tif strings.Contains(upstreamAddr, \"://\") {\n\t\t// we get a parsing error if a placeholder is specified\n\t\t// so we return a more user-friendly error message instead\n\t\t// to explain what to do instead\n\t\tif strings.Contains(upstreamAddr, \"{\") {\n\t\t\treturn parsedAddr{}, fmt.Errorf(\"due to parsing difficulties, placeholders are not allowed when an upstream address contains a scheme\")\n\t\t}\n\n\t\ttoURL, err := url.Parse(upstreamAddr)\n\t\tif err != nil {\n\t\t\t// if the error seems to be due to a port range,\n\t\t\t// try to replace the port range with a dummy\n\t\t\t// single port so that url.Parse() will succeed\n\t\t\tif strings.Contains(err.Error(), \"invalid port\") && strings.Contains(err.Error(), \"-\") {\n\t\t\t\tindex := strings.LastIndex(upstreamAddr, \":\")\n\t\t\t\tif index == -1 {\n\t\t\t\t\treturn parsedAddr{}, fmt.Errorf(\"parsing upstream URL: %v\", err)\n\t\t\t\t}\n\t\t\t\tportRange := upstreamAddr[index+1:]\n\t\t\t\tif strings.Count(portRange, \"-\") != 1 {\n\t\t\t\t\treturn parsedAddr{}, fmt.Errorf(\"parsing upstream URL: parse \\\"%v\\\": port range invalid: %v\", upstreamAddr, portRange)\n\t\t\t\t}\n\t\t\t\ttoURL, err = url.Parse(strings.ReplaceAll(upstreamAddr, portRange, \"0\"))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn parsedAddr{}, fmt.Errorf(\"parsing upstream URL: %v\", err)\n\t\t\t\t}\n\t\t\t\tport = portRange\n\t\t\t} else {\n\t\t\t\treturn parsedAddr{}, fmt.Errorf(\"parsing upstream URL: %v\", err)\n\t\t\t}\n\t\t}\n\t\tif port == \"\" {\n\t\t\tport = toURL.Port()\n\t\t}\n\n\t\t// there is currently no way to perform a URL rewrite between choosing\n\t\t// a backend and proxying to it, so we cannot allow extra components\n\t\t// in backend URLs\n\t\tif toURL.Path != \"\" || toURL.RawQuery != \"\" || toURL.Fragment != \"\" {\n\t\t\treturn parsedAddr{}, fmt.Errorf(\"for now, URLs for proxy upstreams only 
support scheme, host, and port components\")\n\t\t}\n\n\t\t// ensure the port and scheme aren't in conflict\n\t\tif toURL.Scheme == \"http\" && port == \"443\" {\n\t\t\treturn parsedAddr{}, fmt.Errorf(\"upstream address has conflicting scheme (http://) and port (:443, the HTTPS port)\")\n\t\t}\n\t\tif toURL.Scheme == \"https\" && port == \"80\" {\n\t\t\treturn parsedAddr{}, fmt.Errorf(\"upstream address has conflicting scheme (https://) and port (:80, the HTTP port)\")\n\t\t}\n\t\tif toURL.Scheme == \"h2c\" && port == \"443\" {\n\t\t\treturn parsedAddr{}, fmt.Errorf(\"upstream address has conflicting scheme (h2c://) and port (:443, the HTTPS port)\")\n\t\t}\n\n\t\t// if port is missing, attempt to infer from scheme\n\t\tif port == \"\" {\n\t\t\tswitch toURL.Scheme {\n\t\t\tcase \"\", \"http\", \"h2c\":\n\t\t\t\tport = \"80\"\n\t\t\tcase \"https\":\n\t\t\t\tport = \"443\"\n\t\t\t}\n\t\t}\n\n\t\tscheme, host = toURL.Scheme, toURL.Hostname()\n\t} else {\n\t\tvar err error\n\t\tnetwork, host, port, err = caddy.SplitNetworkAddress(upstreamAddr)\n\t\tif err != nil {\n\t\t\thost = upstreamAddr\n\t\t}\n\t\t// we can assume a port if only a hostname is specified, but use of a\n\t\t// placeholder without a port likely means a port will be filled in\n\t\tif port == \"\" && !strings.Contains(host, \"{\") && !caddy.IsUnixNetwork(network) && !caddy.IsFdNetwork(network) {\n\t\t\tport = \"80\"\n\t\t}\n\t}\n\n\t// special case network to support both unix and h2c at the same time\n\tif network == \"unix+h2c\" {\n\t\tnetwork = \"unix\"\n\t\tscheme = \"h2c\"\n\t}\n\treturn parsedAddr{network, scheme, host, port, true}, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/addresses_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//\thttp://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport \"testing\"\n\nfunc TestParseUpstreamDialAddress(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput          string\n\t\texpectHostPort string\n\t\texpectScheme   string\n\t\texpectErr      bool\n\t}{\n\t\t{\n\t\t\tinput:          \"foo\",\n\t\t\texpectHostPort: \"foo:80\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"foo:1234\",\n\t\t\texpectHostPort: \"foo:1234\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"127.0.0.1\",\n\t\t\texpectHostPort: \"127.0.0.1:80\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"127.0.0.1:1234\",\n\t\t\texpectHostPort: \"127.0.0.1:1234\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"[::1]\",\n\t\t\texpectHostPort: \"[::1]:80\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"[::1]:1234\",\n\t\t\texpectHostPort: \"[::1]:1234\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"{foo}\",\n\t\t\texpectHostPort: \"{foo}\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"{foo}:80\",\n\t\t\texpectHostPort: \"{foo}:80\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"{foo}:{bar}\",\n\t\t\texpectHostPort: \"{foo}:{bar}\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"http://foo\",\n\t\t\texpectHostPort: \"foo:80\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"http://foo:1234\",\n\t\t\texpectHostPort: \"foo:1234\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          
\"http://127.0.0.1\",\n\t\t\texpectHostPort: \"127.0.0.1:80\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"http://127.0.0.1:1234\",\n\t\t\texpectHostPort: \"127.0.0.1:1234\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"http://[::1]\",\n\t\t\texpectHostPort: \"[::1]:80\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"http://[::1]:80\",\n\t\t\texpectHostPort: \"[::1]:80\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://foo\",\n\t\t\texpectHostPort: \"foo:443\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://foo:1234\",\n\t\t\texpectHostPort: \"foo:1234\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://127.0.0.1\",\n\t\t\texpectHostPort: \"127.0.0.1:443\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://127.0.0.1:1234\",\n\t\t\texpectHostPort: \"127.0.0.1:1234\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://[::1]\",\n\t\t\texpectHostPort: \"[::1]:443\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://[::1]:1234\",\n\t\t\texpectHostPort: \"[::1]:1234\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"h2c://foo\",\n\t\t\texpectHostPort: \"foo:80\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"h2c://foo:1234\",\n\t\t\texpectHostPort: \"foo:1234\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"h2c://127.0.0.1\",\n\t\t\texpectHostPort: \"127.0.0.1:80\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"h2c://127.0.0.1:1234\",\n\t\t\texpectHostPort: \"127.0.0.1:1234\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"h2c://[::1]\",\n\t\t\texpectHostPort: \"[::1]:80\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          
\"h2c://[::1]:1234\",\n\t\t\texpectHostPort: \"[::1]:1234\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"localhost:1001-1009\",\n\t\t\texpectHostPort: \"localhost:1001-1009\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"{host}:1001-1009\",\n\t\t\texpectHostPort: \"{host}:1001-1009\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"http://localhost:1001-1009\",\n\t\t\texpectHostPort: \"localhost:1001-1009\",\n\t\t\texpectScheme:   \"http\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"https://localhost:1001-1009\",\n\t\t\texpectHostPort: \"localhost:1001-1009\",\n\t\t\texpectScheme:   \"https\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix//var/php.sock\",\n\t\t\texpectHostPort: \"unix//var/php.sock\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix+h2c//var/grpc.sock\",\n\t\t\texpectHostPort: \"unix//var/grpc.sock\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix/{foo}\",\n\t\t\texpectHostPort: \"unix/{foo}\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix+h2c/{foo}\",\n\t\t\texpectHostPort: \"unix/{foo}\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix//foo/{foo}/bar\",\n\t\t\texpectHostPort: \"unix//foo/{foo}/bar\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix+h2c//foo/{foo}/bar\",\n\t\t\texpectHostPort: \"unix//foo/{foo}/bar\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://{foo}\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http:// :80\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost/path\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost?key=value\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost#fragment\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost:8001-8002-8003\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost:8001-8002/foo:bar\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     
\"http://localhost:8001-8002/foo:1\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost:8001-8002/foo:1-2\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://localhost:8001-8002#foo:1\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"http://foo:443\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"https://foo:80\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:     \"h2c://foo:443\",\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput:          `unix/c:\\absolute\\path`,\n\t\t\texpectHostPort: `unix/c:\\absolute\\path`,\n\t\t},\n\t\t{\n\t\t\tinput:          `unix+h2c/c:\\absolute\\path`,\n\t\t\texpectHostPort: `unix/c:\\absolute\\path`,\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix/c:/absolute/path\",\n\t\t\texpectHostPort: \"unix/c:/absolute/path\",\n\t\t},\n\t\t{\n\t\t\tinput:          \"unix+h2c/c:/absolute/path\",\n\t\t\texpectHostPort: \"unix/c:/absolute/path\",\n\t\t\texpectScheme:   \"h2c\",\n\t\t},\n\t} {\n\t\tactualAddr, err := parseUpstreamDialAddress(tc.input)\n\t\tif tc.expectErr && err == nil {\n\t\t\tt.Errorf(\"Test %d: Expected an error, but got none\", i)\n\t\t}\n\t\tif !tc.expectErr && err != nil {\n\t\t\tt.Errorf(\"Test %d: Expected no error but got %v\", i, err)\n\t\t}\n\t\tif actualAddr.dialAddr() != tc.expectHostPort {\n\t\t\tt.Errorf(\"Test %d: input %s: Expected host and port '%s' but got '%s'\", i, tc.input, tc.expectHostPort, actualAddr.dialAddr())\n\t\t}\n\t\tif actualAddr.scheme != tc.expectScheme {\n\t\t\tt.Errorf(\"Test %d: input %s: Expected scheme '%s' but got '%s'\", i, tc.input, tc.expectScheme, actualAddr.scheme)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/admin.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(adminUpstreams{})\n}\n\n// adminUpstreams is a module that provides the\n// /reverse_proxy/upstreams endpoint for the Caddy admin\n// API. This allows for checking the health of configured\n// reverse proxy upstreams in the pool.\ntype adminUpstreams struct{}\n\n// upstreamStatus holds the status of a particular upstream\ntype upstreamStatus struct {\n\tAddress     string `json:\"address\"`\n\tNumRequests int    `json:\"num_requests\"`\n\tFails       int    `json:\"fails\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (adminUpstreams) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"admin.api.reverse_proxy\",\n\t\tNew: func() caddy.Module { return new(adminUpstreams) },\n\t}\n}\n\n// Routes returns a route for the /reverse_proxy/upstreams endpoint.\nfunc (al adminUpstreams) Routes() []caddy.AdminRoute {\n\treturn []caddy.AdminRoute{\n\t\t{\n\t\t\tPattern: \"/reverse_proxy/upstreams\",\n\t\t\tHandler: caddy.AdminHandlerFunc(al.handleUpstreams),\n\t\t},\n\t}\n}\n\n// handleUpstreams reports the status of the reverse proxy\n// upstream pool.\nfunc (adminUpstreams) handleUpstreams(w http.ResponseWriter, r *http.Request) error 
{\n\tif r.Method != http.MethodGet {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method not allowed\"),\n\t\t}\n\t}\n\n\t// Prep for a JSON response\n\tw.Header().Set(\"Content-Type\", \"application/json\")\n\tenc := json.NewEncoder(w)\n\n\t// Collect the results to respond with\n\tresults := []upstreamStatus{}\n\tknownHosts := make(map[string]struct{})\n\n\t// Iterate over the static upstream pool (needs to be fast)\n\tvar rangeErr error\n\thosts.Range(func(key, val any) bool {\n\t\taddress, ok := key.(string)\n\t\tif !ok {\n\t\t\trangeErr = caddy.APIError{\n\t\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\t\tErr:        fmt.Errorf(\"could not type assert upstream address\"),\n\t\t\t}\n\t\t\treturn false\n\t\t}\n\n\t\tupstream, ok := val.(*Host)\n\t\tif !ok {\n\t\t\trangeErr = caddy.APIError{\n\t\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\t\tErr:        fmt.Errorf(\"could not type assert upstream struct\"),\n\t\t\t}\n\t\t\treturn false\n\t\t}\n\n\t\tknownHosts[address] = struct{}{}\n\n\t\tresults = append(results, upstreamStatus{\n\t\t\tAddress:     address,\n\t\t\tNumRequests: upstream.NumRequests(),\n\t\t\tFails:       upstream.Fails(),\n\t\t})\n\t\treturn true\n\t})\n\n\tcurrentInFlight := getInFlightRequests()\n\tfor address, count := range currentInFlight {\n\t\tif _, exists := knownHosts[address]; !exists && count > 0 {\n\t\t\tresults = append(results, upstreamStatus{\n\t\t\t\tAddress:     address,\n\t\t\t\tNumRequests: int(count),\n\t\t\t\tFails:       0,\n\t\t\t})\n\t\t}\n\t}\n\n\tif rangeErr != nil {\n\t\treturn rangeErr\n\t}\n\n\t// Also include dynamic upstreams\n\tdynamicHostsMu.RLock()\n\tfor address, entry := range dynamicHosts {\n\t\tresults = append(results, upstreamStatus{\n\t\t\tAddress:     address,\n\t\t\tNumRequests: entry.host.NumRequests(),\n\t\t\tFails:       entry.host.Fails(),\n\t\t})\n\t}\n\tdynamicHostsMu.RUnlock()\n\n\terr := enc.Encode(results)\n\tif 
err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\tErr:        err,\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/admin_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"encoding/json\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// adminHandlerFixture sets up the global host state for an admin endpoint test\n// and returns a cleanup function that must be deferred by the caller.\n//\n// staticAddrs are inserted into the UsagePool (as a static upstream would be).\n// dynamicAddrs are inserted into the dynamicHosts map (as a dynamic upstream would be).\nfunc adminHandlerFixture(t *testing.T, staticAddrs, dynamicAddrs []string) func() {\n\tt.Helper()\n\n\tfor _, addr := range staticAddrs {\n\t\tu := &Upstream{Dial: addr}\n\t\tu.fillHost()\n\t}\n\n\tdynamicHostsMu.Lock()\n\tfor _, addr := range dynamicAddrs {\n\t\tdynamicHosts[addr] = dynamicHostEntry{host: new(Host), lastSeen: time.Now()}\n\t}\n\tdynamicHostsMu.Unlock()\n\n\treturn func() {\n\t\t// Remove static entries from the UsagePool.\n\t\tfor _, addr := range staticAddrs {\n\t\t\t_, _ = hosts.Delete(addr)\n\t\t}\n\t\t// Remove dynamic entries.\n\t\tdynamicHostsMu.Lock()\n\t\tfor _, addr := range dynamicAddrs {\n\t\t\tdelete(dynamicHosts, addr)\n\t\t}\n\t\tdynamicHostsMu.Unlock()\n\t}\n}\n\n// callAdminUpstreams fires a GET against handleUpstreams and returns the\n// decoded response body.\nfunc callAdminUpstreams(t *testing.T) []upstreamStatus {\n\tt.Helper()\n\treq := httptest.NewRequest(http.MethodGet, \"/reverse_proxy/upstreams\", nil)\n\tw := httptest.NewRecorder()\n\n\thandler := adminUpstreams{}\n\tif err := handler.handleUpstreams(w, req); err != nil {\n\t\tt.Fatalf(\"handleUpstreams returned unexpected error: %v\", err)\n\t}\n\tif w.Code != http.StatusOK {\n\t\tt.Fatalf(\"expected 200, got %d\", w.Code)\n\t}\n\tif ct := w.Header().Get(\"Content-Type\"); ct != \"application/json\" {\n\t\tt.Fatalf(\"expected Content-Type application/json, got %q\", ct)\n\t}\n\n\tvar results []upstreamStatus\n\tif err := json.NewDecoder(w.Body).Decode(&results); err != nil {\n\t\tt.Fatalf(\"failed to decode response: %v\", err)\n\t}\n\treturn results\n}\n\n// resultsByAddress indexes a slice of upstreamStatus by address for easier\n// lookup in assertions.\nfunc resultsByAddress(statuses []upstreamStatus) map[string]upstreamStatus {\n\tm := make(map[string]upstreamStatus, len(statuses))\n\tfor _, s := range statuses {\n\t\tm[s.Address] = s\n\t}\n\treturn m\n}\n\n// TestAdminUpstreamsMethodNotAllowed verifies that non-GET methods are rejected.\nfunc TestAdminUpstreamsMethodNotAllowed(t *testing.T) {\n\tfor _, method := range []string{http.MethodPost, http.MethodPut, http.MethodDelete} {\n\t\treq := httptest.NewRequest(method, \"/reverse_proxy/upstreams\", nil)\n\t\tw := httptest.NewRecorder()\n\t\terr := (adminUpstreams{}).handleUpstreams(w, req)\n\t\tif err == nil {\n\t\t\tt.Errorf(\"method %s: expected an error, got nil\", method)\n\t\t\tcontinue\n\t\t}\n\t\t// handleUpstreams returns a caddy.APIError by value, so assert the\n\t\t// concrete type to verify the status code it carries.\n\t\tapiErr, ok := err.(caddy.APIError)\n\t\tif !ok {\n\t\t\tt.Errorf(\"method %s: expected a caddy.APIError, got %T\", method, err)\n\t\t\tcontinue\n\t\t}\n\t\tif apiErr.HTTPStatus != http.StatusMethodNotAllowed {\n\t\t\tt.Errorf(\"method %s: expected 405, got %d\", method, apiErr.HTTPStatus)\n\t\t}\n\t}\n}\n\n// TestAdminUpstreamsEmpty verifies that an empty response is valid JSON when\n// no 
upstreams are registered.\nfunc TestAdminUpstreamsEmpty(t *testing.T) {\n\tresetDynamicHosts()\n\n\tresults := callAdminUpstreams(t)\n\tif results == nil {\n\t\tt.Error(\"expected non-nil (empty) slice, got nil\")\n\t}\n\tif len(results) != 0 {\n\t\tt.Errorf(\"expected 0 results with empty pools, got %d\", len(results))\n\t}\n}\n\n// TestAdminUpstreamsStaticOnly verifies that static upstreams (from the\n// UsagePool) appear in the response with correct addresses.\nfunc TestAdminUpstreamsStaticOnly(t *testing.T) {\n\tresetDynamicHosts()\n\tcleanup := adminHandlerFixture(t,\n\t\t[]string{\"10.0.0.1:80\", \"10.0.0.2:80\"},\n\t\tnil,\n\t)\n\tdefer cleanup()\n\n\tresults := callAdminUpstreams(t)\n\tbyAddr := resultsByAddress(results)\n\n\tfor _, addr := range []string{\"10.0.0.1:80\", \"10.0.0.2:80\"} {\n\t\tif _, ok := byAddr[addr]; !ok {\n\t\t\tt.Errorf(\"expected static upstream %q in response\", addr)\n\t\t}\n\t}\n\tif len(results) != 2 {\n\t\tt.Errorf(\"expected exactly 2 results, got %d\", len(results))\n\t}\n}\n\n// TestAdminUpstreamsDynamicOnly verifies that dynamic upstreams (from\n// dynamicHosts) appear in the response with correct addresses.\nfunc TestAdminUpstreamsDynamicOnly(t *testing.T) {\n\tresetDynamicHosts()\n\tcleanup := adminHandlerFixture(t,\n\t\tnil,\n\t\t[]string{\"10.0.1.1:80\", \"10.0.1.2:80\"},\n\t)\n\tdefer cleanup()\n\n\tresults := callAdminUpstreams(t)\n\tbyAddr := resultsByAddress(results)\n\n\tfor _, addr := range []string{\"10.0.1.1:80\", \"10.0.1.2:80\"} {\n\t\tif _, ok := byAddr[addr]; !ok {\n\t\t\tt.Errorf(\"expected dynamic upstream %q in response\", addr)\n\t\t}\n\t}\n\tif len(results) != 2 {\n\t\tt.Errorf(\"expected exactly 2 results, got %d\", len(results))\n\t}\n}\n\n// TestAdminUpstreamsBothPools verifies that static and dynamic upstreams are\n// both present in the same response and that there is no overlap or omission.\nfunc TestAdminUpstreamsBothPools(t *testing.T) {\n\tresetDynamicHosts()\n\tcleanup := 
adminHandlerFixture(t,\n\t\t[]string{\"10.0.2.1:80\"},\n\t\t[]string{\"10.0.2.2:80\"},\n\t)\n\tdefer cleanup()\n\n\tresults := callAdminUpstreams(t)\n\tif len(results) != 2 {\n\t\tt.Fatalf(\"expected 2 results (1 static + 1 dynamic), got %d\", len(results))\n\t}\n\n\tbyAddr := resultsByAddress(results)\n\tif _, ok := byAddr[\"10.0.2.1:80\"]; !ok {\n\t\tt.Error(\"static upstream missing from response\")\n\t}\n\tif _, ok := byAddr[\"10.0.2.2:80\"]; !ok {\n\t\tt.Error(\"dynamic upstream missing from response\")\n\t}\n}\n\n// TestAdminUpstreamsNoOverlapBetweenPools verifies that an address registered\n// only as a static upstream does not also appear as a dynamic entry, and\n// vice-versa.\nfunc TestAdminUpstreamsNoOverlapBetweenPools(t *testing.T) {\n\tresetDynamicHosts()\n\tcleanup := adminHandlerFixture(t,\n\t\t[]string{\"10.0.3.1:80\"},\n\t\t[]string{\"10.0.3.2:80\"},\n\t)\n\tdefer cleanup()\n\n\tresults := callAdminUpstreams(t)\n\tseen := make(map[string]int)\n\tfor _, r := range results {\n\t\tseen[r.Address]++\n\t}\n\tfor addr, count := range seen {\n\t\tif count > 1 {\n\t\t\tt.Errorf(\"address %q appeared %d times; expected exactly once\", addr, count)\n\t\t}\n\t}\n}\n\n// TestAdminUpstreamsReportsFailCounts verifies that fail counts accumulated on\n// a dynamic upstream's Host are reflected in the response.\nfunc TestAdminUpstreamsReportsFailCounts(t *testing.T) {\n\tresetDynamicHosts()\n\n\tconst addr = \"10.0.4.1:80\"\n\th := new(Host)\n\t_ = h.countFail(3)\n\n\tdynamicHostsMu.Lock()\n\tdynamicHosts[addr] = dynamicHostEntry{host: h, lastSeen: time.Now()}\n\tdynamicHostsMu.Unlock()\n\tdefer func() {\n\t\tdynamicHostsMu.Lock()\n\t\tdelete(dynamicHosts, addr)\n\t\tdynamicHostsMu.Unlock()\n\t}()\n\n\tresults := callAdminUpstreams(t)\n\tbyAddr := resultsByAddress(results)\n\n\tstatus, ok := byAddr[addr]\n\tif !ok {\n\t\tt.Fatalf(\"expected %q in response\", addr)\n\t}\n\tif status.Fails != 3 {\n\t\tt.Errorf(\"expected Fails=3, got %d\", 
status.Fails)\n\t}\n}\n\n// TestAdminUpstreamsReportsNumRequests verifies that the active request count\n// for a static upstream is reflected in the response.\nfunc TestAdminUpstreamsReportsNumRequests(t *testing.T) {\n\tresetDynamicHosts()\n\n\tconst addr = \"10.0.4.2:80\"\n\tu := &Upstream{Dial: addr}\n\tu.fillHost()\n\tdefer func() { _, _ = hosts.Delete(addr) }()\n\n\t_ = u.Host.countRequest(2)\n\tdefer func() { _ = u.Host.countRequest(-2) }()\n\n\tresults := callAdminUpstreams(t)\n\tbyAddr := resultsByAddress(results)\n\n\tstatus, ok := byAddr[addr]\n\tif !ok {\n\t\tt.Fatalf(\"expected %q in response\", addr)\n\t}\n\tif status.NumRequests != 2 {\n\t\tt.Errorf(\"expected NumRequests=2, got %d\", status.NumRequests)\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/ascii.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Most of the code in this file was initially borrowed from the Go\n// standard library and modified; It had this copyright notice:\n// Copyright 2021 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// Original source, copied because the package was marked internal:\n// https://github.com/golang/go/blob/5c489514bc5e61ad9b5b07bd7d8ec65d66a0512a/src/net/http/internal/ascii/print.go\n\npackage reverseproxy\n\n// asciiEqualFold is strings.EqualFold, ASCII only. It reports whether s and t\n// are equal, ASCII-case-insensitively.\nfunc asciiEqualFold(s, t string) bool {\n\tif len(s) != len(t) {\n\t\treturn false\n\t}\n\tfor i := 0; i < len(s); i++ {\n\t\tif asciiLower(s[i]) != asciiLower(t[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// asciiLower returns the ASCII lowercase version of b.\nfunc asciiLower(b byte) byte {\n\tif 'A' <= b && b <= 'Z' {\n\t\treturn b + ('a' - 'A')\n\t}\n\treturn b\n}\n\n// asciiIsPrint returns whether s is ASCII and printable according to\n// https://tools.ietf.org/html/rfc20#section-4.2.\nfunc asciiIsPrint(s string) bool {\n\tfor i := 0; i < len(s); i++ {\n\t\tif s[i] < ' ' || s[i] > '~' {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/ascii_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Most of the code in this file was initially borrowed from the Go\n// standard library and modified; It had this copyright notice:\n// Copyright 2021 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// Original source, copied because the package was marked internal:\n// https://github.com/golang/go/blob/5c489514bc5e61ad9b5b07bd7d8ec65d66a0512a/src/net/http/internal/ascii/print_test.go\n\npackage reverseproxy\n\nimport \"testing\"\n\nfunc TestEqualFold(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\ta, b string\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"empty\",\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"simple match\",\n\t\t\ta:    \"CHUNKED\",\n\t\t\tb:    \"chunked\",\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"same string\",\n\t\t\ta:    \"chunked\",\n\t\t\tb:    \"chunked\",\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Unicode Kelvin symbol\",\n\t\t\ta:    \"chunKed\", // This \"K\" is 'KELVIN SIGN' (\\u212A)\n\t\t\tb:    \"chunked\",\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := asciiEqualFold(tt.a, tt.b); got != tt.want {\n\t\t\t\tt.Errorf(\"AsciiEqualFold(%q,%q): got %v want %v\", tt.a, tt.b, got, 
tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsPrint(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\tin   string\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"empty\",\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"ASCII low\",\n\t\t\tin:   \"This is a space: ' '\",\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"ASCII high\",\n\t\t\tin:   \"This is a tilde: '~'\",\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"ASCII low non-print\",\n\t\t\tin:   \"This is a unit separator: \\x1F\",\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Ascii high non-print\",\n\t\t\tin:   \"This is a Delete: \\x7F\",\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Unicode letter\",\n\t\t\tin:   \"Today it's 280K outside: it's freezing!\", // This \"K\" is 'KELVIN SIGN' (\\u212A)\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Unicode emoji\",\n\t\t\tin:   \"Gophers like 🧀\",\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := asciiIsPrint(tt.in); got != tt.want {\n\t\t\t\tt.Errorf(\"IsASCIIPrint(%q): got %v want %v\", tt.in, got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/buffering_test.go",
    "content": "package reverseproxy\n\nimport (\n\t\"io\"\n\t\"testing\"\n)\n\ntype zeroReader struct{}\n\nfunc (zeroReader) Read(p []byte) (int, error) {\n\tfor i := range p {\n\t\tp[i] = 0\n\t}\n\treturn len(p), nil\n}\n\nfunc TestBuffering(t *testing.T) {\n\tvar (\n\t\th  Handler\n\t\tzr zeroReader\n\t)\n\ttype args struct {\n\t\tbody  io.ReadCloser\n\t\tlimit int64\n\t}\n\ttests := []struct {\n\t\tname        string\n\t\targs        args\n\t\tresultCheck func(io.ReadCloser, int64, args) bool\n\t}{\n\t\t{\n\t\t\tname: \"0 limit, body is returned as is\",\n\t\t\targs: args{\n\t\t\t\tbody:  io.NopCloser(&zr),\n\t\t\t\tlimit: 0,\n\t\t\t},\n\t\t\tresultCheck: func(res io.ReadCloser, read int64, args args) bool {\n\t\t\t\treturn res == args.body && read == args.limit && read == 0\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"negative limit, body is read completely\",\n\t\t\targs: args{\n\t\t\t\tbody:  io.NopCloser(io.LimitReader(&zr, 100)),\n\t\t\t\tlimit: -1,\n\t\t\t},\n\t\t\tresultCheck: func(res io.ReadCloser, read int64, args args) bool {\n\t\t\t\tbrc, ok := res.(bodyReadCloser)\n\t\t\t\treturn ok && brc.body == nil && brc.buf.Len() == 100 && read == 100\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"positive limit, body is read partially\",\n\t\t\targs: args{\n\t\t\t\tbody:  io.NopCloser(io.LimitReader(&zr, 100)),\n\t\t\t\tlimit: 50,\n\t\t\t},\n\t\t\tresultCheck: func(res io.ReadCloser, read int64, args args) bool {\n\t\t\t\tbrc, ok := res.(bodyReadCloser)\n\t\t\t\treturn ok && brc.body != nil && brc.buf.Len() == 50 && read == 50\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"positive limit, body is read completely\",\n\t\t\targs: args{\n\t\t\t\tbody:  io.NopCloser(io.LimitReader(&zr, 100)),\n\t\t\t\tlimit: 101,\n\t\t\t},\n\t\t\tresultCheck: func(res io.ReadCloser, read int64, args args) bool {\n\t\t\t\tbrc, ok := res.(bodyReadCloser)\n\t\t\t\treturn ok && brc.body == nil && brc.buf.Len() == 100 && read == 100\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests 
{\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tres, read := h.bufferedBody(tt.args.body, tt.args.limit)\n\t\t\tif !tt.resultCheck(res, read, tt.args) {\n\t\t\t\tt.Error(\"Handler.bufferedBody() test failed\")\n\t\t\t\treturn\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/dustin/go-humanize\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/rewrite\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n\t\"github.com/caddyserver/caddy/v2/modules/internal/network\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"reverse_proxy\", parseCaddyfile)\n\thttpcaddyfile.RegisterHandlerDirective(\"copy_response\", parseCopyResponseCaddyfile)\n\thttpcaddyfile.RegisterHandlerDirective(\"copy_response_headers\", parseCopyResponseHeadersCaddyfile)\n}\n\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\trp := new(Handler)\n\terr := rp.UnmarshalCaddyfile(h.Dispenser)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = rp.FinalizeUnmarshalCaddyfile(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn rp, nil\n}\n\n// UnmarshalCaddyfile 
sets up the handler from Caddyfile tokens. Syntax:\n//\n//\treverse_proxy [<matcher>] [<upstreams...>] {\n//\t    # backends\n//\t    to      <upstreams...>\n//\t    dynamic <name> [...]\n//\n//\t    # load balancing\n//\t    lb_policy <name> [<options...>]\n//\t    lb_retries <retries>\n//\t    lb_try_duration <duration>\n//\t    lb_try_interval <interval>\n//\t    lb_retry_match <request-matcher>\n//\n//\t    # active health checking\n//\t    health_uri          <uri>\n//\t    health_port         <port>\n//\t    health_interval     <interval>\n//\t    health_passes       <num>\n//\t    health_fails        <num>\n//\t    health_timeout      <duration>\n//\t    health_status       <status>\n//\t    health_body         <regexp>\n//\t    health_method       <value>\n//\t    health_request_body <value>\n//\t    health_follow_redirects\n//\t    health_headers {\n//\t        <field> [<values...>]\n//\t    }\n//\n//\t    # passive health checking\n//\t    fail_duration     <duration>\n//\t    max_fails         <num>\n//\t    unhealthy_status  <status>\n//\t    unhealthy_latency <duration>\n//\t    unhealthy_request_count <num>\n//\n//\t    # streaming\n//\t    flush_interval     <duration>\n//\t    request_buffers    <size>\n//\t    response_buffers   <size>\n//\t    stream_timeout     <duration>\n//\t    stream_close_delay <duration>\n//\t    verbose_logs\n//\n//\t    # request manipulation\n//\t    trusted_proxies [private_ranges] <ranges...>\n//\t    header_up   [+|-]<field> [<value|regexp> [<replacement>]]\n//\t    header_down [+|-]<field> [<value|regexp> [<replacement>]]\n//\t    method <method>\n//\t    rewrite <to>\n//\n//\t    # round trip\n//\t    transport <name> {\n//\t        ...\n//\t    }\n//\n//\t    # optionally intercept responses from upstream\n//\t    @name {\n//\t        status <code...>\n//\t        header <field> [<value>]\n//\t    }\n//\t    replace_status [<matcher>] <status_code>\n//\t    handle_response [<matcher>] {\n//\t        
<directives...>\n//\n//\t        # special directives only available in handle_response\n//\t        copy_response [<matcher>] [<status>] {\n//\t            status <status>\n//\t        }\n//\t        copy_response_headers [<matcher>] {\n//\t            include <fields...>\n//\t            exclude <fields...>\n//\t        }\n//\t    }\n//\t}\n//\n// Proxy upstream addresses should be network dial addresses such\n// as `host:port`, or a URL such as `scheme://host:port`. Scheme\n// and port may be inferred from other parts of the address/URL; if\n// either is missing, HTTP is assumed.\n//\n// The FinalizeUnmarshalCaddyfile method should be called after this\n// to finalize parsing of \"handle_response\" blocks, if possible.\nfunc (h *Handler) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// currently, all backends must use the same scheme/protocol (the\n\t// underlying JSON does not yet support per-backend transports)\n\tvar commonScheme string\n\n\t// we'll wait until the very end of parsing before\n\t// validating and encoding the transport\n\tvar transport http.RoundTripper\n\tvar transportModuleName string\n\n\t// collect the response matchers defined as subdirectives\n\t// prefixed with \"@\" for use with \"handle_response\" blocks\n\th.responseMatchers = make(map[string]caddyhttp.ResponseMatcher)\n\n\t// appendUpstream creates an upstream for address and adds\n\t// it to the list.\n\tappendUpstream := func(address string) error {\n\t\tpa, err := parseUpstreamDialAddress(address)\n\t\tif err != nil {\n\t\t\treturn d.WrapErr(err)\n\t\t}\n\n\t\t// the underlying JSON does not yet support different\n\t\t// transports (protocols or schemes) to each backend,\n\t\t// so we remember the last one we see and compare them\n\n\t\tswitch pa.scheme {\n\t\tcase \"wss\":\n\t\t\treturn d.Errf(\"the scheme wss:// is only supported in browsers; use https:// instead\")\n\t\tcase \"ws\":\n\t\t\treturn d.Errf(\"the scheme ws:// is only supported in browsers; use http:// 
instead\")\n\t\tcase \"https\", \"http\", \"h2c\", \"\":\n\t\t\t// these are the supported schemes; no special handling needed\n\t\tdefault:\n\t\t\treturn d.Errf(\"unsupported URL scheme %s://\", pa.scheme)\n\t\t}\n\n\t\tif commonScheme != \"\" && pa.scheme != commonScheme {\n\t\t\treturn d.Errf(\"for now, all proxy upstreams must use the same scheme (transport protocol); expecting '%s://' but got '%s://'\",\n\t\t\t\tcommonScheme, pa.scheme)\n\t\t}\n\t\tcommonScheme = pa.scheme\n\n\t\t// if the port of the upstream address contains a placeholder, only wrap it with the `Upstream` struct,\n\t\t// delaying actual resolution of the address until request time.\n\t\tif pa.replaceablePort() {\n\t\t\th.Upstreams = append(h.Upstreams, &Upstream{Dial: pa.dialAddr()})\n\t\t\treturn nil\n\t\t}\n\t\tparsedAddr, err := caddy.ParseNetworkAddress(pa.dialAddr())\n\t\tif err != nil {\n\t\t\treturn d.WrapErr(err)\n\t\t}\n\n\t\tif pa.isUnix() || !pa.rangedPort() {\n\t\t\t// unix networks don't have ports\n\t\t\th.Upstreams = append(h.Upstreams, &Upstream{\n\t\t\t\tDial: pa.dialAddr(),\n\t\t\t})\n\t\t} else {\n\t\t\t// expand a port range into multiple upstreams\n\t\t\tfor i := parsedAddr.StartPort; i <= parsedAddr.EndPort; i++ {\n\t\t\t\th.Upstreams = append(h.Upstreams, &Upstream{\n\t\t\t\t\tDial: caddy.JoinNetworkAddress(\"\", parsedAddr.Host, fmt.Sprint(i)),\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\n\t\treturn nil\n\t}\n\n\td.Next() // consume the directive name\n\tfor _, up := range d.RemainingArgs() {\n\t\terr := appendUpstream(up)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"parsing upstream '%s': %w\", up, err)\n\t\t}\n\t}\n\n\tfor d.NextBlock(0) {\n\t\t// if the subdirective has an \"@\" prefix then we\n\t\t// parse it as a response matcher for use with \"handle_response\"\n\t\tif strings.HasPrefix(d.Val(), matcherPrefix) {\n\t\t\terr := caddyhttp.ParseNamedResponseMatcher(d.NewFromNextSegment(), h.responseMatchers)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tswitch 
d.Val() {\n\t\tcase \"to\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfor _, up := range args {\n\t\t\t\terr := appendUpstream(up)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"parsing upstream '%s': %w\", up, err)\n\t\t\t\t}\n\t\t\t}\n\n\t\tcase \"dynamic\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.DynamicUpstreams != nil {\n\t\t\t\treturn d.Err(\"dynamic upstreams already specified\")\n\t\t\t}\n\t\t\tdynModule := d.Val()\n\t\t\tmodID := \"http.reverse_proxy.upstreams.\" + dynModule\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tsource, ok := unm.(UpstreamSource)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s (%T) is not an UpstreamSource\", modID, unm)\n\t\t\t}\n\t\t\th.DynamicUpstreamsRaw = caddyconfig.JSONModuleObject(source, \"source\", dynModule, nil)\n\n\t\tcase \"lb_policy\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.LoadBalancing != nil && h.LoadBalancing.SelectionPolicyRaw != nil {\n\t\t\t\treturn d.Err(\"load balancing selection policy already specified\")\n\t\t\t}\n\t\t\tname := d.Val()\n\t\t\tmodID := \"http.reverse_proxy.selection_policies.\" + name\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tsel, ok := unm.(Selector)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s (%T) is not a reverseproxy.Selector\", modID, unm)\n\t\t\t}\n\t\t\tif h.LoadBalancing == nil {\n\t\t\t\th.LoadBalancing = new(LoadBalancing)\n\t\t\t}\n\t\t\th.LoadBalancing.SelectionPolicyRaw = caddyconfig.JSONModuleObject(sel, \"policy\", name, nil)\n\n\t\tcase \"lb_retries\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttries, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad lb_retries number '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif h.LoadBalancing == nil 
{\n\t\t\t\th.LoadBalancing = new(LoadBalancing)\n\t\t\t}\n\t\t\th.LoadBalancing.Retries = tries\n\n\t\tcase \"lb_try_duration\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.LoadBalancing == nil {\n\t\t\t\th.LoadBalancing = new(LoadBalancing)\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad duration value %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.LoadBalancing.TryDuration = caddy.Duration(dur)\n\n\t\tcase \"lb_try_interval\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.LoadBalancing == nil {\n\t\t\t\th.LoadBalancing = new(LoadBalancing)\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad interval value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.LoadBalancing.TryInterval = caddy.Duration(dur)\n\n\t\tcase \"lb_retry_match\":\n\t\t\tmatcherSet, err := caddyhttp.ParseCaddyfileNestedMatcherSet(d)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"failed to parse lb_retry_match: %v\", err)\n\t\t\t}\n\t\t\tif h.LoadBalancing == nil {\n\t\t\t\th.LoadBalancing = new(LoadBalancing)\n\t\t\t}\n\t\t\th.LoadBalancing.RetryMatchRaw = append(h.LoadBalancing.RetryMatchRaw, matcherSet)\n\n\t\tcase \"health_uri\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.URI = d.Val()\n\n\t\tcase \"health_path\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Path = d.Val()\n\t\t\tcaddy.Log().Named(\"config.adapter.caddyfile\").Warn(\"the 'health_path' 
subdirective is deprecated, please use 'health_uri' instead!\")\n\n\t\tcase \"health_upstream\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\t_, port, err := net.SplitHostPort(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"malformed 'health_upstream' value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\t_, err = strconv.Atoi(port)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad port number '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Upstream = d.Val()\n\n\t\tcase \"health_port\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active.Upstream != \"\" {\n\t\t\t\treturn d.Errf(\"cannot set 'health_port' when 'health_upstream' is specified\")\n\t\t\t}\n\t\t\tportNum, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad port number '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Port = portNum\n\n\t\tcase \"health_headers\":\n\t\t\thealthHeaders := make(http.Header)\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\tkey := d.Val()\n\t\t\t\tvalues := d.RemainingArgs()\n\t\t\t\tif len(values) == 0 {\n\t\t\t\t\tvalues = append(values, \"\")\n\t\t\t\t}\n\t\t\t\thealthHeaders[key] = append(healthHeaders[key], values...)\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Headers = healthHeaders\n\n\t\tcase \"health_method\":\n\t\t\tif !d.NextArg() 
{\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Method = d.Val()\n\n\t\tcase \"health_request_body\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Body = d.Val()\n\n\t\tcase \"health_interval\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad interval value %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Interval = caddy.Duration(dur)\n\n\t\tcase \"health_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Timeout = caddy.Duration(dur)\n\n\t\tcase \"health_status\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\tval := d.Val()\n\t\t\tif len(val) == 3 && strings.HasSuffix(val, \"xx\") {\n\t\t\t\tval = 
val[:1]\n\t\t\t}\n\t\t\tstatusNum, err := strconv.Atoi(val)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad status value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.ExpectStatus = statusNum\n\n\t\tcase \"health_body\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.ExpectBody = d.Val()\n\n\t\tcase \"health_follow_redirects\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\th.HealthChecks.Active.FollowRedirects = true\n\n\t\tcase \"health_passes\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\tpasses, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid passes count '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Passes = passes\n\n\t\tcase \"health_fails\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Active == nil {\n\t\t\t\th.HealthChecks.Active = new(ActiveHealthChecks)\n\t\t\t}\n\t\t\tfails, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid fails count '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Active.Fails = fails\n\n\t\tcase \"max_fails\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = 
new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Passive == nil {\n\t\t\t\th.HealthChecks.Passive = new(PassiveHealthChecks)\n\t\t\t}\n\t\t\tmaxFails, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid maximum fail count '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Passive.MaxFails = maxFails\n\n\t\tcase \"fail_duration\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Passive == nil {\n\t\t\t\th.HealthChecks.Passive = new(PassiveHealthChecks)\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad duration value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Passive.FailDuration = caddy.Duration(dur)\n\n\t\tcase \"unhealthy_request_count\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Passive == nil {\n\t\t\t\th.HealthChecks.Passive = new(PassiveHealthChecks)\n\t\t\t}\n\t\t\treqCount, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid unhealthy request count '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Passive.UnhealthyRequestCount = reqCount\n\n\t\tcase \"unhealthy_status\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Passive == nil {\n\t\t\t\th.HealthChecks.Passive = new(PassiveHealthChecks)\n\t\t\t}\n\t\t\tfor _, arg := range args {\n\t\t\t\tif len(arg) == 3 && strings.HasSuffix(arg, \"xx\") {\n\t\t\t\t\targ = arg[:1]\n\t\t\t\t}\n\t\t\t\tstatusNum, err := strconv.Atoi(arg)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"bad status value '%s': %v\", d.Val(), 
err)\n\t\t\t\t}\n\t\t\t\th.HealthChecks.Passive.UnhealthyStatus = append(h.HealthChecks.Passive.UnhealthyStatus, statusNum)\n\t\t\t}\n\n\t\tcase \"unhealthy_latency\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.HealthChecks == nil {\n\t\t\t\th.HealthChecks = new(HealthChecks)\n\t\t\t}\n\t\t\tif h.HealthChecks.Passive == nil {\n\t\t\t\th.HealthChecks.Passive = new(PassiveHealthChecks)\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad duration value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.HealthChecks.Passive.UnhealthyLatency = caddy.Duration(dur)\n\n\t\tcase \"flush_interval\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif fi, err := strconv.Atoi(d.Val()); err == nil {\n\t\t\t\th.FlushInterval = caddy.Duration(fi)\n\t\t\t} else {\n\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"bad duration value '%s': %v\", d.Val(), err)\n\t\t\t\t}\n\t\t\t\th.FlushInterval = caddy.Duration(dur)\n\t\t\t}\n\n\t\tcase \"request_buffers\", \"response_buffers\":\n\t\t\tsubdir := d.Val()\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tval := d.Val()\n\t\t\tvar size int64\n\t\t\tif val == \"unlimited\" {\n\t\t\t\tsize = -1\n\t\t\t} else {\n\t\t\t\tusize, err := humanize.ParseBytes(val)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"invalid byte size '%s': %v\", val, err)\n\t\t\t\t}\n\t\t\t\tsize = int64(usize)\n\t\t\t}\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tswitch subdir {\n\t\t\tcase \"request_buffers\":\n\t\t\t\th.RequestBuffers = size\n\t\t\tcase \"response_buffers\":\n\t\t\t\th.ResponseBuffers = size\n\t\t\t}\n\n\t\tcase \"stream_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif fi, err := strconv.Atoi(d.Val()); err == nil {\n\t\t\t\th.StreamTimeout = caddy.Duration(fi)\n\t\t\t} else {\n\t\t\t\tdur, err := 
caddy.ParseDuration(d.Val())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"bad duration value '%s': %v\", d.Val(), err)\n\t\t\t\t}\n\t\t\t\th.StreamTimeout = caddy.Duration(dur)\n\t\t\t}\n\n\t\tcase \"stream_close_delay\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif fi, err := strconv.Atoi(d.Val()); err == nil {\n\t\t\t\th.StreamCloseDelay = caddy.Duration(fi)\n\t\t\t} else {\n\t\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"bad duration value '%s': %v\", d.Val(), err)\n\t\t\t\t}\n\t\t\t\th.StreamCloseDelay = caddy.Duration(dur)\n\t\t\t}\n\n\t\tcase \"trusted_proxies\":\n\t\t\tfor d.NextArg() {\n\t\t\t\tif d.Val() == \"private_ranges\" {\n\t\t\t\t\th.TrustedProxies = append(h.TrustedProxies, internal.PrivateRangesCIDR()...)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\th.TrustedProxies = append(h.TrustedProxies, d.Val())\n\t\t\t}\n\n\t\tcase \"header_up\":\n\t\t\tvar err error\n\n\t\t\tif h.Headers == nil {\n\t\t\t\th.Headers = new(headers.Handler)\n\t\t\t}\n\t\t\tif h.Headers.Request == nil {\n\t\t\t\th.Headers.Request = new(headers.HeaderOps)\n\t\t\t}\n\t\t\targs := d.RemainingArgs()\n\n\t\t\tswitch len(args) {\n\t\t\tcase 1:\n\t\t\t\terr = headers.CaddyfileHeaderOp(h.Headers.Request, args[0], \"\", nil)\n\t\t\tcase 2:\n\t\t\t\t// warn when a header_up merely duplicates the proxy's default behavior\n\t\t\t\tif strings.EqualFold(args[0], \"host\") && (args[1] == \"{hostport}\" || args[1] == \"{http.request.hostport}\") {\n\t\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\"Unnecessary header_up Host: the reverse proxy's default behavior is to pass headers to the upstream\")\n\t\t\t\t}\n\t\t\t\tif strings.EqualFold(args[0], \"x-forwarded-for\") && (args[1] == \"{remote}\" || args[1] == \"{http.request.remote}\" || args[1] == \"{remote_host}\" || args[1] == \"{http.request.remote.host}\") {\n\t\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\"Unnecessary header_up X-Forwarded-For: the reverse proxy's default behavior is to pass 
headers to the upstream\")\n\t\t\t\t}\n\t\t\t\tif strings.EqualFold(args[0], \"x-forwarded-proto\") && (args[1] == \"{scheme}\" || args[1] == \"{http.request.scheme}\") {\n\t\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\"Unnecessary header_up X-Forwarded-Proto: the reverse proxy's default behavior is to pass headers to the upstream\")\n\t\t\t\t}\n\t\t\t\tif strings.EqualFold(args[0], \"x-forwarded-host\") && (args[1] == \"{host}\" || args[1] == \"{http.request.host}\" || args[1] == \"{hostport}\" || args[1] == \"{http.request.hostport}\") {\n\t\t\t\t\tcaddy.Log().Named(\"caddyfile\").Warn(\"Unnecessary header_up X-Forwarded-Host: the reverse proxy's default behavior is to pass headers to the upstream\")\n\t\t\t\t}\n\t\t\t\terr = headers.CaddyfileHeaderOp(h.Headers.Request, args[0], args[1], nil)\n\t\t\tcase 3:\n\t\t\t\terr = headers.CaddyfileHeaderOp(h.Headers.Request, args[0], args[1], &args[2])\n\t\t\tdefault:\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tif err != nil {\n\t\t\t\treturn d.Err(err.Error())\n\t\t\t}\n\n\t\tcase \"header_down\":\n\t\t\tvar err error\n\n\t\t\tif h.Headers == nil {\n\t\t\t\th.Headers = new(headers.Handler)\n\t\t\t}\n\t\t\tif h.Headers.Response == nil {\n\t\t\t\th.Headers.Response = &headers.RespHeaderOps{\n\t\t\t\t\tHeaderOps: new(headers.HeaderOps),\n\t\t\t\t}\n\t\t\t}\n\t\t\targs := d.RemainingArgs()\n\n\t\t\tswitch len(args) {\n\t\t\tcase 1:\n\t\t\t\terr = headers.CaddyfileHeaderOp(h.Headers.Response.HeaderOps, args[0], \"\", nil)\n\t\t\tcase 2:\n\t\t\t\terr = headers.CaddyfileHeaderOp(h.Headers.Response.HeaderOps, args[0], args[1], nil)\n\t\t\tcase 3:\n\t\t\t\terr = headers.CaddyfileHeaderOp(h.Headers.Response.HeaderOps, args[0], args[1], &args[2])\n\t\t\tdefault:\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tif err != nil {\n\t\t\t\treturn d.Err(err.Error())\n\t\t\t}\n\n\t\tcase \"method\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.Rewrite == nil {\n\t\t\t\th.Rewrite = 
&rewrite.Rewrite{}\n\t\t\t}\n\t\t\th.Rewrite.Method = d.Val()\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"rewrite\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.Rewrite == nil {\n\t\t\t\th.Rewrite = &rewrite.Rewrite{}\n\t\t\t}\n\t\t\th.Rewrite.URI = d.Val()\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"transport\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.TransportRaw != nil {\n\t\t\t\treturn d.Err(\"transport already specified\")\n\t\t\t}\n\t\t\ttransportModuleName = d.Val()\n\t\t\tmodID := \"http.reverse_proxy.transport.\" + transportModuleName\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\trt, ok := unm.(http.RoundTripper)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s (%T) is not a RoundTripper\", modID, unm)\n\t\t\t}\n\t\t\ttransport = rt\n\n\t\tcase \"handle_response\":\n\t\t\t// delegate the parsing of handle_response to the caller,\n\t\t\t// since we need the httpcaddyfile.Helper to parse subroutes.\n\t\t\t// See h.FinalizeUnmarshalCaddyfile\n\t\t\th.handleResponseSegments = append(h.handleResponseSegments, d.NewFromNextSegment())\n\n\t\tcase \"replace_status\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) != 1 && len(args) != 2 {\n\t\t\t\treturn d.Errf(\"must have one or two arguments: an optional response matcher, and a status code\")\n\t\t\t}\n\n\t\t\tresponseHandler := caddyhttp.ResponseHandler{}\n\n\t\t\tif len(args) == 2 {\n\t\t\t\tif !strings.HasPrefix(args[0], matcherPrefix) {\n\t\t\t\t\treturn d.Errf(\"must use a named response matcher, starting with '@'\")\n\t\t\t\t}\n\t\t\t\tfoundMatcher, ok := h.responseMatchers[args[0]]\n\t\t\t\tif !ok {\n\t\t\t\t\treturn d.Errf(\"no named response matcher defined with name '%s'\", args[0][1:])\n\t\t\t\t}\n\t\t\t\tresponseHandler.Match = &foundMatcher\n\t\t\t\tresponseHandler.StatusCode = 
caddyhttp.WeakString(args[1])\n\t\t\t} else if len(args) == 1 {\n\t\t\t\tresponseHandler.StatusCode = caddyhttp.WeakString(args[0])\n\t\t\t}\n\n\t\t\t// make sure there's no block, since one doesn't make sense here\n\t\t\tif nesting := d.Nesting(); d.NextBlock(nesting) {\n\t\t\t\treturn d.Errf(\"cannot define routes for 'replace_status'; use 'handle_response' instead\")\n\t\t\t}\n\n\t\t\th.HandleResponse = append(\n\t\t\t\th.HandleResponse,\n\t\t\t\tresponseHandler,\n\t\t\t)\n\n\t\tcase \"verbose_logs\":\n\t\t\tif h.VerboseLogs {\n\t\t\t\treturn d.Err(\"verbose_logs already specified\")\n\t\t\t}\n\t\t\th.VerboseLogs = true\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\t}\n\n\t// if the scheme inferred from the backends' addresses is\n\t// HTTPS, we will need a non-nil transport to enable TLS,\n\t// or if H2C, to set the transport versions.\n\tif (commonScheme == \"https\" || commonScheme == \"h2c\") && transport == nil {\n\t\ttransport = new(HTTPTransport)\n\t\ttransportModuleName = \"http\"\n\t}\n\n\t// verify transport configuration, and finally encode it\n\tif transport != nil {\n\t\tif te, ok := transport.(TLSTransport); ok {\n\t\t\tif commonScheme == \"https\" && !te.TLSEnabled() {\n\t\t\t\terr := te.EnableTLS(new(TLSConfig))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\tif commonScheme == \"http\" && te.TLSEnabled() {\n\t\t\t\treturn d.Errf(\"upstream address scheme is HTTP but transport is configured for HTTP+TLS (HTTPS)\")\n\t\t\t}\n\t\t\tif h2ct, ok := transport.(H2CTransport); ok && commonScheme == \"h2c\" {\n\t\t\t\terr := h2ct.EnableH2C()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t} else if commonScheme == \"https\" {\n\t\t\treturn d.Errf(\"upstreams are configured for HTTPS but transport module does not support TLS: %T\", transport)\n\t\t}\n\n\t\t// no need to encode empty default transport\n\t\tif !reflect.DeepEqual(transport, new(HTTPTransport)) 
{\n\t\t\th.TransportRaw = caddyconfig.JSONModuleObject(transport, \"protocol\", transportModuleName, nil)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// FinalizeUnmarshalCaddyfile finalizes the Caddyfile parsing which\n// requires having an httpcaddyfile.Helper to function, to parse subroutes.\nfunc (h *Handler) FinalizeUnmarshalCaddyfile(helper httpcaddyfile.Helper) error {\n\tfor _, d := range h.handleResponseSegments {\n\t\t// consume the \"handle_response\" token\n\t\td.Next()\n\t\targs := d.RemainingArgs()\n\n\t\t// TODO: Remove this check at some point in the future\n\t\tif len(args) == 2 {\n\t\t\treturn d.Errf(\"configuring 'handle_response' for status code replacement is no longer supported. Use 'replace_status' instead.\")\n\t\t}\n\n\t\tif len(args) > 1 {\n\t\t\treturn d.Errf(\"too many arguments for 'handle_response': %s\", args)\n\t\t}\n\n\t\tvar matcher *caddyhttp.ResponseMatcher\n\t\tif len(args) == 1 {\n\t\t\t// the first arg should always be a matcher.\n\t\t\tif !strings.HasPrefix(args[0], matcherPrefix) {\n\t\t\t\treturn d.Errf(\"must use a named response matcher, starting with '@'\")\n\t\t\t}\n\n\t\t\tfoundMatcher, ok := h.responseMatchers[args[0]]\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"no named response matcher defined with name '%s'\", args[0][1:])\n\t\t\t}\n\t\t\tmatcher = &foundMatcher\n\t\t}\n\n\t\t// parse the block as routes\n\t\thandler, err := httpcaddyfile.ParseSegmentAsSubroute(helper.WithDispenser(d.NewFromNextSegment()))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsubroute, ok := handler.(*caddyhttp.Subroute)\n\t\tif !ok {\n\t\t\treturn helper.Errf(\"segment was not parsed as a subroute\")\n\t\t}\n\t\th.HandleResponse = append(\n\t\t\th.HandleResponse,\n\t\t\tcaddyhttp.ResponseHandler{\n\t\t\t\tMatch:  matcher,\n\t\t\t\tRoutes: subroute.Routes,\n\t\t\t},\n\t\t)\n\t}\n\n\t// move the handle_response entries without a matcher to the end.\n\t// we can't use sort.SliceStable because it will reorder the rest of the\n\t// entries which may be 
undesirable because we don't have a good\n\t// heuristic to use for sorting.\n\twithoutMatchers := []caddyhttp.ResponseHandler{}\n\twithMatchers := []caddyhttp.ResponseHandler{}\n\tfor _, hr := range h.HandleResponse {\n\t\tif hr.Match == nil {\n\t\t\twithoutMatchers = append(withoutMatchers, hr)\n\t\t} else {\n\t\t\twithMatchers = append(withMatchers, hr)\n\t\t}\n\t}\n\th.HandleResponse = append(withMatchers, withoutMatchers...)\n\n\t// clean up the bits we only needed for adapting\n\th.handleResponseSegments = nil\n\th.responseMatchers = nil\n\n\treturn nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into h.\n//\n//\ttransport http {\n//\t    read_buffer             <size>\n//\t    write_buffer            <size>\n//\t    max_response_header     <size>\n//\t    network_proxy           <module> {\n//\t        ...\n//\t    }\n//\t    dial_timeout            <duration>\n//\t    dial_fallback_delay     <duration>\n//\t    response_header_timeout <duration>\n//\t    expect_continue_timeout <duration>\n//\t    resolvers               <resolvers...>\n//\t    tls\n//\t    tls_client_auth <automate_name> | <cert_file> <key_file>\n//\t    tls_insecure_skip_verify\n//\t    tls_timeout <duration>\n//\t    tls_trusted_ca_certs <cert_files...>\n//\t    tls_trust_pool <module> {\n//\t        ...\n//\t    }\n//\t    tls_server_name <sni>\n//\t    tls_renegotiation <level>\n//\t    tls_except_ports <ports...>\n//\t    keepalive [off|<duration>]\n//\t    keepalive_interval <interval>\n//\t    keepalive_idle_conns <max_count>\n//\t    keepalive_idle_conns_per_host <count>\n//\t    versions <versions...>\n//\t    compression off\n//\t    max_conns_per_host <count>\n//\t    max_idle_conns_per_host <count>\n//\t}\nfunc (h *HTTPTransport) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume transport name\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"read_buffer\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tsize, 
err := humanize.ParseBytes(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid read buffer size '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.ReadBufferSize = int(size)\n\n\t\tcase \"write_buffer\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tsize, err := humanize.ParseBytes(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid write buffer size '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.WriteBufferSize = int(size)\n\n\t\tcase \"read_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttimeout, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid read timeout duration '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.ReadTimeout = caddy.Duration(timeout)\n\n\t\tcase \"write_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttimeout, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid write timeout duration '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.WriteTimeout = caddy.Duration(timeout)\n\n\t\tcase \"max_response_header\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tsize, err := humanize.ParseBytes(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid max response header size '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.MaxResponseHeaderSize = int64(size)\n\n\t\tcase \"proxy_protocol\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tswitch proxyProtocol := d.Val(); proxyProtocol {\n\t\t\tcase \"v1\", \"v2\":\n\t\t\t\th.ProxyProtocol = proxyProtocol\n\t\t\tdefault:\n\t\t\t\treturn d.Errf(\"invalid proxy protocol version '%s'\", proxyProtocol)\n\t\t\t}\n\n\t\tcase \"forward_proxy_url\":\n\t\t\tcaddy.Log().Warn(\"The 'forward_proxy_url' field is deprecated. 
Use 'network_proxy <url>' instead.\")\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tu := network.ProxyFromURL{URL: d.Val()}\n\t\t\th.NetworkProxyRaw = caddyconfig.JSONModuleObject(u, \"from\", \"url\", nil)\n\n\t\tcase \"network_proxy\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmodStem := d.Val()\n\t\t\tmodID := \"caddy.network_proxy.\" + modStem\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\th.NetworkProxyRaw = caddyconfig.JSONModuleObject(unm, \"from\", modStem, nil)\n\n\t\tcase \"dial_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.DialTimeout = caddy.Duration(dur)\n\n\t\tcase \"dial_fallback_delay\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad fallback delay value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.FallbackDelay = caddy.Duration(dur)\n\n\t\tcase \"response_header_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.ResponseHeaderTimeout = caddy.Duration(dur)\n\n\t\tcase \"expect_continue_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.ExpectContinueTimeout = caddy.Duration(dur)\n\n\t\tcase \"resolvers\":\n\t\t\tif h.Resolver == nil {\n\t\t\t\th.Resolver = new(UpstreamResolver)\n\t\t\t}\n\t\t\th.Resolver.Addresses = d.RemainingArgs()\n\t\t\tif len(h.Resolver.Addresses) == 0 {\n\t\t\t\treturn 
d.Errf(\"must specify at least one resolver address\")\n\t\t\t}\n\n\t\tcase \"tls\":\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\n\t\tcase \"tls_client_auth\":\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\targs := d.RemainingArgs()\n\t\t\tswitch len(args) {\n\t\t\tcase 1:\n\t\t\t\th.TLS.ClientCertificateAutomate = args[0]\n\t\t\tcase 2:\n\t\t\t\th.TLS.ClientCertificateFile = args[0]\n\t\t\t\th.TLS.ClientCertificateKeyFile = args[1]\n\t\t\tdefault:\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"tls_insecure_skip_verify\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\th.TLS.InsecureSkipVerify = true\n\n\t\tcase \"tls_curves\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\th.TLS.Curves = args\n\n\t\tcase \"tls_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\th.TLS.HandshakeTimeout = caddy.Duration(dur)\n\n\t\tcase \"tls_trusted_ca_certs\":\n\t\t\tcaddy.Log().Warn(\"The 'tls_trusted_ca_certs' field is deprecated. 
Use the 'tls_trust_pool' field instead.\")\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\tif len(h.TLS.CARaw) != 0 {\n\t\t\t\treturn d.Err(\"cannot specify both 'tls_trust_pool' and 'tls_trusted_ca_certs'\")\n\t\t\t}\n\t\t\th.TLS.RootCAPEMFiles = args\n\n\t\tcase \"tls_server_name\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\th.TLS.ServerName = d.Val()\n\n\t\tcase \"tls_renegotiation\":\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tswitch renegotiation := d.Val(); renegotiation {\n\t\t\tcase \"never\", \"once\", \"freely\":\n\t\t\t\th.TLS.Renegotiation = renegotiation\n\t\t\tdefault:\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"tls_except_ports\":\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\th.TLS.ExceptPorts = d.RemainingArgs()\n\t\t\tif len(h.TLS.ExceptPorts) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"keepalive\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif h.KeepAlive == nil {\n\t\t\t\th.KeepAlive = new(KeepAlive)\n\t\t\t}\n\t\t\tif d.Val() == \"off\" {\n\t\t\t\tvar disable bool\n\t\t\t\th.KeepAlive.Enabled = &disable\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad duration value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.KeepAlive.IdleConnTimeout = caddy.Duration(dur)\n\n\t\tcase \"keepalive_interval\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad interval value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif h.KeepAlive == nil {\n\t\t\t\th.KeepAlive = new(KeepAlive)\n\t\t\t}\n\t\t\th.KeepAlive.ProbeInterval 
= caddy.Duration(dur)\n\n\t\tcase \"keepalive_idle_conns\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tnum, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad integer value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif h.KeepAlive == nil {\n\t\t\t\th.KeepAlive = new(KeepAlive)\n\t\t\t}\n\t\t\th.KeepAlive.MaxIdleConns = num\n\n\t\tcase \"keepalive_idle_conns_per_host\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tnum, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad integer value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif h.KeepAlive == nil {\n\t\t\t\th.KeepAlive = new(KeepAlive)\n\t\t\t}\n\t\t\th.KeepAlive.MaxIdleConnsPerHost = num\n\n\t\tcase \"versions\":\n\t\t\th.Versions = d.RemainingArgs()\n\t\t\tif len(h.Versions) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"compression\":\n\t\t\tif d.NextArg() {\n\t\t\t\tif d.Val() == \"off\" {\n\t\t\t\t\tvar disable bool\n\t\t\t\t\th.Compression = &disable\n\t\t\t\t}\n\t\t\t}\n\n\t\tcase \"max_conns_per_host\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tnum, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad integer value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\th.MaxConnsPerHost = num\n\n\t\tcase \"tls_trust_pool\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmodStem := d.Val()\n\t\t\tmodID := \"tls.ca_pool.source.\" + modStem\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tca, ok := unm.(caddytls.CA)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s is not a caddytls.CA\", modID)\n\t\t\t}\n\t\t\tif h.TLS == nil {\n\t\t\t\th.TLS = new(TLSConfig)\n\t\t\t}\n\t\t\tif len(h.TLS.RootCAPEMFiles) != 0 {\n\t\t\t\treturn d.Err(\"cannot specify both 'tls_trust_pool' and 'tls_trusted_ca_certs'\")\n\t\t\t}\n\t\t\tif h.TLS.CARaw != nil {\n\t\t\t\treturn d.Err(\"cannot 
specify \\\"tls_trust_pool\\\" twice in caddyfile\")\n\t\t\t}\n\t\t\th.TLS.CARaw = caddyconfig.JSONModuleObject(ca, \"provider\", modStem, nil)\n\t\tcase \"local_address\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\th.LocalAddress = d.Val()\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc parseCopyResponseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tcrh := new(CopyResponseHandler)\n\terr := crh.UnmarshalCaddyfile(h.Dispenser)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn crh, nil\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. Syntax:\n//\n//\tcopy_response [<matcher>] [<status>] {\n//\t    status <status>\n//\t}\nfunc (h *CopyResponseHandler) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\n\targs := d.RemainingArgs()\n\tif len(args) == 1 {\n\t\tif num, err := strconv.Atoi(args[0]); err == nil && num > 0 {\n\t\t\th.StatusCode = caddyhttp.WeakString(args[0])\n\t\t\treturn nil\n\t\t}\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"status\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\th.StatusCode = caddyhttp.WeakString(d.Val())\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc parseCopyResponseHeadersCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tcrh := new(CopyResponseHeadersHandler)\n\terr := crh.UnmarshalCaddyfile(h.Dispenser)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn crh, nil\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\tcopy_response_headers [<matcher>] {\n//\t    include <fields...>\n//\t    exclude <fields...>\n//\t}\nfunc (h *CopyResponseHeadersHandler) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\n\targs := d.RemainingArgs()\n\tif len(args) > 0 {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"include\":\n\t\t\th.Include = append(h.Include, d.RemainingArgs()...)\n\n\t\tcase \"exclude\":\n\t\t\th.Exclude = append(h.Exclude, d.RemainingArgs()...)\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into h.\n//\n//\tdynamic srv [<name>] {\n//\t    service             <service>\n//\t    proto               <proto>\n//\t    name                <name>\n//\t    refresh             <interval>\n//\t    resolvers           <resolvers...>\n//\t    dial_timeout        <timeout>\n//\t    dial_fallback_delay <timeout>\n//\t    grace_period        <duration>\n//\t}\nfunc (u *SRVUpstreams) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume upstream source name\n\n\targs := d.RemainingArgs()\n\tif len(args) > 1 {\n\t\treturn d.ArgErr()\n\t}\n\tif len(args) > 0 {\n\t\tu.Name = args[0]\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"service\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif u.Service != \"\" {\n\t\t\t\treturn d.Errf(\"srv service has already been specified\")\n\t\t\t}\n\t\t\tu.Service = d.Val()\n\n\t\tcase \"proto\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif u.Proto != \"\" {\n\t\t\t\treturn d.Errf(\"srv proto has already been specified\")\n\t\t\t}\n\t\t\tu.Proto = d.Val()\n\n\t\tcase \"name\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif u.Name != \"\" {\n\t\t\t\treturn d.Errf(\"srv name has already been specified\")\n\t\t\t}\n\t\t\tu.Name = 
d.Val()\n\n\t\tcase \"refresh\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing refresh interval duration: %v\", err)\n\t\t\t}\n\t\t\tu.Refresh = caddy.Duration(dur)\n\n\t\tcase \"resolvers\":\n\t\t\tif u.Resolver == nil {\n\t\t\t\tu.Resolver = new(UpstreamResolver)\n\t\t\t}\n\t\t\tu.Resolver.Addresses = d.RemainingArgs()\n\t\t\tif len(u.Resolver.Addresses) == 0 {\n\t\t\t\treturn d.Errf(\"must specify at least one resolver address\")\n\t\t\t}\n\n\t\tcase \"dial_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tu.DialTimeout = caddy.Duration(dur)\n\n\t\tcase \"dial_fallback_delay\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad delay value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tu.FallbackDelay = caddy.Duration(dur)\n\n\t\tcase \"grace_period\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad grace period value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tu.GracePeriod = caddy.Duration(dur)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized srv option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into h.\n//\n//\tdynamic a [<name> <port>] {\n//\t    name                <name>\n//\t    port                <port>\n//\t    refresh             <interval>\n//\t    resolvers           <resolvers...>\n//\t    dial_timeout        <timeout>\n//\t    dial_fallback_delay <timeout>\n//\t    versions            ipv4|ipv6\n//\t}\nfunc (u *AUpstreams) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // 
consume upstream source name\n\n\targs := d.RemainingArgs()\n\tif len(args) > 2 {\n\t\treturn d.ArgErr()\n\t}\n\tif len(args) > 0 {\n\t\tu.Name = args[0]\n\t\tif len(args) == 2 {\n\t\t\tu.Port = args[1]\n\t\t}\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"name\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif u.Name != \"\" {\n\t\t\t\treturn d.Errf(\"a name has already been specified\")\n\t\t\t}\n\t\t\tu.Name = d.Val()\n\n\t\tcase \"port\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif u.Port != \"\" {\n\t\t\t\treturn d.Errf(\"a port has already been specified\")\n\t\t\t}\n\t\t\tu.Port = d.Val()\n\n\t\tcase \"refresh\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing refresh interval duration: %v\", err)\n\t\t\t}\n\t\t\tu.Refresh = caddy.Duration(dur)\n\n\t\tcase \"resolvers\":\n\t\t\tif u.Resolver == nil {\n\t\t\t\tu.Resolver = new(UpstreamResolver)\n\t\t\t}\n\t\t\tu.Resolver.Addresses = d.RemainingArgs()\n\t\t\tif len(u.Resolver.Addresses) == 0 {\n\t\t\t\treturn d.Errf(\"must specify at least one resolver address\")\n\t\t\t}\n\n\t\tcase \"dial_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tu.DialTimeout = caddy.Duration(dur)\n\n\t\tcase \"dial_fallback_delay\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad delay value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tu.FallbackDelay = caddy.Duration(dur)\n\n\t\tcase \"versions\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.Errf(\"must specify at least one version\")\n\t\t\t}\n\n\t\t\tif u.Versions == nil 
{\n\t\t\t\tu.Versions = &IPVersions{}\n\t\t\t}\n\n\t\t\ttrueBool := true\n\t\t\tfor _, arg := range args {\n\t\t\t\tswitch arg {\n\t\t\t\tcase \"ipv4\":\n\t\t\t\t\tu.Versions.IPv4 = &trueBool\n\t\t\t\tcase \"ipv6\":\n\t\t\t\t\tu.Versions.IPv6 = &trueBool\n\t\t\t\tdefault:\n\t\t\t\t\treturn d.Errf(\"unsupported version: '%s'\", arg)\n\t\t\t\t}\n\t\t\t}\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized a option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into h.\n//\n//\tdynamic multi {\n//\t    <source> [...]\n//\t}\nfunc (u *MultiUpstreams) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume upstream source name\n\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tdynModule := d.Val()\n\t\tmodID := \"http.reverse_proxy.upstreams.\" + dynModule\n\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsource, ok := unm.(UpstreamSource)\n\t\tif !ok {\n\t\t\treturn d.Errf(\"module %s (%T) is not an UpstreamSource\", modID, unm)\n\t\t}\n\t\tu.SourcesRaw = append(u.SourcesRaw, caddyconfig.JSONModuleObject(source, \"source\", dynModule, nil))\n\t}\n\treturn nil\n}\n\nconst matcherPrefix = \"@\"\n\n// Interface guards\nvar (\n\t_ caddyfile.Unmarshaler = (*Handler)(nil)\n\t_ caddyfile.Unmarshaler = (*HTTPTransport)(nil)\n\t_ caddyfile.Unmarshaler = (*SRVUpstreams)(nil)\n\t_ caddyfile.Unmarshaler = (*AUpstreams)(nil)\n\t_ caddyfile.Unmarshaler = (*MultiUpstreams)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/command.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/spf13/cobra\"\n\t\"go.uber.org/zap\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddycmd.RegisterCommand(caddycmd.Command{\n\t\tName:  \"reverse-proxy\",\n\t\tUsage: `[--from <addr>] [--to <addr>] [--change-host-header] [--insecure] [--internal-certs] [--disable-redirects] [--header-up \"Field: value\"] [--header-down \"Field: value\"] [--access-log] [--debug]`,\n\t\tShort: \"A quick and production-ready reverse proxy\",\n\t\tLong: `\nA simple but production-ready reverse proxy. 
Useful for quick deployments,\ndemos, and development.\n\nSimply shuttles HTTP(S) traffic from the --from address to the --to address.\nMultiple --to addresses may be specified by repeating the flag.\n\nUnless otherwise specified in the addresses, the --from address will be\nassumed to be HTTPS if a hostname is given, and the --to address will be\nassumed to be HTTP.\n\nIf the --from address has a host or IP, Caddy will attempt to serve the\nproxy over HTTPS with a certificate (unless overridden by the HTTP scheme\nor port).\n\nIf serving HTTPS:\n  --disable-redirects can be used to avoid binding to the HTTP port.\n  --internal-certs can be used to force issuance of certs using the internal\n    CA instead of attempting to issue a public certificate.\n\nFor proxying:\n  --header-up can be used to set a request header to send to the upstream.\n  --header-down can be used to set a response header to send back to the client.\n  --change-host-header sets the Host header on the request to the address\n    of the upstream, instead of defaulting to the incoming Host header.\n    This is a shortcut for --header-up \"Host: {http.reverse_proxy.upstream.hostport}\".\n  --insecure disables TLS verification with the upstream. 
WARNING: THIS\n    DISABLES SECURITY BY NOT VERIFYING THE UPSTREAM'S CERTIFICATE.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"from\", \"f\", \"localhost\", \"Address on which to receive traffic\")\n\t\t\tcmd.Flags().StringSliceP(\"to\", \"t\", []string{}, \"Upstream address(es) to which traffic should be sent\")\n\t\t\tcmd.Flags().BoolP(\"change-host-header\", \"c\", false, \"Set upstream Host header to address of upstream\")\n\t\t\tcmd.Flags().BoolP(\"insecure\", \"\", false, \"Disable TLS verification (WARNING: DISABLES SECURITY BY NOT VERIFYING TLS CERTIFICATES!)\")\n\t\t\tcmd.Flags().BoolP(\"disable-redirects\", \"r\", false, \"Disable HTTP->HTTPS redirects\")\n\t\t\tcmd.Flags().BoolP(\"internal-certs\", \"i\", false, \"Use internal CA for issuing certs\")\n\t\t\tcmd.Flags().StringArrayP(\"header-up\", \"H\", []string{}, \"Set a request header to send to the upstream (format: \\\"Field: value\\\")\")\n\t\t\tcmd.Flags().StringArrayP(\"header-down\", \"d\", []string{}, \"Set a response header to send back to the client (format: \\\"Field: value\\\")\")\n\t\t\tcmd.Flags().BoolP(\"access-log\", \"\", false, \"Enable the access log\")\n\t\t\tcmd.Flags().BoolP(\"debug\", \"v\", false, \"Enable verbose debug logs\")\n\t\t\tcmd.RunE = caddycmd.WrapCommandFuncForCobra(cmdReverseProxy)\n\t\t},\n\t})\n}\n\nfunc cmdReverseProxy(fs caddycmd.Flags) (int, error) {\n\tcaddy.TrapSignals()\n\n\tfrom := fs.String(\"from\")\n\tchangeHost := fs.Bool(\"change-host-header\")\n\tinsecure := fs.Bool(\"insecure\")\n\tdisableRedir := fs.Bool(\"disable-redirects\")\n\tinternalCerts := fs.Bool(\"internal-certs\")\n\taccessLog := fs.Bool(\"access-log\")\n\tdebug := fs.Bool(\"debug\")\n\n\thttpPort := strconv.Itoa(caddyhttp.DefaultHTTPPort)\n\thttpsPort := strconv.Itoa(caddyhttp.DefaultHTTPSPort)\n\n\tto, err := fs.GetStringSlice(\"to\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid to flag: %v\", err)\n\t}\n\tif len(to) == 0 
{\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"--to is required\")\n\t}\n\n\t// set up the downstream address; assume missing information from given parts\n\tfromAddr, err := httpcaddyfile.ParseAddress(from)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid downstream address %s: %v\", from, err)\n\t}\n\tif fromAddr.Path != \"\" {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"paths are not allowed: %s\", from)\n\t}\n\tif fromAddr.Scheme == \"\" {\n\t\tif fromAddr.Port == httpPort || fromAddr.Host == \"\" {\n\t\t\tfromAddr.Scheme = \"http\"\n\t\t} else {\n\t\t\tfromAddr.Scheme = \"https\"\n\t\t}\n\t}\n\tif fromAddr.Port == \"\" {\n\t\tswitch fromAddr.Scheme {\n\t\tcase \"http\":\n\t\t\tfromAddr.Port = httpPort\n\t\tcase \"https\":\n\t\t\tfromAddr.Port = httpsPort\n\t\t}\n\t}\n\n\t// set up the upstream address; assume missing information from given parts\n\t// mixing schemes isn't supported, so use first defined (if available)\n\ttoAddresses := make([]string, len(to))\n\tvar toScheme string\n\tfor i, toLoc := range to {\n\t\taddr, err := parseUpstreamDialAddress(toLoc)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid upstream address %s: %v\", toLoc, err)\n\t\t}\n\t\tif addr.scheme != \"\" && toScheme == \"\" {\n\t\t\ttoScheme = addr.scheme\n\t\t}\n\t\ttoAddresses[i] = addr.dialAddr()\n\t}\n\n\t// proceed to build the handler and server\n\tht := HTTPTransport{}\n\tif toScheme == \"https\" {\n\t\tht.TLS = new(TLSConfig)\n\t\tif insecure {\n\t\t\tht.TLS.InsecureSkipVerify = true\n\t\t}\n\t}\n\n\tupstreamPool := UpstreamPool{}\n\tfor _, toAddr := range toAddresses {\n\t\tparsedAddr, err := caddy.ParseNetworkAddress(toAddr)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid upstream address %s: %v\", toAddr, err)\n\t\t}\n\n\t\tif parsedAddr.StartPort == 0 && parsedAddr.EndPort == 0 {\n\t\t\t// unix networks don't have ports\n\t\t\tupstreamPool = 
append(upstreamPool, &Upstream{\n\t\t\t\tDial: toAddr,\n\t\t\t})\n\t\t} else {\n\t\t\t// expand a port range into multiple upstreams\n\t\t\tfor i := parsedAddr.StartPort; i <= parsedAddr.EndPort; i++ {\n\t\t\t\tupstreamPool = append(upstreamPool, &Upstream{\n\t\t\t\t\tDial: caddy.JoinNetworkAddress(\"\", parsedAddr.Host, fmt.Sprint(i)),\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n\n\thandler := Handler{\n\t\tTransportRaw: caddyconfig.JSONModuleObject(ht, \"protocol\", \"http\", nil),\n\t\tUpstreams:    upstreamPool,\n\t}\n\n\t// set up header_up\n\theaderUp, err := fs.GetStringArray(\"header-up\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid header flag: %v\", err)\n\t}\n\tif len(headerUp) > 0 {\n\t\treqHdr := make(http.Header)\n\t\tfor i, h := range headerUp {\n\t\t\tkey, val, found := strings.Cut(h, \":\")\n\t\t\tkey, val = strings.TrimSpace(key), strings.TrimSpace(val)\n\t\t\tif !found || key == \"\" || val == \"\" {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"header-up %d: invalid format \\\"%s\\\" (expecting \\\"Field: value\\\")\", i, h)\n\t\t\t}\n\t\t\treqHdr.Set(key, val)\n\t\t}\n\t\thandler.Headers = &headers.Handler{\n\t\t\tRequest: &headers.HeaderOps{\n\t\t\t\tSet: reqHdr,\n\t\t\t},\n\t\t}\n\t}\n\n\t// set up header_down\n\theaderDown, err := fs.GetStringArray(\"header-down\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid header flag: %v\", err)\n\t}\n\tif len(headerDown) > 0 {\n\t\trespHdr := make(http.Header)\n\t\tfor i, h := range headerDown {\n\t\t\tkey, val, found := strings.Cut(h, \":\")\n\t\t\tkey, val = strings.TrimSpace(key), strings.TrimSpace(val)\n\t\t\tif !found || key == \"\" || val == \"\" {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"header-down %d: invalid format \\\"%s\\\" (expecting \\\"Field: value\\\")\", i, h)\n\t\t\t}\n\t\t\trespHdr.Set(key, val)\n\t\t}\n\t\tif handler.Headers == nil {\n\t\t\thandler.Headers = 
&headers.Handler{}\n\t\t}\n\t\thandler.Headers.Response = &headers.RespHeaderOps{\n\t\t\tHeaderOps: &headers.HeaderOps{\n\t\t\t\tSet: respHdr,\n\t\t\t},\n\t\t}\n\t}\n\n\tif changeHost {\n\t\tif handler.Headers == nil {\n\t\t\thandler.Headers = new(headers.Handler)\n\t\t}\n\t\tif handler.Headers.Request == nil {\n\t\t\thandler.Headers.Request = new(headers.HeaderOps)\n\t\t}\n\t\tif handler.Headers.Request.Set == nil {\n\t\t\thandler.Headers.Request.Set = http.Header{}\n\t\t}\n\t\thandler.Headers.Request.Set.Set(\"Host\", \"{http.reverse_proxy.upstream.hostport}\")\n\t}\n\n\troute := caddyhttp.Route{\n\t\tHandlersRaw: []json.RawMessage{\n\t\t\tcaddyconfig.JSONModuleObject(handler, \"handler\", \"reverse_proxy\", nil),\n\t\t},\n\t}\n\tif fromAddr.Host != \"\" {\n\t\troute.MatcherSetsRaw = []caddy.ModuleMap{\n\t\t\t{\n\t\t\t\t\"host\": caddyconfig.JSON(caddyhttp.MatchHost{fromAddr.Host}, nil),\n\t\t\t},\n\t\t}\n\t}\n\n\tserver := &caddyhttp.Server{\n\t\tRoutes: caddyhttp.RouteList{route},\n\t\tListen: []string{\":\" + fromAddr.Port},\n\t}\n\tif accessLog {\n\t\tserver.Logs = &caddyhttp.ServerLogConfig{}\n\t}\n\n\tif fromAddr.Scheme == \"http\" {\n\t\tserver.AutoHTTPS = &caddyhttp.AutoHTTPSConfig{Disabled: true}\n\t} else if disableRedir {\n\t\tserver.AutoHTTPS = &caddyhttp.AutoHTTPSConfig{DisableRedir: true}\n\t}\n\n\thttpApp := caddyhttp.App{\n\t\tServers: map[string]*caddyhttp.Server{\"proxy\": server},\n\t}\n\n\tappsRaw := caddy.ModuleMap{\n\t\t\"http\": caddyconfig.JSON(httpApp, nil),\n\t}\n\tif internalCerts && fromAddr.Host != \"\" {\n\t\ttlsApp := caddytls.TLS{\n\t\t\tAutomation: &caddytls.AutomationConfig{\n\t\t\t\tPolicies: []*caddytls.AutomationPolicy{{\n\t\t\t\t\tSubjectsRaw: []string{fromAddr.Host},\n\t\t\t\t\tIssuersRaw:  []json.RawMessage{json.RawMessage(`{\"module\":\"internal\"}`)},\n\t\t\t\t}},\n\t\t\t},\n\t\t}\n\t\tappsRaw[\"tls\"] = caddyconfig.JSON(tlsApp, nil)\n\t}\n\n\tvar false bool\n\tcfg := &caddy.Config{\n\t\tAdmin: 
&caddy.AdminConfig{\n\t\t\tDisabled: true,\n\t\t\tConfig: &caddy.ConfigSettings{\n\t\t\t\tPersist: &false,\n\t\t\t},\n\t\t},\n\t\tAppsRaw: appsRaw,\n\t}\n\n\tif debug {\n\t\tcfg.Logging = &caddy.Logging{\n\t\t\tLogs: map[string]*caddy.CustomLog{\n\t\t\t\t\"default\": {BaseLog: caddy.BaseLog{Level: zap.DebugLevel.CapitalString()}},\n\t\t\t},\n\t\t}\n\t}\n\n\terr = caddy.Run(cfg)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tcaddy.Log().Info(\"caddy proxying\", zap.String(\"from\", fromAddr.String()), zap.Strings(\"to\", toAddresses))\n\tif len(toAddresses) > 1 {\n\t\tcaddy.Log().Info(\"using default load balancing policy\", zap.String(\"policy\", \"random\"))\n\t}\n\n\tselect {}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/copyresponse.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"strconv\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(CopyResponseHandler{})\n\tcaddy.RegisterModule(CopyResponseHeadersHandler{})\n}\n\n// CopyResponseHandler is a special HTTP handler which may\n// only be used within reverse_proxy's handle_response routes,\n// to copy the proxy response. 
EXPERIMENTAL, subject to change.\ntype CopyResponseHandler struct {\n\t// To write the upstream response's body but with a different\n\t// status code, set this field to the desired status code.\n\tStatusCode caddyhttp.WeakString `json:\"status_code,omitempty\"`\n\n\tctx caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (CopyResponseHandler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.copy_response\",\n\t\tNew: func() caddy.Module { return new(CopyResponseHandler) },\n\t}\n}\n\n// Provision ensures that h is set up properly before use.\nfunc (h *CopyResponseHandler) Provision(ctx caddy.Context) error {\n\th.ctx = ctx\n\treturn nil\n}\n\n// ServeHTTP implements the Handler interface.\nfunc (h CopyResponseHandler) ServeHTTP(rw http.ResponseWriter, req *http.Request, _ caddyhttp.Handler) error {\n\trepl := req.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\thrc, ok := req.Context().Value(proxyHandleResponseContextCtxKey).(*handleResponseContext)\n\n\t// don't allow this to be used outside of handle_response routes\n\tif !ok {\n\t\treturn caddyhttp.Error(http.StatusInternalServerError,\n\t\t\tfmt.Errorf(\"cannot use 'copy_response' outside of reverse_proxy's handle_response routes\"))\n\t}\n\n\t// allow a custom status code to be written; otherwise the\n\t// status code from the upstream response is written\n\tif codeStr := h.StatusCode.String(); codeStr != \"\" {\n\t\tintVal, err := strconv.Atoi(repl.ReplaceAll(codeStr, \"\"))\n\t\tif err != nil {\n\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t}\n\t\thrc.response.StatusCode = intVal\n\t}\n\n\t// make sure the reverse_proxy handler doesn't try to call\n\t// finalizeResponse again after we've already done it here.\n\thrc.isFinalized = true\n\n\t// write the response\n\treturn hrc.handler.finalizeResponse(rw, req, hrc.response, repl, hrc.start, hrc.logger)\n}\n\n// CopyResponseHeadersHandler is a special HTTP 
handler which may\n// only be used within reverse_proxy's handle_response routes,\n// to copy headers from the proxy response. EXPERIMENTAL;\n// subject to change.\ntype CopyResponseHeadersHandler struct {\n\t// A list of header fields to copy from the response.\n\t// Cannot be defined at the same time as Exclude.\n\tInclude []string `json:\"include,omitempty\"`\n\n\t// A list of header fields to skip copying from the response.\n\t// Cannot be defined at the same time as Include.\n\tExclude []string `json:\"exclude,omitempty\"`\n\n\tincludeMap map[string]struct{}\n\texcludeMap map[string]struct{}\n\tctx        caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (CopyResponseHeadersHandler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.copy_response_headers\",\n\t\tNew: func() caddy.Module { return new(CopyResponseHeadersHandler) },\n\t}\n}\n\n// Validate ensures the h's configuration is valid.\nfunc (h *CopyResponseHeadersHandler) Validate() error {\n\tif len(h.Exclude) > 0 && len(h.Include) > 0 {\n\t\treturn fmt.Errorf(\"cannot define both 'exclude' and 'include' lists at the same time\")\n\t}\n\n\treturn nil\n}\n\n// Provision ensures that h is set up properly before use.\nfunc (h *CopyResponseHeadersHandler) Provision(ctx caddy.Context) error {\n\th.ctx = ctx\n\n\t// Optimize the include list by converting it to a map\n\tif len(h.Include) > 0 {\n\t\th.includeMap = map[string]struct{}{}\n\t}\n\tfor _, field := range h.Include {\n\t\th.includeMap[http.CanonicalHeaderKey(field)] = struct{}{}\n\t}\n\n\t// Optimize the exclude list by converting it to a map\n\tif len(h.Exclude) > 0 {\n\t\th.excludeMap = map[string]struct{}{}\n\t}\n\tfor _, field := range h.Exclude {\n\t\th.excludeMap[http.CanonicalHeaderKey(field)] = struct{}{}\n\t}\n\n\treturn nil\n}\n\n// ServeHTTP implements the Handler interface.\nfunc (h CopyResponseHeadersHandler) ServeHTTP(rw http.ResponseWriter, req *http.Request, next 
caddyhttp.Handler) error {\n\thrc, ok := req.Context().Value(proxyHandleResponseContextCtxKey).(*handleResponseContext)\n\n\t// don't allow this to be used outside of handle_response routes\n\tif !ok {\n\t\treturn caddyhttp.Error(http.StatusInternalServerError,\n\t\t\tfmt.Errorf(\"cannot use 'copy_response_headers' outside of reverse_proxy's handle_response routes\"))\n\t}\n\n\tfor field, values := range hrc.response.Header {\n\t\t// Check the include list first, skip\n\t\t// the header if it's _not_ in this list.\n\t\tif len(h.includeMap) > 0 {\n\t\t\tif _, ok := h.includeMap[field]; !ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\t// Then, check the exclude list, skip\n\t\t// the header if it _is_ in this list.\n\t\tif len(h.excludeMap) > 0 {\n\t\t\tif _, ok := h.excludeMap[field]; ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\t// Copy all the values for the header.\n\t\tfor _, value := range values {\n\t\t\trw.Header().Add(field, value)\n\t\t}\n\t}\n\n\treturn next.ServeHTTP(rw, req)\n}\n\n// Interface guards\nvar (\n\t_ caddyhttp.MiddlewareHandler = (*CopyResponseHandler)(nil)\n\t_ caddyfile.Unmarshaler       = (*CopyResponseHandler)(nil)\n\t_ caddy.Provisioner           = (*CopyResponseHandler)(nil)\n\n\t_ caddyhttp.MiddlewareHandler = (*CopyResponseHeadersHandler)(nil)\n\t_ caddyfile.Unmarshaler       = (*CopyResponseHeadersHandler)(nil)\n\t_ caddy.Provisioner           = (*CopyResponseHeadersHandler)(nil)\n\t_ caddy.Validator             = (*CopyResponseHeadersHandler)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/dynamic_upstreams_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// resetDynamicHosts clears global dynamic host state between tests.\nfunc resetDynamicHosts() {\n\tdynamicHostsMu.Lock()\n\tdynamicHosts = make(map[string]dynamicHostEntry)\n\tdynamicHostsMu.Unlock()\n\t// Reset the Once so cleanup goroutine tests can re-trigger if needed.\n\tdynamicHostsCleanerOnce = sync.Once{}\n}\n\n// TestFillDynamicHostCreatesEntry verifies that calling fillDynamicHost on a\n// new address inserts an entry into dynamicHosts and assigns a non-nil Host.\nfunc TestFillDynamicHostCreatesEntry(t *testing.T) {\n\tresetDynamicHosts()\n\n\tu := &Upstream{Dial: \"192.0.2.1:80\"}\n\tu.fillDynamicHost()\n\n\tif u.Host == nil {\n\t\tt.Fatal(\"expected Host to be set after fillDynamicHost\")\n\t}\n\n\tdynamicHostsMu.RLock()\n\tentry, ok := dynamicHosts[\"192.0.2.1:80\"]\n\tdynamicHostsMu.RUnlock()\n\n\tif !ok {\n\t\tt.Fatal(\"expected entry in dynamicHosts map\")\n\t}\n\tif entry.host != u.Host {\n\t\tt.Error(\"dynamicHosts entry host should be the same pointer assigned to Upstream.Host\")\n\t}\n\tif entry.lastSeen.IsZero() {\n\t\tt.Error(\"expected lastSeen to be set\")\n\t}\n}\n\n// TestFillDynamicHostReusesSameHost verifies that two calls for the same address\n// return the exact same *Host pointer so 
that state (e.g. fail counts) is shared.\nfunc TestFillDynamicHostReusesSameHost(t *testing.T) {\n\tresetDynamicHosts()\n\n\tu1 := &Upstream{Dial: \"192.0.2.2:80\"}\n\tu1.fillDynamicHost()\n\n\tu2 := &Upstream{Dial: \"192.0.2.2:80\"}\n\tu2.fillDynamicHost()\n\n\tif u1.Host != u2.Host {\n\t\tt.Error(\"expected both upstreams to share the same *Host pointer\")\n\t}\n}\n\n// TestFillDynamicHostUpdatesLastSeen verifies that a second call for the same\n// address advances the lastSeen timestamp.\nfunc TestFillDynamicHostUpdatesLastSeen(t *testing.T) {\n\tresetDynamicHosts()\n\n\tu := &Upstream{Dial: \"192.0.2.3:80\"}\n\tu.fillDynamicHost()\n\n\tdynamicHostsMu.RLock()\n\tfirst := dynamicHosts[\"192.0.2.3:80\"].lastSeen\n\tdynamicHostsMu.RUnlock()\n\n\t// Ensure measurable time passes.\n\ttime.Sleep(2 * time.Millisecond)\n\n\tu2 := &Upstream{Dial: \"192.0.2.3:80\"}\n\tu2.fillDynamicHost()\n\n\tdynamicHostsMu.RLock()\n\tsecond := dynamicHosts[\"192.0.2.3:80\"].lastSeen\n\tdynamicHostsMu.RUnlock()\n\n\tif !second.After(first) {\n\t\tt.Error(\"expected lastSeen to be updated on second fillDynamicHost call\")\n\t}\n}\n\n// TestFillDynamicHostIndependentAddresses verifies that different addresses get\n// independent Host entries.\nfunc TestFillDynamicHostIndependentAddresses(t *testing.T) {\n\tresetDynamicHosts()\n\n\tu1 := &Upstream{Dial: \"192.0.2.4:80\"}\n\tu1.fillDynamicHost()\n\n\tu2 := &Upstream{Dial: \"192.0.2.5:80\"}\n\tu2.fillDynamicHost()\n\n\tif u1.Host == u2.Host {\n\t\tt.Error(\"different addresses should have different *Host entries\")\n\t}\n}\n\n// TestFillDynamicHostPreservesFailCount verifies that fail counts on a dynamic\n// host survive across multiple fillDynamicHost calls (simulating sequential\n// requests), which is the core behaviour fixed by this change.\nfunc TestFillDynamicHostPreservesFailCount(t *testing.T) {\n\tresetDynamicHosts()\n\n\t// First \"request\": provision and record a failure.\n\tu1 := &Upstream{Dial: 
\"192.0.2.6:80\"}\n\tu1.fillDynamicHost()\n\t_ = u1.Host.countFail(1)\n\n\tif u1.Host.Fails() != 1 {\n\t\tt.Fatalf(\"expected 1 fail, got %d\", u1.Host.Fails())\n\t}\n\n\t// Second \"request\": provision the same address again (new *Upstream, same address).\n\tu2 := &Upstream{Dial: \"192.0.2.6:80\"}\n\tu2.fillDynamicHost()\n\n\tif u2.Host.Fails() != 1 {\n\t\tt.Errorf(\"expected fail count to persist across fillDynamicHost calls, got %d\", u2.Host.Fails())\n\t}\n}\n\n// TestProvisionUpstreamDynamic verifies that provisionUpstream with dynamic=true\n// uses fillDynamicHost (not the UsagePool) and sets healthCheckPolicy /\n// MaxRequests correctly from handler config.\nfunc TestProvisionUpstreamDynamic(t *testing.T) {\n\tresetDynamicHosts()\n\n\tpassive := &PassiveHealthChecks{\n\t\tFailDuration:          caddy.Duration(10 * time.Second),\n\t\tMaxFails:              3,\n\t\tUnhealthyRequestCount: 5,\n\t}\n\th := Handler{\n\t\tHealthChecks: &HealthChecks{\n\t\t\tPassive: passive,\n\t\t},\n\t}\n\n\tu := &Upstream{Dial: \"192.0.2.7:80\"}\n\th.provisionUpstream(u, true)\n\n\tif u.Host == nil {\n\t\tt.Fatal(\"Host should be set after provisionUpstream\")\n\t}\n\tif u.healthCheckPolicy != passive {\n\t\tt.Error(\"healthCheckPolicy should point to the handler's PassiveHealthChecks\")\n\t}\n\tif u.MaxRequests != 5 {\n\t\tt.Errorf(\"expected MaxRequests=5 from UnhealthyRequestCount, got %d\", u.MaxRequests)\n\t}\n\n\t// Must be in dynamicHosts, not in the static UsagePool.\n\tdynamicHostsMu.RLock()\n\t_, inDynamic := dynamicHosts[\"192.0.2.7:80\"]\n\tdynamicHostsMu.RUnlock()\n\tif !inDynamic {\n\t\tt.Error(\"dynamic upstream should be stored in dynamicHosts\")\n\t}\n\t_, inPool := hosts.References(\"192.0.2.7:80\")\n\tif inPool {\n\t\tt.Error(\"dynamic upstream should NOT be stored in the static UsagePool\")\n\t}\n}\n\n// TestProvisionUpstreamStatic verifies that provisionUpstream with dynamic=false\n// uses the UsagePool and does NOT insert into dynamicHosts.\nfunc 
TestProvisionUpstreamStatic(t *testing.T) {\n\tresetDynamicHosts()\n\n\th := Handler{}\n\n\tu := &Upstream{Dial: \"192.0.2.8:80\"}\n\th.provisionUpstream(u, false)\n\n\tif u.Host == nil {\n\t\tt.Fatal(\"Host should be set after provisionUpstream\")\n\t}\n\n\trefs, inPool := hosts.References(\"192.0.2.8:80\")\n\tif !inPool {\n\t\tt.Error(\"static upstream should be in the UsagePool\")\n\t}\n\tif refs != 1 {\n\t\tt.Errorf(\"expected ref count 1, got %d\", refs)\n\t}\n\n\tdynamicHostsMu.RLock()\n\t_, inDynamic := dynamicHosts[\"192.0.2.8:80\"]\n\tdynamicHostsMu.RUnlock()\n\tif inDynamic {\n\t\tt.Error(\"static upstream should NOT be in dynamicHosts\")\n\t}\n\n\t// Clean up the pool entry we just added.\n\t_, _ = hosts.Delete(\"192.0.2.8:80\")\n}\n\n// TestDynamicHostHealthyConsultsFails verifies the end-to-end passive health\n// check path: after enough failures are recorded against a dynamic upstream's\n// shared *Host, Healthy() returns false for a newly provisioned *Upstream with\n// the same address.\nfunc TestDynamicHostHealthyConsultsFails(t *testing.T) {\n\tresetDynamicHosts()\n\n\tpassive := &PassiveHealthChecks{\n\t\tFailDuration: caddy.Duration(time.Minute),\n\t\tMaxFails:     2,\n\t}\n\th := Handler{\n\t\tHealthChecks: &HealthChecks{Passive: passive},\n\t}\n\n\t// First request: provision and record two failures.\n\tu1 := &Upstream{Dial: \"192.0.2.9:80\"}\n\th.provisionUpstream(u1, true)\n\n\t_ = u1.Host.countFail(1)\n\t_ = u1.Host.countFail(1)\n\n\t// Second request: fresh *Upstream, same address.\n\tu2 := &Upstream{Dial: \"192.0.2.9:80\"}\n\th.provisionUpstream(u2, true)\n\n\tif u2.Healthy() {\n\t\tt.Error(\"upstream should be unhealthy after MaxFails failures have been recorded against its shared Host\")\n\t}\n}\n\n// TestDynamicHostCleanupEvictsStaleEntries verifies that the cleanup sweep\n// removes entries whose lastSeen is older than dynamicHostIdleExpiry.\nfunc TestDynamicHostCleanupEvictsStaleEntries(t *testing.T) 
{\n\tresetDynamicHosts()\n\n\tconst addr = \"192.0.2.10:80\"\n\n\t// Insert an entry directly with a lastSeen far in the past.\n\tdynamicHostsMu.Lock()\n\tdynamicHosts[addr] = dynamicHostEntry{\n\t\thost:     new(Host),\n\t\tlastSeen: time.Now().Add(-2 * dynamicHostIdleExpiry),\n\t}\n\tdynamicHostsMu.Unlock()\n\n\t// Run the cleanup logic inline (same logic as the goroutine).\n\tdynamicHostsMu.Lock()\n\tfor a, entry := range dynamicHosts {\n\t\tif time.Since(entry.lastSeen) > dynamicHostIdleExpiry {\n\t\t\tdelete(dynamicHosts, a)\n\t\t}\n\t}\n\tdynamicHostsMu.Unlock()\n\n\tdynamicHostsMu.RLock()\n\t_, stillPresent := dynamicHosts[addr]\n\tdynamicHostsMu.RUnlock()\n\n\tif stillPresent {\n\t\tt.Error(\"stale dynamic host entry should have been evicted by cleanup sweep\")\n\t}\n}\n\n// TestDynamicHostCleanupRetainsFreshEntries verifies that the cleanup sweep\n// keeps entries whose lastSeen is within dynamicHostIdleExpiry.\nfunc TestDynamicHostCleanupRetainsFreshEntries(t *testing.T) {\n\tresetDynamicHosts()\n\n\tconst addr = \"192.0.2.11:80\"\n\n\tdynamicHostsMu.Lock()\n\tdynamicHosts[addr] = dynamicHostEntry{\n\t\thost:     new(Host),\n\t\tlastSeen: time.Now(),\n\t}\n\tdynamicHostsMu.Unlock()\n\n\t// Run the cleanup logic inline.\n\tdynamicHostsMu.Lock()\n\tfor a, entry := range dynamicHosts {\n\t\tif time.Since(entry.lastSeen) > dynamicHostIdleExpiry {\n\t\t\tdelete(dynamicHosts, a)\n\t\t}\n\t}\n\tdynamicHostsMu.Unlock()\n\n\tdynamicHostsMu.RLock()\n\t_, stillPresent := dynamicHosts[addr]\n\tdynamicHostsMu.RUnlock()\n\n\tif !stillPresent {\n\t\tt.Error(\"fresh dynamic host entry should be retained by cleanup sweep\")\n\t}\n}\n\n// TestDynamicHostConcurrentFillHost verifies that concurrent calls to\n// fillDynamicHost for the same address all get the same *Host pointer and\n// don't race (run with -race).\nfunc TestDynamicHostConcurrentFillHost(t *testing.T) {\n\tresetDynamicHosts()\n\n\tconst addr = \"192.0.2.12:80\"\n\tconst goroutines = 50\n\n\tvar wg 
sync.WaitGroup\n\thosts := make([]*Host, goroutines)\n\n\tfor i := range goroutines {\n\t\twg.Add(1)\n\t\tgo func(idx int) {\n\t\t\tdefer wg.Done()\n\t\t\tu := &Upstream{Dial: addr}\n\t\t\tu.fillDynamicHost()\n\t\t\thosts[idx] = u.Host\n\t\t}(i)\n\t}\n\twg.Wait()\n\n\tfirst := hosts[0]\n\tfor i, h := range hosts {\n\t\tif h != first {\n\t\t\tt.Errorf(\"goroutine %d got a different *Host pointer; expected all to share the same entry\", i)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\nimport (\n\t\"encoding/json\"\n\t\"net/http\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/fileserver\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/rewrite\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterDirective(\"php_fastcgi\", parsePHPFastCGI)\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into h.\n//\n//\ttransport fastcgi {\n//\t    root <path>\n//\t    split <at>\n//\t    env <key> <value>\n//\t    resolve_root_symlink\n//\t    dial_timeout <duration>\n//\t    read_timeout <duration>\n//\t    write_timeout <duration>\n//\t    capture_stderr\n//\t}\nfunc (t *Transport) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume transport name\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"root\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tt.Root = d.Val()\n\n\t\tcase \"split\":\n\t\t\tt.SplitPath = d.RemainingArgs()\n\t\t\tif len(t.SplitPath) == 
0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"env\":\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) != 2 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif t.EnvVars == nil {\n\t\t\t\tt.EnvVars = make(map[string]string)\n\t\t\t}\n\t\t\tt.EnvVars[args[0]] = args[1]\n\n\t\tcase \"resolve_root_symlink\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tt.ResolveRootSymlink = true\n\n\t\tcase \"dial_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tt.DialTimeout = caddy.Duration(dur)\n\n\t\tcase \"read_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tt.ReadTimeout = caddy.Duration(dur)\n\n\t\tcase \"write_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tt.WriteTimeout = caddy.Duration(dur)\n\n\t\tcase \"capture_stderr\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tt.CaptureStderr = true\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// parsePHPFastCGI parses the php_fastcgi directive, which has the same syntax\n// as the reverse_proxy directive (in fact, the reverse_proxy's directive\n// Unmarshaler is invoked by this function) but the resulting proxy is specially\n// configured for most™️ PHP apps over FastCGI. 
A line such as this:\n//\n//\tphp_fastcgi localhost:7777\n//\n// is equivalent to a route consisting of:\n//\n//\t# Add trailing slash for directory requests\n//\t# This redirection is automatically disabled if \"{http.request.uri.path}/index.php\"\n//\t# doesn't appear in the try_files list\n//\t@canonicalPath {\n//\t    file {path}/index.php\n//\t    not path */\n//\t}\n//\tredir @canonicalPath {path}/ 308\n//\n//\t# If the requested file does not exist, try index files and assume index.php always exists\n//\t@indexFiles file {\n//\t    try_files {path} {path}/index.php index.php\n//\t    try_policy first_exist_fallback\n//\t    split_path .php\n//\t}\n//\trewrite @indexFiles {http.matchers.file.relative}\n//\n//\t# Proxy PHP files to the FastCGI responder\n//\t@phpFiles path *.php\n//\treverse_proxy @phpFiles localhost:7777 {\n//\t    transport fastcgi {\n//\t        split .php\n//\t    }\n//\t}\n//\n// Thus, this directive produces multiple handlers, each with a different\n// matcher because multiple consecutive handlers are necessary to support\n// the common PHP use case. If this \"common\" config is not compatible\n// with a user's PHP requirements, they can use a manual approach based\n// on the example above to configure it precisely as they need.\n//\n// If a matcher is specified by the user, for example:\n//\n//\tphp_fastcgi /subpath localhost:7777\n//\n// then the resulting handlers are wrapped in a subroute that uses the\n// user's matcher as a prerequisite to enter the subroute. 
In other\n// words, the directive's matcher is necessary, but not sufficient.\nfunc parsePHPFastCGI(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\tif !h.Next() {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\t// set up the transport for FastCGI, and specifically PHP\n\tfcgiTransport := Transport{}\n\n\t// set up the set of file extensions allowed to execute PHP code\n\textensions := []string{\".php\"}\n\n\t// set the default index file for the try_files rewrites\n\tindexFile := \"index.php\"\n\n\t// set up for explicitly overriding try_files\n\tvar tryFiles []string\n\n\t// if the user specified a matcher token, use that\n\t// matcher in a route that wraps both of our routes;\n\t// either way, strip the matcher token and pass\n\t// the remaining tokens to the unmarshaler so that\n\t// we can gain the rest of the reverse_proxy syntax\n\tuserMatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// make a new dispenser from the remaining tokens so that we\n\t// can reset the dispenser back to this point for the\n\t// reverse_proxy unmarshaler to read from it as well\n\tdispenser := h.NewFromNextSegment()\n\n\t// read the subdirectives that we allow as overrides to\n\t// the php_fastcgi shortcut\n\t// NOTE: we delete the tokens as we go so that the reverse_proxy\n\t// unmarshal doesn't see these subdirectives which it cannot handle\n\tfor dispenser.Next() {\n\t\tfor dispenser.NextBlock(0) {\n\t\t\t// ignore any sub-subdirectives that might\n\t\t\t// have the same name somewhere within\n\t\t\t// the reverse_proxy passthrough tokens\n\t\t\tif dispenser.Nesting() != 1 {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// parse the php_fastcgi subdirectives\n\t\t\tswitch dispenser.Val() {\n\t\t\tcase \"root\":\n\t\t\t\tif !dispenser.NextArg() {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\tfcgiTransport.Root = dispenser.Val()\n\t\t\t\tdispenser.DeleteN(2)\n\n\t\t\tcase \"split\":\n\t\t\t\textensions = 
dispenser.RemainingArgs()\n\t\t\t\tdispenser.DeleteN(len(extensions) + 1)\n\t\t\t\tif len(extensions) == 0 {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\n\t\t\tcase \"env\":\n\t\t\t\targs := dispenser.RemainingArgs()\n\t\t\t\tdispenser.DeleteN(len(args) + 1)\n\t\t\t\tif len(args) != 2 {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\tif fcgiTransport.EnvVars == nil {\n\t\t\t\t\tfcgiTransport.EnvVars = make(map[string]string)\n\t\t\t\t}\n\t\t\t\tfcgiTransport.EnvVars[args[0]] = args[1]\n\n\t\t\tcase \"index\":\n\t\t\t\targs := dispenser.RemainingArgs()\n\t\t\t\tdispenser.DeleteN(len(args) + 1)\n\t\t\t\tif len(args) != 1 {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\tindexFile = args[0]\n\n\t\t\tcase \"try_files\":\n\t\t\t\targs := dispenser.RemainingArgs()\n\t\t\t\tdispenser.DeleteN(len(args) + 1)\n\t\t\t\tif len(args) < 1 {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\ttryFiles = args\n\n\t\t\tcase \"resolve_root_symlink\":\n\t\t\t\targs := dispenser.RemainingArgs()\n\t\t\t\tdispenser.DeleteN(len(args) + 1)\n\t\t\t\tfcgiTransport.ResolveRootSymlink = true\n\n\t\t\tcase \"dial_timeout\":\n\t\t\t\tif !dispenser.NextArg() {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\tdur, err := caddy.ParseDuration(dispenser.Val())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, dispenser.Errf(\"bad timeout value %s: %v\", dispenser.Val(), err)\n\t\t\t\t}\n\t\t\t\tfcgiTransport.DialTimeout = caddy.Duration(dur)\n\t\t\t\tdispenser.DeleteN(2)\n\n\t\t\tcase \"read_timeout\":\n\t\t\t\tif !dispenser.NextArg() {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\tdur, err := caddy.ParseDuration(dispenser.Val())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, dispenser.Errf(\"bad timeout value %s: %v\", dispenser.Val(), err)\n\t\t\t\t}\n\t\t\t\tfcgiTransport.ReadTimeout = caddy.Duration(dur)\n\t\t\t\tdispenser.DeleteN(2)\n\n\t\t\tcase \"write_timeout\":\n\t\t\t\tif !dispenser.NextArg() {\n\t\t\t\t\treturn 
nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\tdur, err := caddy.ParseDuration(dispenser.Val())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, dispenser.Errf(\"bad timeout value %s: %v\", dispenser.Val(), err)\n\t\t\t\t}\n\t\t\t\tfcgiTransport.WriteTimeout = caddy.Duration(dur)\n\t\t\t\tdispenser.DeleteN(2)\n\n\t\t\tcase \"capture_stderr\":\n\t\t\t\targs := dispenser.RemainingArgs()\n\t\t\t\tdispenser.DeleteN(len(args) + 1)\n\t\t\t\tfcgiTransport.CaptureStderr = true\n\t\t\t}\n\t\t}\n\t}\n\n\t// reset the dispenser after we're done so that the reverse_proxy\n\t// unmarshaler can read it from the start\n\tdispenser.Reset()\n\n\t// set up a route list that we'll append to\n\troutes := caddyhttp.RouteList{}\n\n\t// set the list of allowed path segments on which to split\n\tfcgiTransport.SplitPath = extensions\n\n\t// if the index is turned off, we skip the redirect and try_files\n\tif indexFile != \"off\" {\n\t\tvar dirRedir bool\n\t\tdirIndex := \"{http.request.uri.path}/\" + indexFile\n\t\ttryPolicy := \"first_exist_fallback\"\n\n\t\t// if tryFiles wasn't overridden, use a reasonable default\n\t\tif len(tryFiles) == 0 {\n\t\t\ttryFiles = []string{\"{http.request.uri.path}\", dirIndex, indexFile}\n\t\t\tdirRedir = true\n\t\t} else {\n\t\t\tif !strings.HasSuffix(tryFiles[len(tryFiles)-1], \".php\") {\n\t\t\t\t// use first_exist strategy if the last file is not a PHP file\n\t\t\t\ttryPolicy = \"\"\n\t\t\t}\n\n\t\t\tdirRedir = slices.Contains(tryFiles, dirIndex)\n\t\t}\n\n\t\tif dirRedir {\n\t\t\t// route to redirect to canonical path if index PHP file\n\t\t\tredirMatcherSet := caddy.ModuleMap{\n\t\t\t\t\"file\": h.JSON(fileserver.MatchFile{\n\t\t\t\t\tTryFiles: []string{dirIndex},\n\t\t\t\t}),\n\t\t\t\t\"not\": h.JSON(caddyhttp.MatchNot{\n\t\t\t\t\tMatcherSetsRaw: []caddy.ModuleMap{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"path\": h.JSON(caddyhttp.MatchPath{\"*/\"}),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}),\n\t\t\t}\n\t\t\tredirHandler := 
caddyhttp.StaticResponse{\n\t\t\t\tStatusCode: caddyhttp.WeakString(strconv.Itoa(http.StatusPermanentRedirect)),\n\t\t\t\tHeaders:    http.Header{\"Location\": []string{\"{http.request.orig_uri.path}/{http.request.orig_uri.prefixed_query}\"}},\n\t\t\t}\n\t\t\tredirRoute := caddyhttp.Route{\n\t\t\t\tMatcherSetsRaw: []caddy.ModuleMap{redirMatcherSet},\n\t\t\t\tHandlersRaw:    []json.RawMessage{caddyconfig.JSONModuleObject(redirHandler, \"handler\", \"static_response\", nil)},\n\t\t\t}\n\n\t\t\troutes = append(routes, redirRoute)\n\t\t}\n\n\t\t// route to rewrite to PHP index file\n\t\trewriteMatcherSet := caddy.ModuleMap{\n\t\t\t\"file\": h.JSON(fileserver.MatchFile{\n\t\t\t\tTryFiles:  tryFiles,\n\t\t\t\tTryPolicy: tryPolicy,\n\t\t\t\tSplitPath: extensions,\n\t\t\t}),\n\t\t}\n\t\trewriteHandler := rewrite.Rewrite{\n\t\t\tURI: \"{http.matchers.file.relative}\",\n\t\t}\n\t\trewriteRoute := caddyhttp.Route{\n\t\t\tMatcherSetsRaw: []caddy.ModuleMap{rewriteMatcherSet},\n\t\t\tHandlersRaw:    []json.RawMessage{caddyconfig.JSONModuleObject(rewriteHandler, \"handler\", \"rewrite\", nil)},\n\t\t}\n\n\t\troutes = append(routes, rewriteRoute)\n\t}\n\n\t// route to actually reverse proxy requests to PHP files;\n\t// match only requests that are for PHP files\n\tpathList := []string{}\n\tfor _, ext := range extensions {\n\t\tpathList = append(pathList, \"*\"+ext)\n\t}\n\trpMatcherSet := caddy.ModuleMap{\n\t\t\"path\": h.JSON(pathList),\n\t}\n\n\t// create the reverse proxy handler which uses our FastCGI transport\n\trpHandler := &reverseproxy.Handler{\n\t\tTransportRaw: caddyconfig.JSONModuleObject(fcgiTransport, \"protocol\", \"fastcgi\", nil),\n\t}\n\n\t// the rest of the config is specified by the user\n\t// using the reverse_proxy directive syntax\n\tdispenser.Next() // consume the directive name\n\terr = rpHandler.UnmarshalCaddyfile(dispenser)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = rpHandler.FinalizeUnmarshalCaddyfile(h)\n\tif err != nil {\n\t\treturn nil, 
err\n\t}\n\n\t// create the final reverse proxy route which is\n\t// conditional on matching PHP files\n\trpRoute := caddyhttp.Route{\n\t\tMatcherSetsRaw: []caddy.ModuleMap{rpMatcherSet},\n\t\tHandlersRaw:    []json.RawMessage{caddyconfig.JSONModuleObject(rpHandler, \"handler\", \"reverse_proxy\", nil)},\n\t}\n\n\tsubroute := caddyhttp.Subroute{\n\t\tRoutes: append(routes, rpRoute),\n\t}\n\n\t// the user's matcher is a prerequisite for ours, so\n\t// wrap ours in a subroute and return that\n\tif userMatcherSet != nil {\n\t\treturn []httpcaddyfile.ConfigValue{\n\t\t\t{\n\t\t\t\tClass: \"route\",\n\t\t\t\tValue: caddyhttp.Route{\n\t\t\t\t\tMatcherSetsRaw: []caddy.ModuleMap{userMatcherSet},\n\t\t\t\t\tHandlersRaw:    []json.RawMessage{caddyconfig.JSONModuleObject(subroute, \"handler\", \"subroute\", nil)},\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil\n\t}\n\n\t// otherwise, return the literal subroute instead of\n\t// individual routes, to ensure they stay together and\n\t// are treated as a single unit, without necessarily\n\t// creating an actual subroute in the output\n\treturn []httpcaddyfile.ConfigValue{\n\t\t{\n\t\t\tClass: \"route\",\n\t\t\tValue: subroute,\n\t\t},\n\t}, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/client.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Forked Jan. 2015 from http://bitbucket.org/PinIdea/fcgi_client\n// (which is forked from https://code.google.com/p/go-fastcgi-client/).\n// This fork contains several fixes and improvements by Matt Holt and\n// other contributors to the Caddy project.\n\n// Copyright 2012 Junqing Tan <ivan@mysqlab.net> and The Go Authors\n// Use of this source code is governed by a BSD-style\n// Part of source code is from Go fcgi package\n\npackage fastcgi\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"io\"\n\t\"mime/multipart\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/httputil\"\n\t\"net/textproto\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// FCGIListenSockFileno describes listen socket file number.\nconst FCGIListenSockFileno uint8 = 0\n\n// FCGIHeaderLen describes header length.\nconst FCGIHeaderLen uint8 = 8\n\n// Version1 describes the version.\nconst Version1 uint8 = 1\n\n// FCGINullRequestID describes the null request ID.\nconst FCGINullRequestID uint8 = 0\n\n// FCGIKeepConn describes keep connection mode.\nconst FCGIKeepConn uint8 = 1\n\nconst (\n\t// BeginRequest is the begin request flag.\n\tBeginRequest uint8 = iota + 1\n\t// AbortRequest is the abort request 
flag.\n\tAbortRequest\n\t// EndRequest is the end request flag.\n\tEndRequest\n\t// Params is the parameters flag.\n\tParams\n\t// Stdin is the standard input flag.\n\tStdin\n\t// Stdout is the standard output flag.\n\tStdout\n\t// Stderr is the standard error flag.\n\tStderr\n\t// Data is the data flag.\n\tData\n\t// GetValues is the get values flag.\n\tGetValues\n\t// GetValuesResult is the get values result flag.\n\tGetValuesResult\n\t// UnknownType is the unknown type flag.\n\tUnknownType\n\t// MaxType is the maximum type flag.\n\tMaxType = UnknownType\n)\n\nconst (\n\t// Responder is the responder flag.\n\tResponder uint8 = iota + 1\n\t// Authorizer is the authorizer flag.\n\tAuthorizer\n\t// Filter is the filter flag.\n\tFilter\n)\n\nconst (\n\t// RequestComplete is the completed request flag.\n\tRequestComplete uint8 = iota\n\t// CantMultiplexConns is the multiplexed connections flag.\n\tCantMultiplexConns\n\t// Overloaded is the overloaded flag.\n\tOverloaded\n\t// UnknownRole is the unknown role flag.\n\tUnknownRole\n)\n\nconst (\n\t// MaxConns is the maximum connections flag.\n\tMaxConns string = \"MAX_CONNS\"\n\t// MaxRequests is the maximum requests flag.\n\tMaxRequests string = \"MAX_REQS\"\n\t// MultiplexConns is the multiplex connections flag.\n\tMultiplexConns string = \"MPXS_CONNS\"\n)\n\nconst (\n\tmaxWrite = 65500 // 65530 may work, but for compatibility\n\tmaxPad   = 255\n)\n\n// for padding so we don't have to allocate all the time\n// not synchronized because we don't care what the contents are\nvar pad [maxPad]byte\n\n// client implements a FastCGI client, which is a standard for\n// interfacing external applications with Web servers.\ntype client struct {\n\trwc net.Conn\n\t// keepAlive bool // TODO: implement\n\treqID  uint16\n\tstderr bool\n\tlogger *zap.Logger\n}\n\n// Do makes the request and returns an io.Reader that translates the data read\n// from the fcgi responder out of fcgi packets before returning it.\nfunc (c *client) Do(p 
map[string]string, req io.Reader) (r io.Reader, err error) {\n\t// check for CONTENT_LENGTH, since the lack of it or wrong value will cause the backend to hang\n\tif clStr, ok := p[\"CONTENT_LENGTH\"]; !ok {\n\t\treturn nil, caddyhttp.Error(http.StatusLengthRequired, nil)\n\t} else if _, err := strconv.ParseUint(clStr, 10, 64); err != nil {\n\t\t// stdlib won't return a negative Content-Length, but we check just in case;\n\t\t// the most likely cause is a missing content length, which is -1\n\t\treturn nil, caddyhttp.Error(http.StatusLengthRequired, err)\n\t}\n\n\twriter := &streamWriter{c: c}\n\twriter.buf = bufPool.Get().(*bytes.Buffer)\n\twriter.buf.Reset()\n\tdefer bufPool.Put(writer.buf)\n\n\terr = writer.writeBeginRequest(uint16(Responder), 0)\n\tif err != nil {\n\t\treturn r, err\n\t}\n\n\twriter.recType = Params\n\terr = writer.writePairs(p)\n\tif err != nil {\n\t\treturn r, err\n\t}\n\n\twriter.recType = Stdin\n\tif req != nil {\n\t\t_, err = io.Copy(writer, req)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\terr = writer.FlushStream()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tr = &streamReader{c: c}\n\treturn r, err\n}\n\n// clientCloser is an io.ReadCloser. 
It wraps an io.Reader with a Closer\n// that closes the client connection.\ntype clientCloser struct {\n\trwc net.Conn\n\tr   *streamReader\n\tio.Reader\n\n\tstatus int\n\tlogger *zap.Logger\n}\n\nfunc (f clientCloser) Close() error {\n\tstderr := f.r.stderr.Bytes()\n\tif len(stderr) == 0 {\n\t\treturn f.rwc.Close()\n\t}\n\n\tlogLevel := zapcore.WarnLevel\n\tif f.status >= 400 {\n\t\tlogLevel = zapcore.ErrorLevel\n\t}\n\n\tif c := f.logger.Check(logLevel, \"stderr\"); c != nil {\n\t\tc.Write(zap.ByteString(\"body\", stderr))\n\t}\n\n\treturn f.rwc.Close()\n}\n\n// Request returns an HTTP Response with Header and Body\n// from the fcgi responder.\nfunc (c *client) Request(p map[string]string, req io.Reader) (resp *http.Response, err error) {\n\tr, err := c.Do(p, req)\n\tif err != nil {\n\t\treturn resp, err\n\t}\n\n\trb := bufio.NewReader(r)\n\ttp := textproto.NewReader(rb)\n\tresp = new(http.Response)\n\n\t// Parse the response headers.\n\tmimeHeader, err := tp.ReadMIMEHeader()\n\tif err != nil && err != io.EOF {\n\t\treturn resp, err\n\t}\n\tresp.Header = http.Header(mimeHeader)\n\n\tif resp.Header.Get(\"Status\") != \"\" {\n\t\tstatusNumber, statusInfo, statusIsCut := strings.Cut(resp.Header.Get(\"Status\"), \" \")\n\t\tresp.StatusCode, err = strconv.Atoi(statusNumber)\n\t\tif err != nil {\n\t\t\treturn resp, err\n\t\t}\n\t\tif statusIsCut {\n\t\t\tresp.Status = statusInfo\n\t\t}\n\t} else {\n\t\tresp.StatusCode = http.StatusOK\n\t}\n\n\t// TODO: fixTransferEncoding ?\n\tresp.TransferEncoding = resp.Header[\"Transfer-Encoding\"]\n\tresp.ContentLength, _ = strconv.ParseInt(resp.Header.Get(\"Content-Length\"), 10, 64)\n\n\t// wrap the response body in our closer\n\tcloser := clientCloser{\n\t\trwc:    c.rwc,\n\t\tr:      r.(*streamReader),\n\t\tReader: rb,\n\t\tstatus: resp.StatusCode,\n\t\tlogger: noopLogger,\n\t}\n\tif chunked(resp.TransferEncoding) {\n\t\tcloser.Reader = httputil.NewChunkedReader(rb)\n\t}\n\tif c.stderr {\n\t\tcloser.logger = c.logger\n\t}\n\tresp.Body 
= closer\n\n\treturn resp, err\n}\n\n// Get issues a GET request to the fcgi responder.\nfunc (c *client) Get(p map[string]string, body io.Reader, l int64) (resp *http.Response, err error) {\n\tp[\"REQUEST_METHOD\"] = \"GET\"\n\tp[\"CONTENT_LENGTH\"] = strconv.FormatInt(l, 10)\n\n\treturn c.Request(p, body)\n}\n\n// Head issues a HEAD request to the fcgi responder.\nfunc (c *client) Head(p map[string]string) (resp *http.Response, err error) {\n\tp[\"REQUEST_METHOD\"] = \"HEAD\"\n\tp[\"CONTENT_LENGTH\"] = \"0\"\n\n\treturn c.Request(p, nil)\n}\n\n// Options issues an OPTIONS request to the fcgi responder.\nfunc (c *client) Options(p map[string]string) (resp *http.Response, err error) {\n\tp[\"REQUEST_METHOD\"] = \"OPTIONS\"\n\tp[\"CONTENT_LENGTH\"] = \"0\"\n\n\treturn c.Request(p, nil)\n}\n\n// Post issues a POST request to the fcgi responder, with the request body\n// in the format specified by bodyType.\nfunc (c *client) Post(p map[string]string, method string, bodyType string, body io.Reader, l int64) (resp *http.Response, err error) {\n\tif p == nil {\n\t\tp = make(map[string]string)\n\t}\n\n\tp[\"REQUEST_METHOD\"] = strings.ToUpper(method)\n\n\tif len(p[\"REQUEST_METHOD\"]) == 0 || p[\"REQUEST_METHOD\"] == \"GET\" {\n\t\tp[\"REQUEST_METHOD\"] = \"POST\"\n\t}\n\n\tp[\"CONTENT_LENGTH\"] = strconv.FormatInt(l, 10)\n\tif len(bodyType) > 0 {\n\t\tp[\"CONTENT_TYPE\"] = bodyType\n\t} else {\n\t\tp[\"CONTENT_TYPE\"] = \"application/x-www-form-urlencoded\"\n\t}\n\n\treturn c.Request(p, body)\n}\n\n// PostForm issues a POST to the fcgi responder, with form\n// as a string key to a list of values (url.Values).\nfunc (c *client) PostForm(p map[string]string, data url.Values) (resp *http.Response, err error) {\n\tbody := bytes.NewReader([]byte(data.Encode()))\n\treturn c.Post(p, \"POST\", \"application/x-www-form-urlencoded\", body, int64(body.Len()))\n}\n\n// PostFile issues a POST to the fcgi responder in the multipart (RFC 2046) format,\n// with form as a string key to a list of 
values (url.Values),\n// and/or with file as a string key to a file path.\nfunc (c *client) PostFile(p map[string]string, data url.Values, file map[string]string) (resp *http.Response, err error) {\n\tbuf := &bytes.Buffer{}\n\twriter := multipart.NewWriter(buf)\n\tbodyType := writer.FormDataContentType()\n\n\tfor key, val := range data {\n\t\tfor _, v0 := range val {\n\t\t\terr = writer.WriteField(key, v0)\n\t\t\tif err != nil {\n\t\t\t\treturn resp, err\n\t\t\t}\n\t\t}\n\t}\n\n\tfor key, val := range file {\n\t\tfd, e := os.Open(val)\n\t\tif e != nil {\n\t\t\treturn nil, e\n\t\t}\n\t\tdefer fd.Close()\n\n\t\tpart, e := writer.CreateFormFile(key, filepath.Base(val))\n\t\tif e != nil {\n\t\t\treturn nil, e\n\t\t}\n\t\t_, err = io.Copy(part, fd)\n\t\tif err != nil {\n\t\t\treturn resp, err\n\t\t}\n\t}\n\n\terr = writer.Close()\n\tif err != nil {\n\t\treturn resp, err\n\t}\n\n\treturn c.Post(p, \"POST\", bodyType, buf, int64(buf.Len()))\n}\n\n// SetReadTimeout sets the read timeout for future calls that read from the\n// fcgi responder. A zero value for t means no timeout will be set.\nfunc (c *client) SetReadTimeout(t time.Duration) error {\n\tif t != 0 {\n\t\treturn c.rwc.SetReadDeadline(time.Now().Add(t))\n\t}\n\treturn nil\n}\n\n// SetWriteTimeout sets the write timeout for future calls that send data to\n// the fcgi responder. A zero value for t means no timeout will be set.\nfunc (c *client) SetWriteTimeout(t time.Duration) error {\n\tif t != 0 {\n\t\treturn c.rwc.SetWriteDeadline(time.Now().Add(t))\n\t}\n\treturn nil\n}\n\n// chunked checks whether \"chunked\" is part of the encodings stack.\nfunc chunked(te []string) bool { return len(te) > 0 && te[0] == \"chunked\" }\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/client_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// NOTE: These tests were adapted from the original\n// repository from which this package was forked.\n// The tests are slow (~10s) and in dire need of rewriting.\n// As such, the tests have been disabled to speed up\n// automated builds until they can be properly written.\n\npackage fastcgi\n\nimport (\n\t\"bytes\"\n\t\"crypto/md5\"\n\t\"encoding/binary\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"math/rand/v2\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/fcgi\"\n\t\"net/url\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n)\n\n// test fcgi protocol includes:\n// Get, Post, Post in multipart/form-data, and Post with files\n// each key should be the md5 of the value or the file uploaded\n// specify remote fcgi responder ip:port to test with php\n// test failed if the remote fcgi(script) failed md5 verification\n// and output \"FAILED\" in response\nconst (\n\tscriptFile = \"/tank/www/fcgic_test.php\"\n\t// ipPort = \"remote-php-serv:59000\"\n\tipPort = \"127.0.0.1:59000\"\n)\n\nvar globalt *testing.T\n\ntype FastCGIServer struct{}\n\nfunc (s FastCGIServer) ServeHTTP(resp http.ResponseWriter, req *http.Request) {\n\tif err := req.ParseMultipartForm(100000000); err != nil {\n\t\tlog.Printf(\"[ERROR] failed to parse: %v\", err)\n\t}\n\n\tstat := \"PASSED\"\n\tfmt.Fprintln(resp, \"-\")\n\tfileNum := 
0\n\t{\n\t\tlength := 0\n\t\tfor k0, v0 := range req.Form {\n\t\t\th := md5.New()\n\t\t\t_, _ = io.WriteString(h, v0[0])\n\t\t\t_md5 := fmt.Sprintf(\"%x\", h.Sum(nil))\n\n\t\t\tlength += len(k0)\n\t\t\tlength += len(v0[0])\n\n\t\t\t// echo error when key != _md5(val)\n\t\t\tif _md5 != k0 {\n\t\t\t\tfmt.Fprintln(resp, \"server:err \", _md5, k0)\n\t\t\t\tstat = \"FAILED\"\n\t\t\t}\n\t\t}\n\t\tif req.MultipartForm != nil {\n\t\t\tfileNum = len(req.MultipartForm.File)\n\t\t\tfor kn, fns := range req.MultipartForm.File {\n\t\t\t\t// fmt.Fprintln(resp, \"server:filekey \", kn )\n\t\t\t\tlength += len(kn)\n\t\t\t\tfor _, f := range fns {\n\t\t\t\t\tfd, err := f.Open()\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tlog.Println(\"server:\", err)\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\th := md5.New()\n\t\t\t\t\tl0, err := io.Copy(h, fd)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tlog.Println(err)\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\tlength += int(l0)\n\t\t\t\t\tdefer fd.Close()\n\t\t\t\t\tmd5 := fmt.Sprintf(\"%x\", h.Sum(nil))\n\t\t\t\t\t// fmt.Fprintln(resp, \"server:filemd5 \", md5 )\n\n\t\t\t\t\tif kn != md5 {\n\t\t\t\t\t\tfmt.Fprintln(resp, \"server:err \", md5, kn)\n\t\t\t\t\t\tstat = \"FAILED\"\n\t\t\t\t\t}\n\t\t\t\t\t// fmt.Fprintln(resp, \"server:filename \", f.Filename )\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tfmt.Fprintln(resp, \"server:got data length\", length)\n\t}\n\tfmt.Fprintln(resp, \"-\"+stat+\"-POST(\", len(req.Form), \")-FILE(\", fileNum, \")--\")\n}\n\nfunc sendFcgi(reqType int, fcgiParams map[string]string, data []byte, posts map[string]string, files map[string]string) (content []byte) {\n\tconn, err := net.Dial(\"tcp\", ipPort)\n\tif err != nil {\n\t\tlog.Println(\"err:\", err)\n\t\treturn content\n\t}\n\n\tfcgi := client{rwc: conn, reqID: 1}\n\n\tlength := 0\n\n\tvar resp *http.Response\n\tswitch reqType {\n\tcase 0:\n\t\tif len(data) > 0 {\n\t\t\tlength = len(data)\n\t\t\trd := bytes.NewReader(data)\n\t\t\tresp, err = fcgi.Post(fcgiParams, \"\", \"\", rd, 
int64(rd.Len()))\n\t\t} else if len(posts) > 0 {\n\t\t\tvalues := url.Values{}\n\t\t\tfor k, v := range posts {\n\t\t\t\tvalues.Set(k, v)\n\t\t\t\tlength += len(k) + 2 + len(v)\n\t\t\t}\n\t\t\tresp, err = fcgi.PostForm(fcgiParams, values)\n\t\t} else {\n\t\t\trd := bytes.NewReader(data)\n\t\t\tresp, err = fcgi.Get(fcgiParams, rd, int64(rd.Len()))\n\t\t}\n\n\tdefault:\n\t\tvalues := url.Values{}\n\t\tfor k, v := range posts {\n\t\t\tvalues.Set(k, v)\n\t\t\tlength += len(k) + 2 + len(v)\n\t\t}\n\n\t\tfor k, v := range files {\n\t\t\tfi, _ := os.Lstat(v)\n\t\t\tlength += len(k) + int(fi.Size())\n\t\t}\n\t\tresp, err = fcgi.PostFile(fcgiParams, values, files)\n\t}\n\n\tif err != nil {\n\t\tlog.Println(\"err:\", err)\n\t\treturn content\n\t}\n\n\tdefer resp.Body.Close()\n\tcontent, _ = io.ReadAll(resp.Body)\n\n\tlog.Println(\"c: send data length ≈\", length, string(content))\n\tconn.Close()\n\ttime.Sleep(250 * time.Millisecond)\n\n\tif bytes.Contains(content, []byte(\"FAILED\")) {\n\t\tglobalt.Error(\"Server return failed message\")\n\t}\n\n\treturn content\n}\n\nfunc generateRandFile(size int) (p string, m string) {\n\tp = filepath.Join(os.TempDir(), \"fcgict\"+strconv.Itoa(rand.Int()))\n\n\t// open output file\n\tfo, err := os.Create(p)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\t// close fo on exit and check for its returned error\n\tdefer func() {\n\t\tif err := fo.Close(); err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}()\n\n\th := md5.New()\n\tfor i := 0; i < size/16; i++ {\n\t\tbuf := make([]byte, 16)\n\t\tbinary.PutVarint(buf, rand.Int64())\n\t\tif _, err := fo.Write(buf); err != nil {\n\t\t\tlog.Printf(\"[ERROR] failed to write buffer: %v\\n\", err)\n\t\t}\n\t\tif _, err := h.Write(buf); err != nil {\n\t\t\tlog.Printf(\"[ERROR] failed to write buffer: %v\\n\", err)\n\t\t}\n\t}\n\tm = fmt.Sprintf(\"%x\", h.Sum(nil))\n\treturn p, m\n}\n\nfunc DisabledTest(t *testing.T) {\n\t// TODO: test chunked reader\n\tglobalt = t\n\n\t// server\n\tgo func() {\n\t\tlistener, err := 
net.Listen(\"tcp\", ipPort)\n\t\tif err != nil {\n\t\t\tlog.Println(\"listener creation failed: \", err)\n\t\t}\n\n\t\tsrv := new(FastCGIServer)\n\t\tif err := fcgi.Serve(listener, srv); err != nil {\n\t\t\tlog.Print(\"[ERROR] failed to start server: \", err)\n\t\t}\n\t}()\n\n\ttime.Sleep(250 * time.Millisecond)\n\n\t// init\n\tfcgiParams := make(map[string]string)\n\tfcgiParams[\"REQUEST_METHOD\"] = \"GET\"\n\tfcgiParams[\"SERVER_PROTOCOL\"] = \"HTTP/1.1\"\n\t// fcgi_params[\"GATEWAY_INTERFACE\"] = \"CGI/1.1\"\n\tfcgiParams[\"SCRIPT_FILENAME\"] = scriptFile\n\n\t// simple GET\n\tlog.Println(\"test:\", \"get\")\n\tsendFcgi(0, fcgiParams, nil, nil, nil)\n\n\t// simple post data\n\tlog.Println(\"test:\", \"post\")\n\tsendFcgi(0, fcgiParams, []byte(\"c4ca4238a0b923820dcc509a6f75849b=1&7b8b965ad4bca0e41ab51de7b31363a1=n\"), nil, nil)\n\n\tlog.Println(\"test:\", \"post data (more than 60KB)\")\n\tdata := \"\"\n\tfor i := 0x00; i < 0xff; i++ {\n\t\tv0 := strings.Repeat(fmt.Sprint(i), 256)\n\t\th := md5.New()\n\t\t_, _ = io.WriteString(h, v0)\n\t\tk0 := fmt.Sprintf(\"%x\", h.Sum(nil))\n\t\tdata += k0 + \"=\" + url.QueryEscape(v0) + \"&\"\n\t}\n\tsendFcgi(0, fcgiParams, []byte(data), nil, nil)\n\n\tlog.Println(\"test:\", \"post form (use url.Values)\")\n\tp0 := make(map[string]string, 1)\n\tp0[\"c4ca4238a0b923820dcc509a6f75849b\"] = \"1\"\n\tp0[\"7b8b965ad4bca0e41ab51de7b31363a1\"] = \"n\"\n\tsendFcgi(1, fcgiParams, nil, p0, nil)\n\n\tlog.Println(\"test:\", \"post forms (256 keys, more than 1MB)\")\n\tp1 := make(map[string]string, 1)\n\tfor i := 0x00; i < 0xff; i++ {\n\t\tv0 := strings.Repeat(fmt.Sprint(i), 4096)\n\t\th := md5.New()\n\t\t_, _ = io.WriteString(h, v0)\n\t\tk0 := fmt.Sprintf(\"%x\", h.Sum(nil))\n\t\tp1[k0] = v0\n\t}\n\tsendFcgi(1, fcgiParams, nil, p1, nil)\n\n\tlog.Println(\"test:\", \"post file (1 file, 500KB)) \")\n\tf0 := make(map[string]string, 1)\n\tpath0, m0 := generateRandFile(500000)\n\tf0[m0] = path0\n\tsendFcgi(1, fcgiParams, nil, p1, 
f0)\n\n\tlog.Println(\"test:\", \"post multiple files (2 files, 5M each) and forms (256 keys, more than 1MB data\")\n\tpath1, m1 := generateRandFile(5000000)\n\tf0[m1] = path1\n\tsendFcgi(1, fcgiParams, nil, p1, f0)\n\n\tlog.Println(\"test:\", \"post only files (2 files, 5M each)\")\n\tsendFcgi(1, fcgiParams, nil, nil, f0)\n\n\tlog.Println(\"test:\", \"post only 1 file\")\n\tdelete(f0, \"m0\")\n\tsendFcgi(1, fcgiParams, nil, nil, f0)\n\n\tif err := os.Remove(path0); err != nil {\n\t\tlog.Println(\"[ERROR] failed to remove path: \", err)\n\t}\n\tif err := os.Remove(path1); err != nil {\n\t\tlog.Println(\"[ERROR] failed to remove path: \", err)\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/fastcgi.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\nimport (\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\t\"unicode/utf8\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/text/language\"\n\t\"golang.org/x/text/search\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nvar (\n\tErrInvalidSplitPath = errors.New(\"split path contains non-ASCII characters\")\n\n\tnoopLogger = zap.NewNop()\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Transport{})\n}\n\n// Transport facilitates FastCGI communication.\ntype Transport struct {\n\t// Use this directory as the fastcgi root directory. Defaults to the root\n\t// directory of the parent virtual host.\n\tRoot string `json:\"root,omitempty\"`\n\n\t// The path in the URL will be split into two, with the first piece ending\n\t// with the value of SplitPath. 
The first piece will be assumed as the\n\t// actual resource (CGI script) name, and the second piece will be set to\n\t// PATH_INFO for the CGI script to use.\n\t//\n\t// Split paths can only contain ASCII characters.\n\t// Comparison is case-insensitive.\n\t//\n\t// Future enhancements should be careful to avoid CVE-2019-11043,\n\t// which can be mitigated with use of a try_files-like behavior\n\t// that 404s if the fastcgi path info is not found.\n\tSplitPath []string `json:\"split_path,omitempty\"`\n\n\t// Path declared as root directory will be resolved to its absolute value\n\t// after the evaluation of any symbolic links.\n\t// Due to the nature of PHP opcache, root directory path is cached: when\n\t// using a symlinked directory as root this could generate errors when\n\t// symlink is changed without php-fpm being restarted; enabling this\n\t// directive will set $_SERVER['DOCUMENT_ROOT'] to the real directory path.\n\tResolveRootSymlink bool `json:\"resolve_root_symlink,omitempty\"`\n\n\t// Extra environment variables.\n\tEnvVars map[string]string `json:\"env,omitempty\"`\n\n\t// The duration used to set a deadline when connecting to an upstream. Default: `3s`.\n\tDialTimeout caddy.Duration `json:\"dial_timeout,omitempty\"`\n\n\t// The duration used to set a deadline when reading from the FastCGI server.\n\tReadTimeout caddy.Duration `json:\"read_timeout,omitempty\"`\n\n\t// The duration used to set a deadline when sending to the FastCGI server.\n\tWriteTimeout caddy.Duration `json:\"write_timeout,omitempty\"`\n\n\t// Capture and log any messages sent by the upstream on stderr. Logs at WARN\n\t// level by default. 
If the response has a 4xx or 5xx status, ERROR level will\n// be used instead.\n\tCaptureStderr bool `json:\"capture_stderr,omitempty\"`\n\n\tserverSoftware string\n\tlogger         *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Transport) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.transport.fastcgi\",\n\t\tNew: func() caddy.Module { return new(Transport) },\n\t}\n}\n\n// Provision sets up t.\nfunc (t *Transport) Provision(ctx caddy.Context) error {\n\tt.logger = ctx.Logger()\n\n\tif t.Root == \"\" {\n\t\tt.Root = \"{http.vars.root}\"\n\t}\n\n\tversion, _ := caddy.Version()\n\tt.serverSoftware = \"Caddy/\" + version\n\n\t// Set a relatively short default dial timeout.\n\t// This is helpful to make load-balancer retries more speedy.\n\tif t.DialTimeout == 0 {\n\t\tt.DialTimeout = caddy.Duration(3 * time.Second)\n\t}\n\n\tvar b strings.Builder\n\n\tfor i, split := range t.SplitPath {\n\t\tb.Grow(len(split))\n\n\t\tfor j := 0; j < len(split); j++ {\n\t\t\tc := split[j]\n\t\t\tif c >= utf8.RuneSelf {\n\t\t\t\treturn ErrInvalidSplitPath\n\t\t\t}\n\n\t\t\tif 'A' <= c && c <= 'Z' {\n\t\t\t\tb.WriteByte(c + 'a' - 'A')\n\t\t\t} else {\n\t\t\t\tb.WriteByte(c)\n\t\t\t}\n\t\t}\n\n\t\tt.SplitPath[i] = b.String()\n\t\tb.Reset()\n\t}\n\n\treturn nil\n}\n\n// DefaultBufferSizes enables request buffering for fastcgi if not configured.\n// This is because most fastcgi servers are php-fpm, which requires the content length to be set in order to read the body; the Go\n// standard library has a fastcgi implementation that doesn't need this value to process the body, but we can safely assume it's\n// not in use here.\n// HTTP/3 requests have a negative content length for GET and HEAD requests if that header is not sent.\n// see: https://github.com/caddyserver/caddy/issues/6678#issuecomment-2472224182\n// Though it appears that even if CONTENT_LENGTH is invalid, php-fpm handles it just fine as long as the body is empty (no Stdin records sent).\n// 
php-fpm will hang if there is any data in the body though, https://github.com/caddyserver/caddy/issues/5420#issuecomment-2415943516\n\n// TODO: better default buffering for fastcgi requests without content length, in theory a value of 1 should be enough, make it bigger anyway\nfunc (t Transport) DefaultBufferSizes() (int64, int64) {\n\treturn 4096, 0\n}\n\n// RoundTrip implements http.RoundTripper.\nfunc (t Transport) RoundTrip(r *http.Request) (*http.Response, error) {\n\tserver := r.Context().Value(caddyhttp.ServerCtxKey).(*caddyhttp.Server)\n\n\t// Disallow null bytes in the request path, because\n\t// PHP upstreams may do bad things, like execute a\n\t// non-PHP file as PHP code. See #4574\n\tif strings.Contains(r.URL.Path, \"\\x00\") {\n\t\treturn nil, caddyhttp.Error(http.StatusBadRequest, fmt.Errorf(\"invalid request path\"))\n\t}\n\n\tenv, err := t.buildEnv(r)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"building environment: %v\", err)\n\t}\n\n\tctx := r.Context()\n\n\t// extract dial information from request (should have been embedded by the reverse proxy)\n\tnetwork, address := \"tcp\", r.URL.Host\n\tif dialInfo, ok := reverseproxy.GetDialInfo(ctx); ok {\n\t\tnetwork = dialInfo.Network\n\t\taddress = dialInfo.Address\n\t}\n\n\tlogCreds := server.Logs != nil && server.Logs.ShouldLogCredentials\n\tloggableReq := caddyhttp.LoggableHTTPRequest{\n\t\tRequest:              r,\n\t\tShouldLogCredentials: logCreds,\n\t}\n\tloggableEnv := loggableEnv{vars: env, logCredentials: logCreds}\n\n\tlogger := t.logger.With(\n\t\tzap.Object(\"request\", loggableReq),\n\t\tzap.Object(\"env\", loggableEnv),\n\t)\n\tif c := t.logger.Check(zapcore.DebugLevel, \"roundtrip\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"dial\", address),\n\t\t\tzap.Object(\"env\", loggableEnv),\n\t\t\tzap.Object(\"request\", loggableReq),\n\t\t)\n\t}\n\n\t// connect to the backend\n\tdialer := net.Dialer{Timeout: time.Duration(t.DialTimeout)}\n\tconn, err := dialer.DialContext(ctx, network, 
address)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"dialing backend: %v\", err)\n\t}\n\tdefer func() {\n\t\t// conn will be closed with the response body unless there's an error\n\t\tif err != nil {\n\t\t\tconn.Close()\n\t\t}\n\t}()\n\n\t// create the client that will facilitate the protocol\n\tclient := client{\n\t\trwc:    conn,\n\t\treqID:  1,\n\t\tlogger: logger,\n\t\tstderr: t.CaptureStderr,\n\t}\n\n\t// read/write timeouts\n\tif err = client.SetReadTimeout(time.Duration(t.ReadTimeout)); err != nil {\n\t\treturn nil, fmt.Errorf(\"setting read timeout: %v\", err)\n\t}\n\tif err = client.SetWriteTimeout(time.Duration(t.WriteTimeout)); err != nil {\n\t\treturn nil, fmt.Errorf(\"setting write timeout: %v\", err)\n\t}\n\n\tcontentLength := r.ContentLength\n\tif contentLength == 0 {\n\t\tcontentLength, _ = strconv.ParseInt(r.Header.Get(\"Content-Length\"), 10, 64)\n\t}\n\n\tvar resp *http.Response\n\tswitch r.Method {\n\tcase http.MethodHead:\n\t\tresp, err = client.Head(env)\n\tcase http.MethodGet:\n\t\tresp, err = client.Get(env, r.Body, contentLength)\n\tcase http.MethodOptions:\n\t\tresp, err = client.Options(env)\n\tdefault:\n\t\tresp, err = client.Post(env, r.Method, r.Header.Get(\"Content-Type\"), r.Body, contentLength)\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn resp, nil\n}\n\n// buildEnv returns a set of CGI environment variables for the request.\nfunc (t Transport) buildEnv(r *http.Request) (envVars, error) {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tvar env envVars\n\n\t// Separate remote IP and port; more lenient than net.SplitHostPort\n\tvar ip, port string\n\tif idx := strings.LastIndex(r.RemoteAddr, \":\"); idx > -1 {\n\t\tip = r.RemoteAddr[:idx]\n\t\tport = r.RemoteAddr[idx+1:]\n\t} else {\n\t\tip = r.RemoteAddr\n\t}\n\n\t// Remove [] from IPv6 addresses\n\tip = strings.Replace(ip, \"[\", \"\", 1)\n\tip = strings.Replace(ip, \"]\", \"\", 1)\n\n\t// make sure file root is absolute\n\troot, err 
:= caddy.FastAbs(repl.ReplaceAll(t.Root, \".\"))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif t.ResolveRootSymlink {\n\t\troot, err = filepath.EvalSymlinks(root)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tfpath := r.URL.Path\n\tscriptName := fpath\n\n\tdocURI := fpath\n\t// split \"actual path\" from \"path info\" if configured\n\tvar pathInfo string\n\tif splitPos := t.splitPos(fpath); splitPos > -1 {\n\t\tdocURI = fpath[:splitPos]\n\t\tpathInfo = fpath[splitPos:]\n\n\t\t// Strip PATH_INFO from SCRIPT_NAME\n\t\tscriptName = strings.TrimSuffix(scriptName, pathInfo)\n\t}\n\n\t// Try to grab the path remainder from a file matcher\n\t// if we didn't get a split result here.\n\t// See https://github.com/caddyserver/caddy/issues/3718\n\tif pathInfo == \"\" {\n\t\tpathInfo, _ = repl.GetString(\"http.matchers.file.remainder\")\n\t}\n\n\t// SCRIPT_FILENAME is the absolute path of SCRIPT_NAME\n\tscriptFilename := caddyhttp.SanitizedPathJoin(root, scriptName)\n\n\t// Ensure the SCRIPT_NAME has a leading slash for compliance with RFC3875\n\t// Info: https://tools.ietf.org/html/rfc3875#section-4.1.13\n\tif scriptName != \"\" && !strings.HasPrefix(scriptName, \"/\") {\n\t\tscriptName = \"/\" + scriptName\n\t}\n\n\t// Get the request URL from context. The context stores the original URL in case\n\t// it was changed by a middleware such as rewrite. By default, we pass the\n\t// original URI in as the value of REQUEST_URI (the user can overwrite this\n\t// if desired). Most PHP apps seem to want the original URI. 
Besides, this is\n\t// how nginx defaults: http://stackoverflow.com/a/12485156/1048862\n\torigReq := r.Context().Value(caddyhttp.OriginalRequestCtxKey).(http.Request)\n\n\trequestScheme := \"http\"\n\tif r.TLS != nil {\n\t\trequestScheme = \"https\"\n\t}\n\n\treqHost, reqPort, err := net.SplitHostPort(r.Host)\n\tif err != nil {\n\t\t// whatever, just assume there was no port\n\t\treqHost = r.Host\n\t}\n\n\tauthUser, _ := repl.GetString(\"http.auth.user.id\")\n\n\t// Some variables are unused but cleared explicitly to prevent\n\t// the parent environment from interfering.\n\tenv = envVars{\n\t\t// Variables defined in CGI 1.1 spec\n\t\t\"AUTH_TYPE\":         \"\", // Not used\n\t\t\"CONTENT_LENGTH\":    r.Header.Get(\"Content-Length\"),\n\t\t\"CONTENT_TYPE\":      r.Header.Get(\"Content-Type\"),\n\t\t\"GATEWAY_INTERFACE\": \"CGI/1.1\",\n\t\t\"PATH_INFO\":         pathInfo,\n\t\t\"QUERY_STRING\":      r.URL.RawQuery,\n\t\t\"REMOTE_ADDR\":       ip,\n\t\t\"REMOTE_HOST\":       ip, // For speed, remote host lookups disabled\n\t\t\"REMOTE_PORT\":       port,\n\t\t\"REMOTE_IDENT\":      \"\", // Not used\n\t\t\"REMOTE_USER\":       authUser,\n\t\t\"REQUEST_METHOD\":    r.Method,\n\t\t\"REQUEST_SCHEME\":    requestScheme,\n\t\t\"SERVER_NAME\":       reqHost,\n\t\t\"SERVER_PROTOCOL\":   r.Proto,\n\t\t\"SERVER_SOFTWARE\":   t.serverSoftware,\n\n\t\t// Other variables\n\t\t\"DOCUMENT_ROOT\":   root,\n\t\t\"DOCUMENT_URI\":    docURI,\n\t\t\"HTTP_HOST\":       r.Host, // added here, since not always part of headers\n\t\t\"REQUEST_URI\":     origReq.URL.RequestURI(),\n\t\t\"SCRIPT_FILENAME\": scriptFilename,\n\t\t\"SCRIPT_NAME\":     scriptName,\n\t}\n\n\t// compliance with the CGI specification requires that\n\t// PATH_TRANSLATED should only exist if PATH_INFO is defined.\n\t// Info: https://www.ietf.org/rfc/rfc3875 Page 14\n\tif env[\"PATH_INFO\"] != \"\" {\n\t\tenv[\"PATH_TRANSLATED\"] = caddyhttp.SanitizedPathJoin(root, pathInfo) // Info: 
http://www.oreilly.com/openbook/cgi/ch02_04.html\n\t}\n\n\t// compliance with the CGI specification requires that\n\t// the SERVER_PORT variable MUST be set to the TCP/IP port number on which this request is received from the client\n\t// even if the port is the default port for the scheme and could otherwise be omitted from a URI.\n\t// https://tools.ietf.org/html/rfc3875#section-4.1.15\n\tif reqPort != \"\" {\n\t\tenv[\"SERVER_PORT\"] = reqPort\n\t} else if requestScheme == \"http\" {\n\t\tenv[\"SERVER_PORT\"] = \"80\"\n\t} else if requestScheme == \"https\" {\n\t\tenv[\"SERVER_PORT\"] = \"443\"\n\t}\n\n\t// Some web apps rely on knowing HTTPS or not\n\tif r.TLS != nil {\n\t\tenv[\"HTTPS\"] = \"on\"\n\t\t// and pass the protocol details in a manner compatible with apache's mod_ssl\n\t\t// (which is why these have a SSL_ prefix and not TLS_).\n\t\tv, ok := tlsProtocolStrings[r.TLS.Version]\n\t\tif ok {\n\t\t\tenv[\"SSL_PROTOCOL\"] = v\n\t\t}\n\t\t// and pass the cipher suite in a manner compatible with apache's mod_ssl\n\t\tfor _, cs := range caddytls.SupportedCipherSuites() {\n\t\t\tif cs.ID == r.TLS.CipherSuite {\n\t\t\t\tenv[\"SSL_CIPHER\"] = cs.Name\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\t// Add env variables from config (with support for placeholders in values)\n\tfor key, value := range t.EnvVars {\n\t\tenv[key] = repl.ReplaceAll(value, \"\")\n\t}\n\n\t// Add all HTTP headers to env variables\n\tfor field, val := range r.Header {\n\t\theader := strings.ToUpper(field)\n\t\theader = headerNameReplacer.Replace(header)\n\t\tenv[\"HTTP_\"+header] = strings.Join(val, \", \")\n\t}\n\treturn env, nil\n}\n\nvar splitSearchNonASCII = search.New(language.Und, search.IgnoreCase)\n\n// splitPos returns the index where path should\n// be split based on t.SplitPath.\n//\n// example: if splitPath is [\".php\"]\n// \"/path/to/script.php/some/path\": (\"/path/to/script.php\", \"/some/path\")\n//\n// Adapted from FrankenPHP's code (copyright 2026 Kévin Dunglas, MIT 
license)\nfunc (t Transport) splitPos(path string) int {\n\t// TODO: from v1...\n\t// if httpserver.CaseSensitivePath {\n\t// \treturn strings.Index(path, r.SplitPath)\n\t// }\n\tif len(t.SplitPath) == 0 {\n\t\treturn 0\n\t}\n\n\tpathLen := len(path)\n\n\t// We are sure that split strings are all ASCII-only and lower-case because of validation and normalization in Provision().\n\tfor _, split := range t.SplitPath {\n\t\tsplitLen := len(split)\n\n\t\tfor i := range pathLen {\n\t\t\tif path[i] >= utf8.RuneSelf {\n\t\t\t\tif _, end := splitSearchNonASCII.IndexString(path, split); end > -1 {\n\t\t\t\t\treturn end\n\t\t\t\t}\n\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tif i+splitLen > pathLen {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tmatch := true\n\t\t\tfor j := range splitLen {\n\t\t\t\tc := path[i+j]\n\n\t\t\t\tif c >= utf8.RuneSelf {\n\t\t\t\t\tif _, end := splitSearchNonASCII.IndexString(path, split); end > -1 {\n\t\t\t\t\t\treturn end\n\t\t\t\t\t}\n\n\t\t\t\t\tbreak\n\t\t\t\t}\n\n\t\t\t\tif 'A' <= c && c <= 'Z' {\n\t\t\t\t\tc += 'a' - 'A'\n\t\t\t\t}\n\n\t\t\t\tif c != split[j] {\n\t\t\t\t\tmatch = false\n\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif match {\n\t\t\t\treturn i + splitLen\n\t\t\t}\n\t\t}\n\t}\n\n\treturn -1\n}\n\ntype envVars map[string]string\n\n// loggableEnv is a simple type to allow for speeding up zap log encoding.\ntype loggableEnv struct {\n\tvars           envVars\n\tlogCredentials bool\n}\n\nfunc (env loggableEnv) MarshalLogObject(enc zapcore.ObjectEncoder) error {\n\tfor k, v := range env.vars {\n\t\tif !env.logCredentials {\n\t\t\tswitch strings.ToLower(k) {\n\t\t\tcase \"http_cookie\", \"http_set_cookie\", \"http_authorization\", \"http_proxy_authorization\":\n\t\t\t\tv = \"\"\n\t\t\t}\n\t\t}\n\t\tenc.AddString(k, v)\n\t}\n\treturn nil\n}\n\n// Map of supported protocols to Apache ssl_mod format\n// Note that these are slightly different from SupportedProtocols in caddytls/config.go\nvar tlsProtocolStrings = map[uint16]string{\n\ttls.VersionTLS10: 
\"TLSv1\",\n\ttls.VersionTLS11: \"TLSv1.1\",\n\ttls.VersionTLS12: \"TLSv1.2\",\n\ttls.VersionTLS13: \"TLSv1.3\",\n}\n\nvar headerNameReplacer = strings.NewReplacer(\" \", \"_\", \"-\", \"_\")\n\n// Interface guards\nvar (\n\t_ zapcore.ObjectMarshaler = (*loggableEnv)(nil)\n\n\t_ caddy.Provisioner              = (*Transport)(nil)\n\t_ http.RoundTripper              = (*Transport)(nil)\n\t_ reverseproxy.BufferedTransport = (*Transport)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/fastcgi_test.go",
    "content": "package fastcgi\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestProvisionSplitPath(t *testing.T) {\n\ttests := []struct {\n\t\tname          string\n\t\tsplitPath     []string\n\t\twantErr       error\n\t\twantSplitPath []string\n\t}{\n\t\t{\n\t\t\tname:          \"valid lowercase split path\",\n\t\t\tsplitPath:     []string{\".php\"},\n\t\t\twantErr:       nil,\n\t\t\twantSplitPath: []string{\".php\"},\n\t\t},\n\t\t{\n\t\t\tname:          \"valid uppercase split path normalized\",\n\t\t\tsplitPath:     []string{\".PHP\"},\n\t\t\twantErr:       nil,\n\t\t\twantSplitPath: []string{\".php\"},\n\t\t},\n\t\t{\n\t\t\tname:          \"valid mixed case split path normalized\",\n\t\t\tsplitPath:     []string{\".PhP\", \".PHTML\"},\n\t\t\twantErr:       nil,\n\t\t\twantSplitPath: []string{\".php\", \".phtml\"},\n\t\t},\n\t\t{\n\t\t\tname:          \"empty split path\",\n\t\t\tsplitPath:     []string{},\n\t\t\twantErr:       nil,\n\t\t\twantSplitPath: []string{},\n\t\t},\n\t\t{\n\t\t\tname:      \"non-ASCII character in split path rejected\",\n\t\t\tsplitPath: []string{\".php\", \".Ⱥphp\"},\n\t\t\twantErr:   ErrInvalidSplitPath,\n\t\t},\n\t\t{\n\t\t\tname:      \"unicode character in split path rejected\",\n\t\t\tsplitPath: []string{\".phpⱥ\"},\n\t\t\twantErr:   ErrInvalidSplitPath,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\ttr := Transport{SplitPath: tt.splitPath}\n\t\t\terr := tr.Provision(caddy.Context{})\n\n\t\t\tif tt.wantErr != nil {\n\t\t\t\trequire.ErrorIs(t, err, tt.wantErr)\n\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\trequire.NoError(t, err)\n\t\t\tassert.Equal(t, tt.wantSplitPath, tr.SplitPath)\n\t\t})\n\t}\n}\n\nfunc TestSplitPos(t *testing.T) {\n\ttests := []struct {\n\t\tname      string\n\t\tpath      string\n\t\tsplitPath []string\n\t\twantPos   
int\n\t}{\n\t\t{\n\t\t\tname:      \"simple php extension\",\n\t\t\tpath:      \"/path/to/script.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   19,\n\t\t},\n\t\t{\n\t\t\tname:      \"php extension with path info\",\n\t\t\tpath:      \"/path/to/script.php/some/path\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   19,\n\t\t},\n\t\t{\n\t\t\tname:      \"case insensitive match\",\n\t\t\tpath:      \"/path/to/script.PHP\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   19,\n\t\t},\n\t\t{\n\t\t\tname:      \"mixed case match\",\n\t\t\tpath:      \"/path/to/script.PhP/info\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   19,\n\t\t},\n\t\t{\n\t\t\tname:      \"no match\",\n\t\t\tpath:      \"/path/to/script.txt\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   -1,\n\t\t},\n\t\t{\n\t\t\tname:      \"empty split path\",\n\t\t\tpath:      \"/path/to/script.php\",\n\t\t\tsplitPath: []string{},\n\t\t\twantPos:   0,\n\t\t},\n\t\t{\n\t\t\tname:      \"multiple split paths first match\",\n\t\t\tpath:      \"/path/to/script.php\",\n\t\t\tsplitPath: []string{\".php\", \".phtml\"},\n\t\t\twantPos:   19,\n\t\t},\n\t\t{\n\t\t\tname:      \"multiple split paths second match\",\n\t\t\tpath:      \"/path/to/script.phtml\",\n\t\t\tsplitPath: []string{\".php\", \".phtml\"},\n\t\t\twantPos:   21,\n\t\t},\n\t\t// Unicode case-folding tests (security fix for GHSA-g966-83w7-6w38)\n\t\t// U+023A (Ⱥ) lowercases to U+2C65 (ⱥ), which has different UTF-8 byte length\n\t\t// Ⱥ: 2 bytes (C8 BA), ⱥ: 3 bytes (E2 B1 A5)\n\t\t{\n\t\t\tname:      \"unicode path with case-folding length expansion\",\n\t\t\tpath:      \"/ȺȺȺȺshell.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   18, // correct position in original string\n\t\t},\n\t\t{\n\t\t\tname:      \"unicode path with extension after expansion chars\",\n\t\t\tpath:      \"/ȺȺȺȺshell.php/path/info\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   18,\n\t\t},\n\t\t{\n\t\t\tname:  
    \"unicode in filename with multiple php occurrences\",\n\t\t\tpath:      \"/ȺȺȺȺshell.php.txt.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   18, // should match first .php, not be confused by byte offset shift\n\t\t},\n\t\t{\n\t\t\tname:      \"unicode case insensitive extension\",\n\t\t\tpath:      \"/ȺȺȺȺshell.PHP\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   18,\n\t\t},\n\t\t{\n\t\t\tname:      \"unicode in middle of path\",\n\t\t\tpath:      \"/path/Ⱥtest/script.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   23, // Ⱥ is 2 bytes, so path is 23 bytes total, .php ends at byte 23\n\t\t},\n\t\t{\n\t\t\tname:      \"unicode only in directory not filename\",\n\t\t\tpath:      \"/Ⱥ/script.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   14,\n\t\t},\n\t\t// Additional Unicode characters that expand when lowercased\n\t\t// U+0130 (İ - Turkish capital I with dot) lowercases to U+0069 + U+0307\n\t\t{\n\t\t\tname:      \"turkish capital I with dot\",\n\t\t\tpath:      \"/İtest.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   11,\n\t\t},\n\t\t// Ensure standard ASCII still works correctly\n\t\t{\n\t\t\tname:      \"ascii only path with case variation\",\n\t\t\tpath:      \"/PATH/TO/SCRIPT.PHP/INFO\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   19,\n\t\t},\n\t\t{\n\t\t\tname:      \"path at root\",\n\t\t\tpath:      \"/index.php\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   10,\n\t\t},\n\t\t{\n\t\t\tname:      \"extension in middle of filename\",\n\t\t\tpath:      \"/test.php.bak\",\n\t\t\tsplitPath: []string{\".php\"},\n\t\t\twantPos:   9,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgotPos := Transport{SplitPath: tt.splitPath}.splitPos(tt.path)\n\t\t\tassert.Equal(t, tt.wantPos, gotPos, \"splitPos(%q, %v)\", tt.path, tt.splitPath)\n\n\t\t\t// Verify that the split produces valid substrings\n\t\t\tif gotPos > 0 && gotPos <= 
len(tt.path) {\n\t\t\t\tscriptName := tt.path[:gotPos]\n\t\t\t\tpathInfo := tt.path[gotPos:]\n\n\t\t\t\t// The script name should end with one of the split extensions (case-insensitive)\n\t\t\t\thasValidEnding := false\n\t\t\t\tfor _, split := range tt.splitPath {\n\t\t\t\t\tif strings.HasSuffix(strings.ToLower(scriptName), split) {\n\t\t\t\t\t\thasValidEnding = true\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tassert.True(t, hasValidEnding, \"script name %q should end with one of %v\", scriptName, tt.splitPath)\n\n\t\t\t\t// Original path should be reconstructable\n\t\t\t\tassert.Equal(t, tt.path, scriptName+pathInfo, \"path should be reconstructable from split parts\")\n\t\t\t}\n\t\t})\n\t}\n}\n\n// TestSplitPosUnicodeSecurityRegression specifically tests the vulnerability\n// described in GHSA-g966-83w7-6w38 where Unicode case-folding caused\n// incorrect SCRIPT_NAME/PATH_INFO splitting\nfunc TestSplitPosUnicodeSecurityRegression(t *testing.T) {\n\t// U+023A: Ⱥ (UTF-8: C8 BA). Lowercase is ⱥ (UTF-8: E2 B1 A5), longer in bytes.\n\tpath := \"/ȺȺȺȺshell.php.txt.php\"\n\tsplit := []string{\".php\"}\n\n\tpos := Transport{SplitPath: split}.splitPos(path)\n\n\t// The vulnerable code would return 22 (computed on lowercased string)\n\t// The correct code should return 18 (position in original string)\n\texpectedPos := strings.Index(path, \".php\") + len(\".php\")\n\tassert.Equal(t, expectedPos, pos, \"split position should match first .php in original string\")\n\tassert.Equal(t, 18, pos, \"split position should be 18, not 22\")\n\n\tif pos > 0 && pos <= len(path) {\n\t\tscriptName := path[:pos]\n\t\tpathInfo := path[pos:]\n\n\t\tassert.Equal(t, \"/ȺȺȺȺshell.php\", scriptName, \"script name should be the path up to first .php\")\n\t\tassert.Equal(t, \".txt.php\", pathInfo, \"path info should be the remainder after first .php\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/header.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\ntype header struct {\n\tVersion       uint8\n\tType          uint8\n\tID            uint16\n\tContentLength uint16\n\tPaddingLength uint8\n\tReserved      uint8\n}\n\nfunc (h *header) init(recType uint8, reqID uint16, contentLength int) {\n\th.Version = 1\n\th.Type = recType\n\th.ID = reqID\n\th.ContentLength = uint16(contentLength)\n\th.PaddingLength = uint8(-contentLength & 7)\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/pool.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\nimport (\n\t\"bytes\"\n\t\"sync\"\n)\n\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/reader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\nimport (\n\t\"bytes\"\n\t\"io\"\n)\n\ntype streamReader struct {\n\tc      *client\n\trec    record\n\tstderr bytes.Buffer\n}\n\nfunc (w *streamReader) Read(p []byte) (n int, err error) {\n\tfor !w.rec.hasMore() {\n\t\terr = w.rec.fill(w.c.rwc)\n\t\tif err != nil {\n\t\t\treturn 0, err\n\t\t}\n\n\t\t// standard error output\n\t\tif w.rec.h.Type == Stderr {\n\t\t\tif _, err = io.Copy(&w.stderr, &w.rec); err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t}\n\t}\n\n\treturn w.rec.Read(p)\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/record.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\nimport (\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"io\"\n)\n\ntype record struct {\n\th       header\n\tlr      io.LimitedReader\n\tpadding int64\n}\n\nfunc (rec *record) fill(r io.Reader) (err error) {\n\trec.lr.N = rec.padding\n\trec.lr.R = r\n\tif _, err = io.Copy(io.Discard, rec); err != nil {\n\t\treturn err\n\t}\n\n\tif err = binary.Read(r, binary.BigEndian, &rec.h); err != nil {\n\t\treturn err\n\t}\n\tif rec.h.Version != 1 {\n\t\terr = errors.New(\"fcgi: invalid header version\")\n\t\treturn err\n\t}\n\tif rec.h.Type == EndRequest {\n\t\terr = io.EOF\n\t\treturn err\n\t}\n\trec.lr.N = int64(rec.h.ContentLength)\n\trec.padding = int64(rec.h.PaddingLength)\n\treturn err\n}\n\nfunc (rec *record) Read(p []byte) (n int, err error) {\n\treturn rec.lr.Read(p)\n}\n\nfunc (rec *record) hasMore() bool {\n\treturn rec.lr.N > 0\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/fastcgi/writer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage fastcgi\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n)\n\n// streamWriter abstracts out the separation of a stream into discrete records.\n// It only writes maxWrite bytes at a time.\ntype streamWriter struct {\n\tc       *client\n\th       header\n\tbuf     *bytes.Buffer\n\trecType uint8\n}\n\nfunc (w *streamWriter) writeRecord(recType uint8, content []byte) (err error) {\n\tw.h.init(recType, w.c.reqID, len(content))\n\tw.buf.Write(pad[:8])\n\tw.writeHeader()\n\tw.buf.Write(content)\n\tw.buf.Write(pad[:w.h.PaddingLength])\n\t_, err = w.buf.WriteTo(w.c.rwc)\n\treturn err\n}\n\nfunc (w *streamWriter) writeBeginRequest(role uint16, flags uint8) error {\n\tb := [8]byte{byte(role >> 8), byte(role), flags}\n\treturn w.writeRecord(BeginRequest, b[:])\n}\n\nfunc (w *streamWriter) Write(p []byte) (int, error) {\n\t// init header\n\tif w.buf.Len() < 8 {\n\t\tw.buf.Write(pad[:8])\n\t}\n\n\tnn := 0\n\tfor len(p) > 0 {\n\t\tn := len(p)\n\t\tnl := maxWrite + 8 - w.buf.Len()\n\t\tif n > nl {\n\t\t\tn = nl\n\t\t\tw.buf.Write(p[:n])\n\t\t\tif err := w.Flush(); err != nil {\n\t\t\t\treturn nn, err\n\t\t\t}\n\t\t\t// reset headers\n\t\t\tw.buf.Write(pad[:8])\n\t\t} else {\n\t\t\tw.buf.Write(p[:n])\n\t\t}\n\t\tnn += n\n\t\tp = p[n:]\n\t}\n\treturn nn, nil\n}\n\nfunc (w *streamWriter) endStream() error {\n\t// send empty record to close 
the stream\n\treturn w.writeRecord(w.recType, nil)\n}\n\nfunc (w *streamWriter) writePairs(pairs map[string]string) error {\n\tb := make([]byte, 8)\n\tnn := 0\n\t// init headers\n\tw.buf.Write(b)\n\tfor k, v := range pairs {\n\t\tm := 8 + len(k) + len(v)\n\t\tif m > maxWrite {\n\t\t\t// param data size exceeds 65535 bytes\n\t\t\tvl := maxWrite - 8 - len(k)\n\t\t\tv = v[:vl]\n\t\t}\n\t\tn := encodeSize(b, uint32(len(k)))\n\t\tn += encodeSize(b[n:], uint32(len(v)))\n\t\tm = n + len(k) + len(v)\n\t\tif (nn + m) > maxWrite {\n\t\t\tif err := w.Flush(); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\t// reset headers\n\t\t\tw.buf.Write(b)\n\t\t\tnn = 0\n\t\t}\n\t\tnn += m\n\t\tw.buf.Write(b[:n])\n\t\tw.buf.WriteString(k)\n\t\tw.buf.WriteString(v)\n\t}\n\treturn w.FlushStream()\n}\n\nfunc encodeSize(b []byte, size uint32) int {\n\tif size > 127 {\n\t\tsize |= 1 << 31\n\t\tbinary.BigEndian.PutUint32(b, size)\n\t\treturn 4\n\t}\n\tb[0] = byte(size) //nolint:gosec // false positive; b is made 8 bytes long, then this function is always called with b being at least 4 or 1 byte long\n\treturn 1\n}\n\n// writeHeader populates the header wire data in buf; it abuses buffer.Bytes() modification\nfunc (w *streamWriter) writeHeader() {\n\th := w.buf.Bytes()[:8]\n\th[0] = w.h.Version\n\th[1] = w.h.Type\n\tbinary.BigEndian.PutUint16(h[2:4], w.h.ID)\n\tbinary.BigEndian.PutUint16(h[4:6], w.h.ContentLength)\n\th[6] = w.h.PaddingLength\n\th[7] = w.h.Reserved\n}\n\n// Flush writes buffered data to the underlying connection; it assumes header data is the first 8 bytes of buf\nfunc (w *streamWriter) Flush() error {\n\tw.h.init(w.recType, w.c.reqID, w.buf.Len()-8)\n\tw.writeHeader()\n\tw.buf.Write(pad[:w.h.PaddingLength])\n\t_, err := w.buf.WriteTo(w.c.rwc)\n\treturn err\n}\n\n// FlushStream flushes buffered data, then ends the current stream\nfunc (w *streamWriter) FlushStream() error {\n\tif err := w.Flush(); err != nil {\n\t\treturn err\n\t}\n\treturn w.endStream()\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/forwardauth/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage forwardauth\n\nimport (\n\t\"encoding/json\"\n\t\"net/http\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/rewrite\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterDirective(\"forward_auth\", parseCaddyfile)\n}\n\n// parseCaddyfile parses the forward_auth directive, which has the same syntax\n// as the reverse_proxy directive (in fact, the reverse_proxy's directive\n// Unmarshaler is invoked by this function) but the resulting proxy is specially\n// configured for most™️ auth gateways that support forward auth. 
The typical\n// config which looks something like this:\n//\n//\tforward_auth auth-gateway:9091 {\n//\t    uri /authenticate?redirect=https://auth.example.com\n//\t    copy_headers Remote-User Remote-Email\n//\t}\n//\n// is equivalent to a reverse_proxy directive like this:\n//\n//\treverse_proxy auth-gateway:9091 {\n//\t    method GET\n//\t    rewrite /authenticate?redirect=https://auth.example.com\n//\n//\t    header_up X-Forwarded-Method {method}\n//\t    header_up X-Forwarded-Uri {uri}\n//\n//\t    @good status 2xx\n//\t    handle_response @good {\n//\t        request_header {\n//\t            Remote-User {http.reverse_proxy.header.Remote-User}\n//\t            Remote-Email {http.reverse_proxy.header.Remote-Email}\n//\t        }\n//\t    }\n//\t}\nfunc parseCaddyfile(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\tif !h.Next() {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\t// if the user specified a matcher token, use that\n\t// matcher in a route that wraps both of our routes;\n\t// either way, strip the matcher token and pass\n\t// the remaining tokens to the unmarshaler so that\n\t// we can gain the rest of the reverse_proxy syntax\n\tuserMatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// make a new dispenser from the remaining tokens so that we\n\t// can reset the dispenser back to this point for the\n\t// reverse_proxy unmarshaler to read from it as well\n\tdispenser := h.NewFromNextSegment()\n\n\t// create the reverse proxy handler\n\trpHandler := &reverseproxy.Handler{\n\t\t// set up defaults for header_up; reverse_proxy already deals with\n\t\t// adding the other three X-Forwarded-* headers, but for this flow,\n\t\t// we want to also send along the incoming method and URI since this\n\t\t// request will have a rewritten URI and method.\n\t\tHeaders: &headers.Handler{\n\t\t\tRequest: &headers.HeaderOps{\n\t\t\t\tSet: http.Header{\n\t\t\t\t\t\"X-Forwarded-Method\": 
[]string{\"{http.request.method}\"},\n\t\t\t\t\t\"X-Forwarded-Uri\":    []string{\"{http.request.uri}\"},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\n\t\t// we always rewrite the method to GET, which implicitly\n\t\t// turns off sending the incoming request's body, which\n\t\t// allows later middleware handlers to consume it\n\t\tRewrite: &rewrite.Rewrite{\n\t\t\tMethod: \"GET\",\n\t\t},\n\n\t\tHandleResponse: []caddyhttp.ResponseHandler{},\n\t}\n\n\t// collect the headers to copy from the auth response\n\t// onto the original request, so they can get passed\n\t// through to a backend app\n\theadersToCopy := make(map[string]string)\n\n\t// read the subdirectives for configuring the forward_auth shortcut\n\t// NOTE: we delete the tokens as we go so that the reverse_proxy\n\t// unmarshal doesn't see these subdirectives which it cannot handle\n\tfor dispenser.Next() {\n\t\tfor dispenser.NextBlock(0) {\n\t\t\t// ignore any sub-subdirectives that might\n\t\t\t// have the same name somewhere within\n\t\t\t// the reverse_proxy passthrough tokens\n\t\t\tif dispenser.Nesting() != 1 {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// parse the forward_auth subdirectives\n\t\t\tswitch dispenser.Val() {\n\t\t\tcase \"uri\":\n\t\t\t\tif !dispenser.NextArg() {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t\trpHandler.Rewrite.URI = dispenser.Val()\n\t\t\t\tdispenser.DeleteN(2)\n\n\t\t\tcase \"copy_headers\":\n\t\t\t\targs := dispenser.RemainingArgs()\n\t\t\t\thadBlock := false\n\t\t\t\tfor nesting := dispenser.Nesting(); dispenser.NextBlock(nesting); {\n\t\t\t\t\thadBlock = true\n\t\t\t\t\targs = append(args, dispenser.Val())\n\t\t\t\t}\n\n\t\t\t\t// directive name + args\n\t\t\t\tdispenser.DeleteN(len(args) + 1)\n\t\t\t\tif hadBlock {\n\t\t\t\t\t// opening & closing brace\n\t\t\t\t\tdispenser.DeleteN(2)\n\t\t\t\t}\n\n\t\t\t\tfor _, headerField := range args {\n\t\t\t\t\tif strings.Contains(headerField, \">\") {\n\t\t\t\t\t\tparts := strings.Split(headerField, 
\">\")\n\t\t\t\t\t\theadersToCopy[parts[0]] = parts[1]\n\t\t\t\t\t} else {\n\t\t\t\t\t\theadersToCopy[headerField] = headerField\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif len(headersToCopy) == 0 {\n\t\t\t\t\treturn nil, dispenser.ArgErr()\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// reset the dispenser after we're done so that the reverse_proxy\n\t// unmarshaler can read it from the start\n\tdispenser.Reset()\n\n\t// the auth target URI must not be empty\n\tif rpHandler.Rewrite.URI == \"\" {\n\t\treturn nil, dispenser.Errf(\"the 'uri' subdirective is required\")\n\t}\n\n\t// Set up handler for good responses; when a response has 2xx status,\n\t// then we will copy some headers from the response onto the original\n\t// request, and allow handling to continue down the middleware chain,\n\t// by _not_ executing a terminal handler. We must have at least one\n\t// route in the response handler, even if it's no-op, so that the\n\t// response handling logic in reverse_proxy doesn't skip this entry.\n\tgoodResponseHandler := caddyhttp.ResponseHandler{\n\t\tMatch: &caddyhttp.ResponseMatcher{\n\t\t\tStatusCode: []int{2},\n\t\t},\n\t\tRoutes: []caddyhttp.Route{\n\t\t\t{\n\t\t\t\tHandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(\n\t\t\t\t\t&caddyhttp.VarsMiddleware{},\n\t\t\t\t\t\"handler\",\n\t\t\t\t\t\"vars\",\n\t\t\t\t\tnil,\n\t\t\t\t)},\n\t\t\t},\n\t\t},\n\t}\n\n\t// Sort the headers so that the order in the JSON output is deterministic.\n\tsortedHeadersToCopy := make([]string, 0, len(headersToCopy))\n\tfor k := range headersToCopy {\n\t\tsortedHeadersToCopy = append(sortedHeadersToCopy, k)\n\t}\n\tsort.Strings(sortedHeadersToCopy)\n\n\t// Set up handlers to copy headers from the auth response onto the\n\t// original request. 
We use vars matchers to test that the placeholder\n\t// values aren't empty, because the header handler would not replace\n\t// placeholders which have no value.\n\tcopyHeaderRoutes := []caddyhttp.Route{}\n\tfor _, from := range sortedHeadersToCopy {\n\t\tto := http.CanonicalHeaderKey(headersToCopy[from])\n\t\tplaceholderName := \"http.reverse_proxy.header.\" + http.CanonicalHeaderKey(from)\n\n\t\t// Always delete the client-supplied header before conditionally setting\n\t\t// it from the auth response. Without this, a client that pre-supplies a\n\t\t// header listed in copy_headers can inject arbitrary values when the auth\n\t\t// service does not return that header: the MatchNot guard below would\n\t\t// skip the Set entirely, leaving the original client-controlled value\n\t\t// intact and forwarding it to the backend.\n\t\tcopyHeaderRoutes = append(copyHeaderRoutes, caddyhttp.Route{\n\t\t\tHandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(\n\t\t\t\t&headers.Handler{\n\t\t\t\t\tRequest: &headers.HeaderOps{\n\t\t\t\t\t\tDelete: []string{to},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\"handler\", \"headers\", nil,\n\t\t\t)},\n\t\t})\n\n\t\thandler := &headers.Handler{\n\t\t\tRequest: &headers.HeaderOps{\n\t\t\t\tSet: http.Header{\n\t\t\t\t\tto: []string{\"{\" + placeholderName + \"}\"},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tcopyHeaderRoutes = append(copyHeaderRoutes, caddyhttp.Route{\n\t\t\tMatcherSetsRaw: []caddy.ModuleMap{{\n\t\t\t\t\"not\": h.JSON(caddyhttp.MatchNot{MatcherSetsRaw: []caddy.ModuleMap{{\n\t\t\t\t\t\"vars\": h.JSON(caddyhttp.VarsMatcher{\"{\" + placeholderName + \"}\": []string{\"\"}}),\n\t\t\t\t}}}),\n\t\t\t}},\n\t\t\tHandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(\n\t\t\t\thandler,\n\t\t\t\t\"handler\",\n\t\t\t\t\"headers\",\n\t\t\t\tnil,\n\t\t\t)},\n\t\t})\n\t}\n\n\tgoodResponseHandler.Routes = append(goodResponseHandler.Routes, copyHeaderRoutes...)\n\n\t// note that when a response has any other status than 2xx, then we\n\t// use 
the reverse proxy's default behaviour of copying the response\n\t// back to the client, so we don't need to explicitly add a response\n\t// handler specifically for that behaviour; we do need the 2xx handler\n\t// though, to make handling fall through to handlers deeper in the chain.\n\trpHandler.HandleResponse = append(rpHandler.HandleResponse, goodResponseHandler)\n\n\t// the rest of the config is specified by the user\n\t// using the reverse_proxy directive syntax\n\tdispenser.Next() // consume the directive name\n\terr = rpHandler.UnmarshalCaddyfile(dispenser)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = rpHandler.FinalizeUnmarshalCaddyfile(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// create the final reverse proxy route\n\trpRoute := caddyhttp.Route{\n\t\tHandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(\n\t\t\trpHandler,\n\t\t\t\"handler\",\n\t\t\t\"reverse_proxy\",\n\t\t\tnil,\n\t\t)},\n\t}\n\n\t// apply the user's matcher if any\n\tif userMatcherSet != nil {\n\t\trpRoute.MatcherSetsRaw = []caddy.ModuleMap{userMatcherSet}\n\t}\n\n\treturn []httpcaddyfile.ConfigValue{\n\t\t{\n\t\t\tClass: \"route\",\n\t\t\tValue: rpRoute,\n\t\t},\n\t}, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/headers_test.go",
    "content": "package reverseproxy\n\nimport (\n\t\"context\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc TestAddForwardedHeadersNonIP(t *testing.T) {\n\th := Handler{}\n\n\t// Simulate a request with a non-IP remote address (e.g. SCION, abstract socket, or hostname)\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"my-weird-network:12345\"\n\n\t// Mock the context variables required by Caddy.\n\t// We need to inject the variable map manually since we aren't running the full server.\n\tvars := map[string]interface{}{\n\t\tcaddyhttp.TrustedProxyVarKey: false,\n\t}\n\tctx := context.WithValue(req.Context(), caddyhttp.VarsCtxKey, vars)\n\treq = req.WithContext(ctx)\n\n\t// Execute the unexported function\n\terr := h.addForwardedHeaders(req)\n\n\t// Expectation: No error should be returned for non-IP addresses.\n\t// The function should simply skip the trusted proxy check.\n\tif err != nil {\n\t\tt.Errorf(\"expected no error for non-IP address, got: %v\", err)\n\t}\n}\n\nfunc TestAddForwardedHeaders_UnixSocketTrusted(t *testing.T) {\n\th := Handler{}\n\n\treq := httptest.NewRequest(\"GET\", \"http://example.com/\", nil)\n\treq.RemoteAddr = \"@\"\n\treq.Header.Set(\"X-Forwarded-For\", \"1.2.3.4, 10.0.0.1\")\n\treq.Header.Set(\"X-Forwarded-Proto\", \"https\")\n\treq.Header.Set(\"X-Forwarded-Host\", \"original.example.com\")\n\n\tvars := map[string]interface{}{\n\t\tcaddyhttp.TrustedProxyVarKey: true,\n\t\tcaddyhttp.ClientIPVarKey:     \"1.2.3.4\",\n\t}\n\tctx := context.WithValue(req.Context(), caddyhttp.VarsCtxKey, vars)\n\treq = req.WithContext(ctx)\n\n\terr := h.addForwardedHeaders(req)\n\tif err != nil {\n\t\tt.Fatalf(\"expected no error, got: %v\", err)\n\t}\n\n\tif got := req.Header.Get(\"X-Forwarded-For\"); got != \"1.2.3.4, 10.0.0.1\" {\n\t\tt.Errorf(\"X-Forwarded-For = %q, want %q\", got, \"1.2.3.4, 10.0.0.1\")\n\t}\n\tif got := req.Header.Get(\"X-Forwarded-Proto\"); got 
!= \"https\" {\n\t\tt.Errorf(\"X-Forwarded-Proto = %q, want %q\", got, \"https\")\n\t}\n\tif got := req.Header.Get(\"X-Forwarded-Host\"); got != \"original.example.com\" {\n\t\tt.Errorf(\"X-Forwarded-Host = %q, want %q\", got, \"original.example.com\")\n\t}\n}\n\nfunc TestAddForwardedHeaders_UnixSocketUntrusted(t *testing.T) {\n\th := Handler{}\n\n\treq := httptest.NewRequest(\"GET\", \"http://example.com/\", nil)\n\treq.RemoteAddr = \"@\"\n\treq.Header.Set(\"X-Forwarded-For\", \"1.2.3.4\")\n\treq.Header.Set(\"X-Forwarded-Proto\", \"https\")\n\treq.Header.Set(\"X-Forwarded-Host\", \"spoofed.example.com\")\n\n\tvars := map[string]interface{}{\n\t\tcaddyhttp.TrustedProxyVarKey: false,\n\t\tcaddyhttp.ClientIPVarKey:     \"\",\n\t}\n\tctx := context.WithValue(req.Context(), caddyhttp.VarsCtxKey, vars)\n\treq = req.WithContext(ctx)\n\n\terr := h.addForwardedHeaders(req)\n\tif err != nil {\n\t\tt.Fatalf(\"expected no error, got: %v\", err)\n\t}\n\n\tif got := req.Header.Get(\"X-Forwarded-For\"); got != \"\" {\n\t\tt.Errorf(\"X-Forwarded-For should be deleted, got %q\", got)\n\t}\n\tif got := req.Header.Get(\"X-Forwarded-Proto\"); got != \"\" {\n\t\tt.Errorf(\"X-Forwarded-Proto should be deleted, got %q\", got)\n\t}\n\tif got := req.Header.Get(\"X-Forwarded-Host\"); got != \"\" {\n\t\tt.Errorf(\"X-Forwarded-Host should be deleted, got %q\", got)\n\t}\n}\n\nfunc TestAddForwardedHeaders_UnixSocketTrustedNoExistingHeaders(t *testing.T) {\n\th := Handler{}\n\n\treq := httptest.NewRequest(\"GET\", \"http://example.com/\", nil)\n\treq.RemoteAddr = \"@\"\n\n\tvars := map[string]interface{}{\n\t\tcaddyhttp.TrustedProxyVarKey: true,\n\t\tcaddyhttp.ClientIPVarKey:     \"5.6.7.8\",\n\t}\n\tctx := context.WithValue(req.Context(), caddyhttp.VarsCtxKey, vars)\n\treq = req.WithContext(ctx)\n\n\terr := h.addForwardedHeaders(req)\n\tif err != nil {\n\t\tt.Fatalf(\"expected no error, got: %v\", err)\n\t}\n\n\tif got := req.Header.Get(\"X-Forwarded-For\"); got != \"\" 
{\n\t\tt.Errorf(\"X-Forwarded-For should be empty when no prior XFF exists, got %q\", got)\n\t}\n\tif got := req.Header.Get(\"X-Forwarded-Proto\"); got != \"http\" {\n\t\tt.Errorf(\"X-Forwarded-Proto = %q, want %q\", got, \"http\")\n\t}\n\tif got := req.Header.Get(\"X-Forwarded-Host\"); got != \"example.com\" {\n\t\tt.Errorf(\"X-Forwarded-Host = %q, want %q\", got, \"example.com\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/healthchecks.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"regexp\"\n\t\"runtime/debug\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// HealthChecks configures active and passive health checks.\ntype HealthChecks struct {\n\t// Active health checks run in the background on a timer. To\n\t// minimally enable active health checks, set either path or\n\t// port (or both). Note that active health check status\n\t// (healthy/unhealthy) is stored per-proxy-handler, not\n\t// globally; this allows different handlers to use different\n\t// criteria to decide what defines a healthy backend.\n\t//\n\t// Active health checks do not run for dynamic upstreams.\n\tActive *ActiveHealthChecks `json:\"active,omitempty\"`\n\n\t// Passive health checks monitor proxied requests for errors or timeouts.\n\t// To minimally enable passive health checks, specify at least an empty\n\t// config object with fail_duration > 0. 
Passive health check state is\n\t// shared (stored globally), so a failure from one handler will be counted\n\t// by all handlers; but the tolerances or standards for what defines\n\t// healthy/unhealthy backends are configured per-proxy-handler.\n\t//\n\t// Passive health checks technically do operate on dynamic upstreams,\n\t// but are only effective for very busy proxies where the list of\n\t// upstreams is mostly stable. This is because the shared/global\n\t// state of upstreams is cleaned up when the upstreams are no longer\n\t// used. Since dynamic upstreams are allocated dynamically at each\n\t// request (specifically, each iteration of the proxy loop per request),\n\t// they are also cleaned up after every request. Thus, if there is a\n\t// moment when no requests are actively referring to a particular\n\t// upstream host, the passive health check state will be reset because\n\t// it will be garbage-collected. It is usually better for the dynamic\n\t// upstream module to only return healthy, available backends instead.\n\tPassive *PassiveHealthChecks `json:\"passive,omitempty\"`\n}\n\n// ActiveHealthChecks holds configuration related to active\n// health checks (that is, health checks which occur in a\n// background goroutine independently).\ntype ActiveHealthChecks struct {\n\t// Deprecated: Use 'uri' instead. This field will be removed. TODO: remove this field\n\tPath string `json:\"path,omitempty\"`\n\n\t// The URI (path and query) to use for health checks\n\tURI string `json:\"uri,omitempty\"`\n\n\t// The host:port to use (if different from the upstream's dial address)\n\t// for health checks. This should be used in tandem with `health_header` and\n\t// `{http.reverse_proxy.active.target_upstream}`. 
This can be helpful when\n\t// creating an intermediate service to do a more thorough health check.\n\t// If upstream is set, the active health check port is ignored.\n\tUpstream string `json:\"upstream,omitempty\"`\n\n\t// The port to use (if different from the upstream's dial\n\t// address) for health checks. If active upstream is set,\n\t// this value is ignored.\n\tPort int `json:\"port,omitempty\"`\n\n\t// HTTP headers to set on health check requests.\n\tHeaders http.Header `json:\"headers,omitempty\"`\n\n\t// The HTTP method to use for health checks (default \"GET\").\n\tMethod string `json:\"method,omitempty\"`\n\n\t// The body to send with the health check request.\n\tBody string `json:\"body,omitempty\"`\n\n\t// Whether to follow HTTP redirects in response to active health checks (default off).\n\tFollowRedirects bool `json:\"follow_redirects,omitempty\"`\n\n\t// How frequently to perform active health checks (default 30s).\n\tInterval caddy.Duration `json:\"interval,omitempty\"`\n\n\t// How long to wait for a response from a backend before\n\t// considering it unhealthy (default 5s).\n\tTimeout caddy.Duration `json:\"timeout,omitempty\"`\n\n\t// Number of consecutive health check passes before marking\n\t// a previously unhealthy backend as healthy again (default 1).\n\tPasses int `json:\"passes,omitempty\"`\n\n\t// Number of consecutive health check failures before marking\n\t// a previously healthy backend as unhealthy (default 1).\n\tFails int `json:\"fails,omitempty\"`\n\n\t// The maximum response body to download from the backend\n\t// during a health check.\n\tMaxSize int64 `json:\"max_size,omitempty\"`\n\n\t// The HTTP status code to expect from a healthy backend.\n\tExpectStatus int `json:\"expect_status,omitempty\"`\n\n\t// A regular expression against which to match the response\n\t// body of a healthy backend.\n\tExpectBody string `json:\"expect_body,omitempty\"`\n\n\turi        *url.URL\n\thttpClient *http.Client\n\tbodyRegexp 
*regexp.Regexp\n\tlogger     *zap.Logger\n}\n\n// Provision ensures that a is set up properly before use.\nfunc (a *ActiveHealthChecks) Provision(ctx caddy.Context, h *Handler) error {\n\tif !a.IsEnabled() {\n\t\treturn nil\n\t}\n\n\t// Canonicalize the header keys ahead of time, since\n\t// JSON unmarshaled headers may be incorrect\n\tcleaned := http.Header{}\n\tfor key, hdrs := range a.Headers {\n\t\tfor _, val := range hdrs {\n\t\t\tcleaned.Add(key, val)\n\t\t}\n\t}\n\ta.Headers = cleaned\n\n\t// If Method is not set, default to GET\n\tif a.Method == \"\" {\n\t\ta.Method = http.MethodGet\n\t}\n\n\th.HealthChecks.Active.logger = h.logger.Named(\"health_checker.active\")\n\n\ttimeout := time.Duration(a.Timeout)\n\tif timeout == 0 {\n\t\ttimeout = 5 * time.Second\n\t}\n\n\tif a.Path != \"\" {\n\t\ta.logger.Warn(\"the 'path' option is deprecated, please use 'uri' instead!\")\n\t}\n\n\t// parse the URI string (supports path and query)\n\tif a.URI != \"\" {\n\t\tparsedURI, err := url.Parse(a.URI)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ta.uri = parsedURI\n\t}\n\n\ta.httpClient = &http.Client{\n\t\tTimeout:   timeout,\n\t\tTransport: h.Transport,\n\t\tCheckRedirect: func(req *http.Request, via []*http.Request) error {\n\t\t\tif !a.FollowRedirects {\n\t\t\t\treturn http.ErrUseLastResponse\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tfor _, upstream := range h.Upstreams {\n\t\t// if there's an alternative upstream for health-check provided in the config,\n\t\t// then use it, otherwise use the upstream's dial address. 
if upstream is used,\n\t\t// then the port is ignored.\n\t\tif a.Upstream != \"\" {\n\t\t\tupstream.activeHealthCheckUpstream = a.Upstream\n\t\t} else if a.Port != 0 {\n\t\t\t// if there's an alternative port for health-check provided in the config,\n\t\t\t// then use it, otherwise use the port of upstream.\n\t\t\tupstream.activeHealthCheckPort = a.Port\n\t\t}\n\t}\n\n\tif a.Interval == 0 {\n\t\ta.Interval = caddy.Duration(30 * time.Second)\n\t}\n\n\tif a.ExpectBody != \"\" {\n\t\tvar err error\n\t\ta.bodyRegexp, err = regexp.Compile(a.ExpectBody)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"expect_body: compiling regular expression: %v\", err)\n\t\t}\n\t}\n\n\tif a.Passes < 1 {\n\t\ta.Passes = 1\n\t}\n\n\tif a.Fails < 1 {\n\t\ta.Fails = 1\n\t}\n\n\treturn nil\n}\n\n// IsEnabled checks if the active health checks have\n// the minimum config necessary to be enabled.\nfunc (a *ActiveHealthChecks) IsEnabled() bool {\n\treturn a.Path != \"\" || a.URI != \"\" || a.Port != 0\n}\n\n// PassiveHealthChecks holds configuration related to passive\n// health checks (that is, health checks which occur during\n// the normal flow of request proxying).\ntype PassiveHealthChecks struct {\n\t// How long to remember a failed request to a backend. A duration > 0\n\t// enables passive health checking. Default is 0.\n\tFailDuration caddy.Duration `json:\"fail_duration,omitempty\"`\n\n\t// The number of failed requests within the FailDuration window to\n\t// consider a backend as \"down\". Must be >= 1; default is 1. 
Requires\n\t// that FailDuration be > 0.\n\tMaxFails int `json:\"max_fails,omitempty\"`\n\n\t// Limits the number of simultaneous requests to a backend by\n\t// marking the backend as \"down\" if it has this many concurrent\n\t// requests or more.\n\tUnhealthyRequestCount int `json:\"unhealthy_request_count,omitempty\"`\n\n\t// Count the request as failed if the response comes back with\n\t// one of these status codes.\n\tUnhealthyStatus []int `json:\"unhealthy_status,omitempty\"`\n\n\t// Count the request as failed if the response takes at least this\n\t// long to receive.\n\tUnhealthyLatency caddy.Duration `json:\"unhealthy_latency,omitempty\"`\n\n\tlogger *zap.Logger\n}\n\n// CircuitBreaker is a type that can act as an early-warning\n// system for the health checker when backends are getting\n// overloaded. This interface is still experimental and is\n// subject to change.\ntype CircuitBreaker interface {\n\tOK() bool\n\tRecordMetric(statusCode int, latency time.Duration)\n}\n\n// activeHealthChecker runs active health checks on a\n// regular basis and blocks until the handler's\n// context (h.ctx) is done.\nfunc (h *Handler) activeHealthChecker() {\n\tdefer func() {\n\t\tif err := recover(); err != nil {\n\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"active health checker panicked\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.Any(\"error\", err),\n\t\t\t\t\tzap.ByteString(\"stack\", debug.Stack()),\n\t\t\t\t)\n\t\t\t}\n\t\t}\n\t}()\n\tticker := time.NewTicker(time.Duration(h.HealthChecks.Active.Interval))\n\th.doActiveHealthCheckForAllHosts()\n\tfor {\n\t\tselect {\n\t\tcase <-ticker.C:\n\t\t\th.doActiveHealthCheckForAllHosts()\n\t\tcase <-h.ctx.Done():\n\t\t\tticker.Stop()\n\t\t\treturn\n\t\t}\n\t}\n}\n\n// doActiveHealthCheckForAllHosts immediately performs\n// health checks for all upstream hosts configured by h.\nfunc (h *Handler) doActiveHealthCheckForAllHosts() {\n\tfor _, upstream := range h.Upstreams {\n\t\tgo 
func(upstream *Upstream) {\n\t\t\tdefer func() {\n\t\t\t\tif err := recover(); err != nil {\n\t\t\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"active health checker panicked\"); c != nil {\n\t\t\t\t\t\tc.Write(\n\t\t\t\t\t\t\tzap.Any(\"error\", err),\n\t\t\t\t\t\t\tzap.ByteString(\"stack\", debug.Stack()),\n\t\t\t\t\t\t)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}()\n\n\t\t\trepl := caddy.NewReplacer()\n\n\t\t\tnetworkAddr, err := repl.ReplaceOrErr(upstream.Dial, true, true)\n\t\t\tif err != nil {\n\t\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"invalid use of placeholders in dial address for active health checks\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"address\", networkAddr),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\taddr, err := caddy.ParseNetworkAddress(networkAddr)\n\t\t\tif err != nil {\n\t\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"bad network address\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"address\", networkAddr),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif hcp := uint(upstream.activeHealthCheckPort); hcp != 0 {\n\t\t\t\tif addr.IsUnixNetwork() || addr.IsFdNetwork() {\n\t\t\t\t\taddr.Network = \"tcp\" // a health check port was configured, so assume a TCP address\n\t\t\t\t}\n\t\t\t\taddr.StartPort, addr.EndPort = hcp, hcp\n\t\t\t}\n\t\t\tif addr.PortRangeSize() != 1 {\n\t\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"multiple addresses (upstream must map to only one address)\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"address\", networkAddr),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\thostAddr := addr.JoinHostPort(0)\n\t\t\tif addr.IsUnixNetwork() || addr.IsFdNetwork() {\n\t\t\t\t// this will be used as the Host portion of an http.Request URL, and\n\t\t\t\t// paths to socket files would produce an error when creating 
the URL,\n\t\t\t\t// so use a fake Host value instead; unix sockets are usually local\n\t\t\t\thostAddr = \"localhost\"\n\t\t\t}\n\n\t\t\t// Fill in the dial info for the upstream.\n\t\t\t// If an alternative health check upstream is set, use that instead.\n\t\t\tdialInfoUpstream := upstream\n\t\t\tif h.HealthChecks.Active.Upstream != \"\" {\n\t\t\t\tdialInfoUpstream = &Upstream{\n\t\t\t\t\tDial: h.HealthChecks.Active.Upstream,\n\t\t\t\t}\n\t\t\t} else if upstream.activeHealthCheckPort != 0 {\n\t\t\t\t// health_port overrides the port; addr has already been updated\n\t\t\t\t// with the health port, so use its address for dialing\n\t\t\t\tdialInfoUpstream = &Upstream{\n\t\t\t\t\tDial: addr.JoinHostPort(0),\n\t\t\t\t}\n\t\t\t}\n\t\t\tdialInfo, _ := dialInfoUpstream.fillDialInfo(repl)\n\n\t\t\terr = h.doActiveHealthCheck(dialInfo, hostAddr, networkAddr, upstream)\n\t\t\tif err != nil {\n\t\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"active health check failed\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"address\", hostAddr),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t}\n\t\t}(upstream)\n\t}\n}\n\n// doActiveHealthCheck performs a health check against the upstream, which\n// can be reached at address hostAddr. The actual address for\n// the request will be built according to active health checker\n// config. The health status of the host will be updated\n// according to whether it passes the health check. 
An error is\n// returned only if the health check fails to occur or if marking\n// the host's health status fails.\nfunc (h *Handler) doActiveHealthCheck(dialInfo DialInfo, hostAddr string, networkAddr string, upstream *Upstream) error {\n\t// create the URL for the request that acts as a health check\n\tu := &url.URL{\n\t\tScheme: \"http\",\n\t\tHost:   hostAddr,\n\t}\n\n\t// split the host and port if possible, override the port if configured\n\thost, port, err := net.SplitHostPort(hostAddr)\n\tif err != nil {\n\t\thost = hostAddr\n\t}\n\n\t// ignore active health check port if active upstream is provided as the\n\t// active upstream already contains the replacement port\n\tif h.HealthChecks.Active.Upstream != \"\" {\n\t\tu.Host = h.HealthChecks.Active.Upstream\n\t} else if h.HealthChecks.Active.Port != 0 {\n\t\tport := strconv.Itoa(h.HealthChecks.Active.Port)\n\t\tu.Host = net.JoinHostPort(host, port)\n\t}\n\n\t// override health check schemes if applicable\n\tif hcsot, ok := h.Transport.(HealthCheckSchemeOverriderTransport); ok {\n\t\thcsot.OverrideHealthCheckScheme(u, port)\n\t}\n\n\t// if we have a provisioned uri, use that, otherwise use\n\t// the deprecated Path option\n\tif h.HealthChecks.Active.uri != nil {\n\t\tu.Path = h.HealthChecks.Active.uri.Path\n\t\tu.RawQuery = h.HealthChecks.Active.uri.RawQuery\n\t} else {\n\t\tu.Path = h.HealthChecks.Active.Path\n\t}\n\n\t// replacer used for both body and headers. Only globals (env vars, system info, etc.) 
are available\n\trepl := caddy.NewReplacer()\n\n\t// if body is provided, create a reader for it, otherwise nil\n\tvar requestBody io.Reader\n\tif h.HealthChecks.Active.Body != \"\" {\n\t\t// set body, using replacer\n\t\trequestBody = strings.NewReader(repl.ReplaceAll(h.HealthChecks.Active.Body, \"\"))\n\t}\n\n\t// attach dialing information to this request, as well as context values that\n\t// may be expected by handlers of this request\n\tctx := h.ctx.Context\n\tctx = context.WithValue(ctx, caddy.ReplacerCtxKey, caddy.NewReplacer())\n\tctx = context.WithValue(ctx, caddyhttp.VarsCtxKey, map[string]any{\n\t\tdialInfoVarKey: dialInfo,\n\t})\n\treq, err := http.NewRequestWithContext(ctx, h.HealthChecks.Active.Method, u.String(), requestBody)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"making request: %v\", err)\n\t}\n\tctx = context.WithValue(ctx, caddyhttp.OriginalRequestCtxKey, *req)\n\treq = req.WithContext(ctx)\n\n\t// set headers, using replacer\n\trepl.Set(\"http.reverse_proxy.active.target_upstream\", networkAddr)\n\tfor key, vals := range h.HealthChecks.Active.Headers {\n\t\tkey = repl.ReplaceAll(key, \"\")\n\t\tif key == \"Host\" {\n\t\t\treq.Host = repl.ReplaceAll(h.HealthChecks.Active.Headers.Get(key), \"\")\n\t\t\tcontinue\n\t\t}\n\t\tfor _, val := range vals {\n\t\t\treq.Header.Add(key, repl.ReplaceKnown(val, \"\"))\n\t\t}\n\t}\n\n\tmarkUnhealthy := func() {\n\t\t// increment failures and then check if it has reached the threshold to mark unhealthy\n\t\terr := upstream.Host.countHealthFail(1)\n\t\tif err != nil {\n\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"could not count active health failure\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.String(\"host\", upstream.Dial),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t\tif upstream.Host.activeHealthFails() >= h.HealthChecks.Active.Fails {\n\t\t\t// dispatch an event that the host newly became unhealthy\n\t\t\tif upstream.setHealthy(false) 
{\n\t\t\t\th.events.Emit(h.ctx, \"unhealthy\", map[string]any{\"host\": hostAddr})\n\t\t\t\tupstream.Host.resetHealth()\n\t\t\t}\n\t\t}\n\t}\n\n\tmarkHealthy := func() {\n\t\t// increment passes and then check if it has reached the threshold to be healthy\n\t\terr := upstream.countHealthPass(1)\n\t\tif err != nil {\n\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.ErrorLevel, \"could not count active health pass\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.String(\"host\", upstream.Dial),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t\tif upstream.Host.activeHealthPasses() >= h.HealthChecks.Active.Passes {\n\t\t\tif upstream.setHealthy(true) {\n\t\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.InfoLevel, \"host is up\"); c != nil {\n\t\t\t\t\tc.Write(zap.String(\"host\", hostAddr))\n\t\t\t\t}\n\t\t\t\th.events.Emit(h.ctx, \"healthy\", map[string]any{\"host\": hostAddr})\n\t\t\t\tupstream.Host.resetHealth()\n\t\t\t}\n\t\t}\n\t}\n\n\t// do the request, being careful to tame the response body\n\tresp, err := h.HealthChecks.Active.httpClient.Do(req) //nolint:gosec // no SSRF\n\tif err != nil {\n\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.InfoLevel, \"HTTP request failed\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"host\", hostAddr),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\t\tmarkUnhealthy()\n\t\treturn nil\n\t}\n\tvar body io.Reader = resp.Body\n\tif h.HealthChecks.Active.MaxSize > 0 {\n\t\tbody = io.LimitReader(body, h.HealthChecks.Active.MaxSize)\n\t}\n\tdefer func() {\n\t\t// drain any remaining body so connection could be re-used\n\t\t_, _ = io.Copy(io.Discard, body)\n\t\tresp.Body.Close()\n\t}()\n\n\t// if status code is outside criteria, mark down\n\tif h.HealthChecks.Active.ExpectStatus > 0 {\n\t\tif !caddyhttp.StatusCodeMatches(resp.StatusCode, h.HealthChecks.Active.ExpectStatus) {\n\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.InfoLevel, \"unexpected status code\"); c != nil 
{\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.Int(\"status_code\", resp.StatusCode),\n\t\t\t\t\tzap.String(\"host\", hostAddr),\n\t\t\t\t)\n\t\t\t}\n\t\t\tmarkUnhealthy()\n\t\t\treturn nil\n\t\t}\n\t} else if resp.StatusCode < 200 || resp.StatusCode >= 300 {\n\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.InfoLevel, \"status code out of tolerances\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.Int(\"status_code\", resp.StatusCode),\n\t\t\t\tzap.String(\"host\", hostAddr),\n\t\t\t)\n\t\t}\n\t\tmarkUnhealthy()\n\t\treturn nil\n\t}\n\n\t// if body does not match regex, mark down\n\tif h.HealthChecks.Active.bodyRegexp != nil {\n\t\tbodyBytes, err := io.ReadAll(body)\n\t\tif err != nil {\n\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.InfoLevel, \"failed to read response body\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.String(\"host\", hostAddr),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\tmarkUnhealthy()\n\t\t\treturn nil\n\t\t}\n\t\tif !h.HealthChecks.Active.bodyRegexp.Match(bodyBytes) {\n\t\t\tif c := h.HealthChecks.Active.logger.Check(zapcore.InfoLevel, \"response body failed expectations\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.String(\"host\", hostAddr),\n\t\t\t\t)\n\t\t\t}\n\t\t\tmarkUnhealthy()\n\t\t\treturn nil\n\t\t}\n\t}\n\n\t// passed health check parameters, so mark as healthy\n\tmarkHealthy()\n\n\treturn nil\n}\n\n// countFailure is used with passive health checks. It\n// remembers 1 failure for upstream for the configured\n// duration. 
If passive health checks are disabled or\n// failure expiry is 0, this is a no-op.\nfunc (h *Handler) countFailure(upstream *Upstream) {\n\t// only count failures if passive health checking is enabled\n\t// and if failures are configured to have a non-zero expiry\n\tif h.HealthChecks == nil || h.HealthChecks.Passive == nil {\n\t\treturn\n\t}\n\tfailDuration := time.Duration(h.HealthChecks.Passive.FailDuration)\n\tif failDuration == 0 {\n\t\treturn\n\t}\n\n\t// count failure immediately\n\terr := upstream.Host.countFail(1)\n\tif err != nil {\n\t\tif c := h.HealthChecks.Passive.logger.Check(zapcore.ErrorLevel, \"could not count failure\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"host\", upstream.Dial),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\t\treturn\n\t}\n\n\t// forget it later\n\tgo func(host *Host, failDuration time.Duration) {\n\t\tdefer func() {\n\t\t\tif err := recover(); err != nil {\n\t\t\t\tif c := h.HealthChecks.Passive.logger.Check(zapcore.ErrorLevel, \"passive health check failure forgetter panicked\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.Any(\"error\", err),\n\t\t\t\t\t\tzap.ByteString(\"stack\", debug.Stack()),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t\ttimer := time.NewTimer(failDuration)\n\t\tselect {\n\t\tcase <-h.ctx.Done():\n\t\t\tif !timer.Stop() {\n\t\t\t\t<-timer.C\n\t\t\t}\n\t\tcase <-timer.C:\n\t\t}\n\t\terr := host.countFail(-1)\n\t\tif err != nil {\n\t\t\tif c := h.HealthChecks.Passive.logger.Check(zapcore.ErrorLevel, \"could not forget failure\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.String(\"host\", upstream.Dial),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t}\n\t}(upstream.Host, failDuration)\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/hosts.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/netip\"\n\t\"strconv\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// UpstreamPool is a collection of upstreams.\ntype UpstreamPool []*Upstream\n\n// Upstream bridges this proxy's configuration to the\n// state of the backend host it is correlated with.\n// Upstream values must not be copied.\ntype Upstream struct {\n\t*Host `json:\"-\"`\n\n\t// The [network address](/docs/conventions#network-addresses)\n\t// to dial to connect to the upstream. Must represent precisely\n\t// one socket (i.e. no port ranges). A valid network address\n\t// either has a host and port or is a unix socket address.\n\t//\n\t// Placeholders may be used to make the upstream dynamic, but be\n\t// aware of the health check implications of this: a single\n\t// upstream that represents numerous (perhaps arbitrary) backends\n\t// can be considered down if one or enough of the arbitrary\n\t// backends is down. Also be aware of open proxy vulnerabilities.\n\tDial string `json:\"dial,omitempty\"`\n\n\t// The maximum number of simultaneous requests to allow to\n\t// this upstream. 
If set, overrides the global passive health\n\t// check UnhealthyRequestCount value.\n\tMaxRequests int `json:\"max_requests,omitempty\"`\n\n\t// TODO: This could be really useful, to bind requests\n\t// with certain properties to specific backends\n\t// HeaderAffinity string\n\t// IPAffinity     string\n\n\tactiveHealthCheckPort     int\n\tactiveHealthCheckUpstream string\n\thealthCheckPolicy         *PassiveHealthChecks\n\tcb                        CircuitBreaker\n\tunhealthy                 int32 // accessed atomically; status from active health checker\n}\n\n// (pointer receiver necessary to avoid a race condition, since\n// copying the Upstream reads the 'unhealthy' field which is\n// accessed atomically)\nfunc (u *Upstream) String() string { return u.Dial }\n\n// Available returns true if the remote host\n// is available to receive requests. This is\n// the method that should be used by selection\n// policies, etc. to determine if a backend\n// should be able to be sent a request.\nfunc (u *Upstream) Available() bool {\n\treturn u.Healthy() && !u.Full()\n}\n\n// Healthy returns true if the remote host\n// is currently known to be healthy or \"up\".\n// It consults the circuit breaker, if any.\nfunc (u *Upstream) Healthy() bool {\n\thealthy := u.healthy()\n\tif healthy && u.healthCheckPolicy != nil {\n\t\thealthy = u.Host.Fails() < u.healthCheckPolicy.MaxFails\n\t}\n\tif healthy && u.cb != nil {\n\t\thealthy = u.cb.OK()\n\t}\n\treturn healthy\n}\n\n// Full returns true if the remote host\n// cannot receive more requests at this time.\nfunc (u *Upstream) Full() bool {\n\treturn u.MaxRequests > 0 && u.Host.NumRequests() >= u.MaxRequests\n}\n\n// fillDialInfo returns a filled DialInfo for upstream u, using the request\n// context. 
Note that the returned value is not a pointer.\nfunc (u *Upstream) fillDialInfo(repl *caddy.Replacer) (DialInfo, error) {\n\tvar addr caddy.NetworkAddress\n\n\t// use provided dial address\n\tvar err error\n\tdial := repl.ReplaceAll(u.Dial, \"\")\n\taddr, err = caddy.ParseNetworkAddress(dial)\n\tif err != nil {\n\t\treturn DialInfo{}, fmt.Errorf(\"upstream %s: invalid dial address %s: %v\", u.Dial, dial, err)\n\t}\n\tif numPorts := addr.PortRangeSize(); numPorts != 1 {\n\t\treturn DialInfo{}, fmt.Errorf(\"upstream %s: dial address must represent precisely one socket: %s represents %d\",\n\t\t\tu.Dial, dial, numPorts)\n\t}\n\n\treturn DialInfo{\n\t\tUpstream: u,\n\t\tNetwork:  addr.Network,\n\t\tAddress:  addr.JoinHostPort(0),\n\t\tHost:     addr.Host,\n\t\tPort:     strconv.Itoa(int(addr.StartPort)),\n\t}, nil\n}\n\nfunc (u *Upstream) fillHost() {\n\thost := new(Host)\n\texistingHost, loaded := hosts.LoadOrStore(u.String(), host)\n\tif loaded {\n\t\thost = existingHost.(*Host)\n\t}\n\tu.Host = host\n}\n\n// fillDynamicHost is like fillHost, but stores the host in the separate\n// dynamicHosts map rather than the reference-counted UsagePool. Dynamic\n// hosts are not reference-counted; instead, they are retained as long as\n// they are actively seen and are evicted by a background cleanup goroutine\n// after dynamicHostIdleExpiry of inactivity. This preserves health state\n// (e.g. 
passive fail counts) across sequential requests.\nfunc (u *Upstream) fillDynamicHost() {\n\tdynamicHostsMu.Lock()\n\tentry, ok := dynamicHosts[u.String()]\n\tif ok {\n\t\tentry.lastSeen = time.Now()\n\t\tdynamicHosts[u.String()] = entry\n\t\tu.Host = entry.host\n\t} else {\n\t\th := new(Host)\n\t\tdynamicHosts[u.String()] = dynamicHostEntry{host: h, lastSeen: time.Now()}\n\t\tu.Host = h\n\t}\n\tdynamicHostsMu.Unlock()\n\n\t// ensure the cleanup goroutine is running\n\tdynamicHostsCleanerOnce.Do(func() {\n\t\tgo func() {\n\t\t\tfor {\n\t\t\t\ttime.Sleep(dynamicHostCleanupInterval)\n\t\t\t\tdynamicHostsMu.Lock()\n\t\t\t\tfor addr, entry := range dynamicHosts {\n\t\t\t\t\tif time.Since(entry.lastSeen) > dynamicHostIdleExpiry {\n\t\t\t\t\t\tdelete(dynamicHosts, addr)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tdynamicHostsMu.Unlock()\n\t\t\t}\n\t\t}()\n\t})\n}\n\n// Host is the basic, in-memory representation of the state of a remote host.\n// Its fields are accessed atomically and Host values must not be copied.\ntype Host struct {\n\tnumRequests  int64 // must be 64-bit aligned on 32-bit systems (see https://golang.org/pkg/sync/atomic/#pkg-note-BUG)\n\tfails        int64\n\tactivePasses int64\n\tactiveFails  int64\n}\n\n// NumRequests returns the number of active requests to the upstream.\nfunc (h *Host) NumRequests() int {\n\treturn int(atomic.LoadInt64(&h.numRequests))\n}\n\n// Fails returns the number of recent failures with the upstream.\nfunc (h *Host) Fails() int {\n\treturn int(atomic.LoadInt64(&h.fails))\n}\n\n// activeHealthPasses returns the number of consecutive active health check passes with the upstream.\nfunc (h *Host) activeHealthPasses() int {\n\treturn int(atomic.LoadInt64(&h.activePasses))\n}\n\n// activeHealthFails returns the number of consecutive active health check failures with the upstream.\nfunc (h *Host) activeHealthFails() int {\n\treturn int(atomic.LoadInt64(&h.activeFails))\n}\n\n// countRequest mutates the active request count by\n// delta. 
It returns an error if the adjustment fails.\nfunc (h *Host) countRequest(delta int) error {\n\tresult := atomic.AddInt64(&h.numRequests, int64(delta))\n\tif result < 0 {\n\t\treturn fmt.Errorf(\"count below 0: %d\", result)\n\t}\n\treturn nil\n}\n\n// countFail mutates the recent failures count by\n// delta. It returns an error if the adjustment fails.\nfunc (h *Host) countFail(delta int) error {\n\tresult := atomic.AddInt64(&h.fails, int64(delta))\n\tif result < 0 {\n\t\treturn fmt.Errorf(\"count below 0: %d\", result)\n\t}\n\treturn nil\n}\n\n// countHealthPass mutates the recent passes count by\n// delta. It returns an error if the adjustment fails.\nfunc (h *Host) countHealthPass(delta int) error {\n\tresult := atomic.AddInt64(&h.activePasses, int64(delta))\n\tif result < 0 {\n\t\treturn fmt.Errorf(\"count below 0: %d\", result)\n\t}\n\treturn nil\n}\n\n// countHealthFail mutates the recent failures count by\n// delta. It returns an error if the adjustment fails.\nfunc (h *Host) countHealthFail(delta int) error {\n\tresult := atomic.AddInt64(&h.activeFails, int64(delta))\n\tif result < 0 {\n\t\treturn fmt.Errorf(\"count below 0: %d\", result)\n\t}\n\treturn nil\n}\n\n// resetHealth resets the health check counters.\nfunc (h *Host) resetHealth() {\n\tatomic.StoreInt64(&h.activePasses, 0)\n\tatomic.StoreInt64(&h.activeFails, 0)\n}\n\n// healthy returns true if the upstream is not actively marked as unhealthy.\n// (This returns the status only from the \"active\" health checks.)\nfunc (u *Upstream) healthy() bool {\n\treturn atomic.LoadInt32(&u.unhealthy) == 0\n}\n\n// setHealthy sets the upstream as healthy or unhealthy\n// and returns true if the new value is different. 
This\n// sets the status only for the \"active\" health checks.\nfunc (u *Upstream) setHealthy(healthy bool) bool {\n\tvar unhealthy, compare int32 = 1, 0\n\tif healthy {\n\t\tunhealthy, compare = 0, 1\n\t}\n\treturn atomic.CompareAndSwapInt32(&u.unhealthy, compare, unhealthy)\n}\n\n// DialInfo contains information needed to dial a\n// connection to an upstream host. This information\n// may be different than that which is represented\n// in a URL (for example, unix sockets don't have\n// a host that can be represented in a URL, but\n// they certainly have a network name and address).\ntype DialInfo struct {\n\t// Upstream is the Upstream associated with\n\t// this DialInfo. It may be nil.\n\tUpstream *Upstream\n\n\t// The network to use. This should be one of\n\t// the values that is accepted by net.Dial:\n\t// https://golang.org/pkg/net/#Dial\n\tNetwork string\n\n\t// The address to dial. Follows the same\n\t// semantics and rules as net.Dial.\n\tAddress string\n\n\t// Host and Port are components of Address.\n\tHost, Port string\n}\n\n// String returns the Caddy network address form\n// by joining the network and address with a\n// forward slash.\nfunc (di DialInfo) String() string {\n\treturn caddy.JoinNetworkAddress(di.Network, di.Host, di.Port)\n}\n\n// GetDialInfo gets the upstream dialing info out of the context,\n// and returns true if there was a valid value; false otherwise.\nfunc GetDialInfo(ctx context.Context) (DialInfo, bool) {\n\tdialInfo, ok := caddyhttp.GetVar(ctx, dialInfoVarKey).(DialInfo)\n\treturn dialInfo, ok\n}\n\n// hosts is the global repository for hosts that are\n// currently in use by active configuration(s). This\n// allows the state of remote hosts to be preserved\n// through config reloads.\nvar hosts = caddy.NewUsagePool()\n\n// dynamicHosts tracks hosts that were provisioned from dynamic upstream\n// sources. Unlike static upstreams which are reference-counted via the\n// UsagePool, dynamic upstream hosts are not reference-counted. 
Instead,\n// their last-seen time is updated on each request, and a background\n// goroutine evicts entries that have been idle for dynamicHostIdleExpiry.\n// This preserves health state (e.g. passive fail counts) across requests\n// to the same dynamic backend.\nvar (\n\tdynamicHosts               = make(map[string]dynamicHostEntry)\n\tdynamicHostsMu             sync.RWMutex\n\tdynamicHostsCleanerOnce    sync.Once\n\tdynamicHostCleanupInterval = 5 * time.Minute\n\tdynamicHostIdleExpiry      = time.Hour\n)\n\n// dynamicHostEntry holds a Host and the last time it was seen\n// in a set of dynamic upstreams returned for a request.\ntype dynamicHostEntry struct {\n\thost     *Host\n\tlastSeen time.Time\n}\n\n// dialInfoVarKey is the key used for the variable that holds\n// the dial info for the upstream connection.\nconst dialInfoVarKey = \"reverse_proxy.dial_info\"\n\n// proxyProtocolInfoVarKey is the key used for the variable that holds\n// the proxy protocol info for the upstream connection.\nconst proxyProtocolInfoVarKey = \"reverse_proxy.proxy_protocol_info\"\n\n// ProxyProtocolInfo contains information needed to write proxy protocol to a\n// connection to an upstream host.\ntype ProxyProtocolInfo struct {\n\tAddrPort netip.AddrPort\n}\n\n// tlsH1OnlyVarKey is the key used that indicates the connection will use h1 only for TLS.\n// https://github.com/caddyserver/caddy/issues/7292\nconst tlsH1OnlyVarKey = \"reverse_proxy.tls_h1_only\"\n\n// proxyVarKey is the key used that indicates the proxy server used for a request.\nconst proxyVarKey = \"reverse_proxy.proxy\"\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/httptransport.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"fmt\"\n\tweakrand \"math/rand/v2\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"reflect\"\n\t\"slices\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/pires/go-proxyproto\"\n\t\"github.com/quic-go/quic-go/http3\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/net/http2\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n\t\"github.com/caddyserver/caddy/v2/modules/internal/network\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(HTTPTransport{})\n}\n\n// HTTPTransport is essentially a configuration wrapper for http.Transport.\n// It defines a JSON structure useful when configuring the HTTP transport\n// for Caddy's reverse proxy. 
It builds its http.Transport at Provision.\ntype HTTPTransport struct {\n\t// TODO: It's possible that other transports (like fastcgi) might be\n\t// able to borrow/use at least some of these config fields; if so,\n\t// maybe move them into a type called CommonTransport and embed it?\n\n\t// Configures the DNS resolver used to resolve the IP address of upstream hostnames.\n\tResolver *UpstreamResolver `json:\"resolver,omitempty\"`\n\n\t// Configures TLS to the upstream. Setting this to an empty struct\n\t// is sufficient to enable TLS with reasonable defaults.\n\tTLS *TLSConfig `json:\"tls,omitempty\"`\n\n\t// Configures HTTP Keep-Alive (enabled by default). Should only be\n\t// necessary if rigorous testing has shown that tuning this helps\n\t// improve performance.\n\tKeepAlive *KeepAlive `json:\"keep_alive,omitempty\"`\n\n\t// Whether to enable compression to upstream. Default: true\n\tCompression *bool `json:\"compression,omitempty\"`\n\n\t// Maximum number of connections per host. Default: 0 (no limit)\n\tMaxConnsPerHost int `json:\"max_conns_per_host,omitempty\"`\n\n\t// If non-empty, which PROXY protocol version to send when\n\t// connecting to an upstream. Default: off.\n\tProxyProtocol string `json:\"proxy_protocol,omitempty\"`\n\n\t// URL to the server that the HTTP transport will use to proxy\n\t// requests to the upstream. See http.Transport.Proxy for\n\t// information regarding supported protocols. This value takes\n\t// precedence over `HTTP_PROXY`, etc.\n\t//\n\t// Providing a value to this parameter results in\n\t// requests flowing through the reverse_proxy in the following\n\t// way:\n\t//\n\t// User Agent ->\n\t//  reverse_proxy ->\n\t//  forward_proxy_url -> upstream\n\t//\n\t// Default: http.ProxyFromEnvironment\n\t// DEPRECATED: Use NetworkProxyRaw|`network_proxy` instead. Subject to removal.\n\tForwardProxyURL string `json:\"forward_proxy_url,omitempty\"`\n\n\t// How long to wait before timing out trying to connect to\n\t// an upstream. 
Default: `3s`.\n\tDialTimeout caddy.Duration `json:\"dial_timeout,omitempty\"`\n\n\t// How long to wait before spawning an RFC 6555 Fast Fallback\n\t// connection. A negative value disables this. Default: `300ms`.\n\tFallbackDelay caddy.Duration `json:\"dial_fallback_delay,omitempty\"`\n\n\t// How long to wait for reading response headers from server. Default: No timeout.\n\tResponseHeaderTimeout caddy.Duration `json:\"response_header_timeout,omitempty\"`\n\n\t// The length of time to wait for a server's first response\n\t// headers after fully writing the request headers if the\n\t// request has a header \"Expect: 100-continue\". Default: No timeout.\n\tExpectContinueTimeout caddy.Duration `json:\"expect_continue_timeout,omitempty\"`\n\n\t// The maximum bytes to read from response headers. Default: `10MiB`.\n\tMaxResponseHeaderSize int64 `json:\"max_response_header_size,omitempty\"`\n\n\t// The size of the write buffer in bytes. Default: `4KiB`.\n\tWriteBufferSize int `json:\"write_buffer_size,omitempty\"`\n\n\t// The size of the read buffer in bytes. Default: `4KiB`.\n\tReadBufferSize int `json:\"read_buffer_size,omitempty\"`\n\n\t// The maximum time to wait for next read from backend. Default: no timeout.\n\tReadTimeout caddy.Duration `json:\"read_timeout,omitempty\"`\n\n\t// The maximum time to wait for next write to backend. Default: no timeout.\n\tWriteTimeout caddy.Duration `json:\"write_timeout,omitempty\"`\n\n\t// The versions of HTTP to support. As a special case, \"h2c\"\n\t// can be specified to use H2C (HTTP/2 over Cleartext) to the\n\t// upstream (this feature is experimental and subject to\n\t// change or removal). Default: [\"1.1\", \"2\"]\n\t//\n\t// EXPERIMENTAL: \"3\" enables HTTP/3, but it must be the only\n\t// version specified if enabled. Additionally, HTTPS must be\n\t// enabled to the upstream as HTTP/3 requires TLS. 
Subject\n\t// to change or removal while experimental.\n\tVersions []string `json:\"versions,omitempty\"`\n\n\t// Specify the address to bind to when connecting to an upstream. In other words,\n\t// it is the address the upstream sees as the remote address.\n\tLocalAddress string `json:\"local_address,omitempty\"`\n\n\t// The pre-configured underlying HTTP transport.\n\tTransport *http.Transport `json:\"-\"`\n\n\t// The module that provides the network (forward) proxy\n\t// URL that the HTTP transport will use to proxy\n\t// requests to the upstream. See [http.Transport.Proxy](https://pkg.go.dev/net/http#Transport.Proxy)\n\t// for information regarding supported protocols.\n\t//\n\t// Providing a value to this parameter results in requests\n\t// flowing through the reverse_proxy in the following way:\n\t//\n\t// User Agent ->\n\t//  reverse_proxy ->\n\t//  [proxy provided by the module] -> upstream\n\t//\n\t// If nil, defaults to reading the `HTTP_PROXY`,\n\t// `HTTPS_PROXY`, and `NO_PROXY` environment variables.\n\tNetworkProxyRaw json.RawMessage `json:\"network_proxy,omitempty\" caddy:\"namespace=caddy.network_proxy inline_key=from\"`\n\n\th3Transport *http3.Transport // TODO: EXPERIMENTAL (May 2024)\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (HTTPTransport) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.transport.http\",\n\t\tNew: func() caddy.Module { return new(HTTPTransport) },\n\t}\n}\n\nvar (\n\tallowedVersions       = []string{\"1.1\", \"2\", \"h2c\", \"3\"}\n\tallowedVersionsString = strings.Join(allowedVersions, \", \")\n)\n\n// Provision sets up h.Transport with a *http.Transport\n// that is ready to use.\nfunc (h *HTTPTransport) Provision(ctx caddy.Context) error {\n\tif len(h.Versions) == 0 {\n\t\th.Versions = []string{\"1.1\", \"2\"}\n\t}\n\t// some users may provide http versions not recognized by caddy, instead of trying to\n\t// guess the version, we just error out and let the user 
fix their config\n\t// see: https://github.com/caddyserver/caddy/issues/7111\n\tfor _, v := range h.Versions {\n\t\tif !slices.Contains(allowedVersions, v) {\n\t\t\treturn fmt.Errorf(\"unsupported HTTP version: %s, supported versions: %s\", v, allowedVersionsString)\n\t\t}\n\t}\n\n\trt, err := h.NewTransport(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\th.Transport = rt\n\n\treturn nil\n}\n\n// NewTransport builds a standard-lib-compatible http.Transport value from h.\nfunc (h *HTTPTransport) NewTransport(caddyCtx caddy.Context) (*http.Transport, error) {\n\t// Set keep-alive defaults if not otherwise configured\n\tif h.KeepAlive == nil {\n\t\th.KeepAlive = new(KeepAlive)\n\t}\n\tif h.KeepAlive.ProbeInterval == 0 {\n\t\th.KeepAlive.ProbeInterval = caddy.Duration(30 * time.Second)\n\t}\n\tif h.KeepAlive.IdleConnTimeout == 0 {\n\t\th.KeepAlive.IdleConnTimeout = caddy.Duration(2 * time.Minute)\n\t}\n\tif h.KeepAlive.MaxIdleConnsPerHost == 0 {\n\t\th.KeepAlive.MaxIdleConnsPerHost = 32 // seems about optimal, see #2805\n\t}\n\n\t// Set a relatively short default dial timeout.\n\t// This is helpful to make load-balancer retries more speedy.\n\tif h.DialTimeout == 0 {\n\t\th.DialTimeout = caddy.Duration(3 * time.Second)\n\t}\n\n\tdialer := &net.Dialer{\n\t\tTimeout:       time.Duration(h.DialTimeout),\n\t\tFallbackDelay: time.Duration(h.FallbackDelay),\n\t}\n\n\tif h.LocalAddress != \"\" {\n\t\tnetaddr, err := caddy.ParseNetworkAddressWithDefaults(h.LocalAddress, \"tcp\", 0)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif netaddr.PortRangeSize() > 1 {\n\t\t\treturn nil, fmt.Errorf(\"local_address must be a single address, not a port range\")\n\t\t}\n\t\tswitch netaddr.Network {\n\t\tcase \"tcp\", \"tcp4\", \"tcp6\":\n\t\t\tdialer.LocalAddr, err = net.ResolveTCPAddr(netaddr.Network, netaddr.JoinHostPort(0))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\tcase \"unix\", \"unixgram\", \"unixpacket\":\n\t\t\tdialer.LocalAddr, err = 
net.ResolveUnixAddr(netaddr.Network, netaddr.JoinHostPort(0))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\tcase \"udp\", \"udp4\", \"udp6\":\n\t\t\treturn nil, fmt.Errorf(\"local_address must be a TCP address, not a UDP address\")\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unsupported network\")\n\t\t}\n\t}\n\tif h.Resolver != nil {\n\t\terr := h.Resolver.ParseAddresses()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\td := &net.Dialer{\n\t\t\tTimeout:       time.Duration(h.DialTimeout),\n\t\t\tFallbackDelay: time.Duration(h.FallbackDelay),\n\t\t}\n\t\tdialer.Resolver = &net.Resolver{\n\t\t\tPreferGo: true,\n\t\t\tDial: func(ctx context.Context, _, _ string) (net.Conn, error) {\n\t\t\t\t//nolint:gosec\n\t\t\t\taddr := h.Resolver.netAddrs[weakrand.IntN(len(h.Resolver.netAddrs))]\n\t\t\t\treturn d.DialContext(ctx, addr.Network, addr.JoinHostPort(0))\n\t\t\t},\n\t\t}\n\t}\n\n\tdialContext := func(ctx context.Context, network, address string) (net.Conn, error) {\n\t\t// The network is usually tcp, and the address is the host in http.Request.URL.Host,\n\t\t// which has been overwritten in directRequest.\n\t\t// However, if a proxy is used according to http.ProxyFromEnvironment or a proxy provider,\n\t\t// address will be the address of the proxy server.\n\n\t\t// This means we can safely use the address in dialInfo if no proxy is used (the address and network will be the same anyway)\n\t\t// or if the upstream is unix (because there is no way a SOCKS or HTTP proxy can be used for a unix address).\n\t\tif dialInfo, ok := GetDialInfo(ctx); ok {\n\t\t\tif caddyhttp.GetVar(ctx, proxyVarKey) == nil || strings.HasPrefix(dialInfo.Network, \"unix\") {\n\t\t\t\tnetwork = dialInfo.Network\n\t\t\t\taddress = dialInfo.Address\n\t\t\t}\n\t\t}\n\n\t\tconn, err := dialer.DialContext(ctx, network, address)\n\t\tif err != nil {\n\t\t\t// identify this error as one that occurred during\n\t\t\t// dialing, which can be important when trying to\n\t\t\t// decide whether to 
retry a request\n\t\t\treturn nil, DialError{err}\n\t\t}\n\n\t\tif h.ProxyProtocol != \"\" {\n\t\t\tproxyProtocolInfo, ok := caddyhttp.GetVar(ctx, proxyProtocolInfoVarKey).(ProxyProtocolInfo)\n\t\t\tif !ok {\n\t\t\t\treturn nil, fmt.Errorf(\"failed to get proxy protocol info from context\")\n\t\t\t}\n\t\t\tvar proxyv byte\n\t\t\tswitch h.ProxyProtocol {\n\t\t\tcase \"v1\":\n\t\t\t\tproxyv = 1\n\t\t\tcase \"v2\":\n\t\t\t\tproxyv = 2\n\t\t\tdefault:\n\t\t\t\treturn nil, fmt.Errorf(\"unexpected proxy protocol version\")\n\t\t\t}\n\n\t\t\t// The src and dst have to be of the same address family. As we don't know the original\n\t\t\t// dst address (it's kind of impossible to know) and this address is generally of very\n\t\t\t// little interest, we just set it to all zeros.\n\t\t\tvar destAddr net.Addr\n\t\t\tswitch {\n\t\t\tcase proxyProtocolInfo.AddrPort.Addr().Is4():\n\t\t\t\tdestAddr = &net.TCPAddr{\n\t\t\t\t\tIP: net.IPv4zero,\n\t\t\t\t}\n\t\t\tcase proxyProtocolInfo.AddrPort.Addr().Is6():\n\t\t\t\tdestAddr = &net.TCPAddr{\n\t\t\t\t\tIP: net.IPv6zero,\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn nil, fmt.Errorf(\"unexpected remote addr type in proxy protocol info\")\n\t\t\t}\n\t\t\tsourceAddr := &net.TCPAddr{\n\t\t\t\tIP:   proxyProtocolInfo.AddrPort.Addr().AsSlice(),\n\t\t\t\tPort: int(proxyProtocolInfo.AddrPort.Port()),\n\t\t\t\tZone: proxyProtocolInfo.AddrPort.Addr().Zone(),\n\t\t\t}\n\t\t\theader := proxyproto.HeaderProxyFromAddrs(proxyv, sourceAddr, destAddr)\n\n\t\t\t// retain the log message structure\n\t\t\tswitch h.ProxyProtocol {\n\t\t\tcase \"v1\":\n\t\t\t\tcaddyCtx.Logger().Debug(\"sending proxy protocol header v1\", zap.Any(\"header\", header))\n\t\t\tcase \"v2\":\n\t\t\t\tcaddyCtx.Logger().Debug(\"sending proxy protocol header v2\", zap.Any(\"header\", header))\n\t\t\t}\n\n\t\t\t_, err = header.WriteTo(conn)\n\t\t\tif err != nil {\n\t\t\t\t// identify this error as one that occurred during\n\t\t\t\t// dialing, which can be important when trying 
to\n\t\t\t\t// decide whether to retry a request\n\t\t\t\treturn nil, DialError{err}\n\t\t\t}\n\t\t}\n\n\t\t// if read/write timeouts are configured and this is a TCP connection,\n\t\t// enforce the timeouts by wrapping the connection with our own type\n\t\tif tcpConn, ok := conn.(*net.TCPConn); ok && (h.ReadTimeout > 0 || h.WriteTimeout > 0) {\n\t\t\tconn = &tcpRWTimeoutConn{\n\t\t\t\tTCPConn:      tcpConn,\n\t\t\t\treadTimeout:  time.Duration(h.ReadTimeout),\n\t\t\t\twriteTimeout: time.Duration(h.WriteTimeout),\n\t\t\t\tlogger:       caddyCtx.Logger(),\n\t\t\t}\n\t\t}\n\n\t\treturn conn, nil\n\t}\n\n\t// negotiate any HTTP/SOCKS proxy for the HTTP transport\n\tproxy := http.ProxyFromEnvironment\n\tif h.ForwardProxyURL != \"\" {\n\t\tcaddyCtx.Logger().Warn(\"forward_proxy_url is deprecated; use network_proxy instead\")\n\t\tu := network.ProxyFromURL{URL: h.ForwardProxyURL}\n\t\th.NetworkProxyRaw = caddyconfig.JSONModuleObject(u, \"from\", \"url\", nil)\n\t}\n\tif len(h.NetworkProxyRaw) != 0 {\n\t\tproxyMod, err := caddyCtx.LoadModule(h, \"NetworkProxyRaw\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to load network_proxy module: %v\", err)\n\t\t}\n\t\tif m, ok := proxyMod.(caddy.ProxyFuncProducer); ok {\n\t\t\tproxy = m.ProxyFunc()\n\t\t} else {\n\t\t\treturn nil, fmt.Errorf(\"network_proxy module is not `(func(*http.Request) (*url.URL, error))`\")\n\t\t}\n\t}\n\t// we need to keep track of whether a proxy is used for a request\n\tproxyWrapper := func(req *http.Request) (*url.URL, error) {\n\t\tif proxy == nil {\n\t\t\treturn nil, nil\n\t\t}\n\t\tu, err := proxy(req)\n\t\tif u == nil || err != nil {\n\t\t\treturn u, err\n\t\t}\n\t\t// there must be a proxy for this request\n\t\tcaddyhttp.SetVar(req.Context(), proxyVarKey, u)\n\t\treturn u, nil\n\t}\n\n\trt := &http.Transport{\n\t\tProxy:                  proxyWrapper,\n\t\tDialContext:            dialContext,\n\t\tMaxConnsPerHost:        h.MaxConnsPerHost,\n\t\tResponseHeaderTimeout:  
time.Duration(h.ResponseHeaderTimeout),\n\t\tExpectContinueTimeout:  time.Duration(h.ExpectContinueTimeout),\n\t\tMaxResponseHeaderBytes: h.MaxResponseHeaderSize,\n\t\tWriteBufferSize:        h.WriteBufferSize,\n\t\tReadBufferSize:         h.ReadBufferSize,\n\t}\n\n\tif h.TLS != nil {\n\t\trt.TLSHandshakeTimeout = time.Duration(h.TLS.HandshakeTimeout)\n\t\tvar err error\n\t\trt.TLSClientConfig, err = h.TLS.MakeTLSClientConfig(caddyCtx)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"making TLS client config: %v\", err)\n\t\t}\n\n\t\tserverNameHasPlaceholder := strings.Contains(h.TLS.ServerName, \"{\")\n\n\t\t// We need to use custom DialTLSContext if:\n\t\t// 1. ServerName has a placeholder that needs to be replaced at request-time, OR\n\t\t// 2. ProxyProtocol is enabled, because req.URL.Host is modified to include\n\t\t//    client address info with \"->\" separator which breaks Go's address parsing\n\t\tif serverNameHasPlaceholder || h.ProxyProtocol != \"\" {\n\t\t\trt.DialTLSContext = func(ctx context.Context, network, addr string) (net.Conn, error) {\n\t\t\t\t// reuses the dialer from above to establish a plaintext connection\n\t\t\t\tconn, err := dialContext(ctx, network, addr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\n\t\t\t\t// but add our own handshake logic\n\t\t\t\ttlsConfig := rt.TLSClientConfig.Clone()\n\t\t\t\tif serverNameHasPlaceholder {\n\t\t\t\t\trepl := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\t\t\t\t\ttlsConfig.ServerName = repl.ReplaceAll(tlsConfig.ServerName, \"\")\n\t\t\t\t}\n\n\t\t\t\t// h1 only\n\t\t\t\tif caddyhttp.GetVar(ctx, tlsH1OnlyVarKey) == true {\n\t\t\t\t\t// stdlib does this\n\t\t\t\t\t// https://github.com/golang/go/blob/4837fbe4145cd47b43eed66fee9eed9c2b988316/src/net/http/transport.go#L1701\n\t\t\t\t\ttlsConfig.NextProtos = nil\n\t\t\t\t}\n\n\t\t\t\ttlsConn := tls.Client(conn, tlsConfig)\n\n\t\t\t\t// complete the handshake before returning the connection\n\t\t\t\tif rt.TLSHandshakeTimeout 
!= 0 {\n\t\t\t\t\tvar cancel context.CancelFunc\n\t\t\t\t\tctx, cancel = context.WithTimeoutCause(ctx, rt.TLSHandshakeTimeout, fmt.Errorf(\"HTTP transport TLS handshake %ds timeout\", int(rt.TLSHandshakeTimeout.Seconds())))\n\t\t\t\t\tdefer cancel()\n\t\t\t\t}\n\t\t\t\terr = tlsConn.HandshakeContext(ctx)\n\t\t\t\tif err != nil {\n\t\t\t\t\t_ = tlsConn.Close()\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\treturn tlsConn, nil\n\t\t\t}\n\t\t}\n\t}\n\n\tif h.KeepAlive != nil {\n\t\t// according to https://pkg.go.dev/net#Dialer.KeepAliveConfig,\n\t\t// KeepAlive is ignored if KeepAliveConfig.Enable is true.\n\t\t// If configured to 0, a system-dependent default is used.\n\t\t// To disable tcp keepalive, choose a negative value,\n\t\t// so KeepAliveConfig.Enable is false and KeepAlive is negative.\n\n\t\t// This is different from http keepalive where a tcp connection\n\t\t// can transfer multiple http requests/responses.\n\t\tdialer.KeepAlive = time.Duration(h.KeepAlive.ProbeInterval)\n\t\tdialer.KeepAliveConfig = net.KeepAliveConfig{\n\t\t\tEnable:   h.KeepAlive.ProbeInterval > 0,\n\t\t\tInterval: time.Duration(h.KeepAlive.ProbeInterval),\n\t\t}\n\t\tif h.KeepAlive.Enabled != nil {\n\t\t\trt.DisableKeepAlives = !*h.KeepAlive.Enabled\n\t\t}\n\t\trt.MaxIdleConns = h.KeepAlive.MaxIdleConns\n\t\trt.MaxIdleConnsPerHost = h.KeepAlive.MaxIdleConnsPerHost\n\t\trt.IdleConnTimeout = time.Duration(h.KeepAlive.IdleConnTimeout)\n\t}\n\n\tif h.Compression != nil {\n\t\trt.DisableCompression = !*h.Compression\n\t}\n\n\t// configure HTTP/3 transport if enabled; however, this does not\n\t// automatically fall back to lower versions like most web browsers\n\t// do (that'd add latency and complexity; besides, we expect that\n\t// site owners control the backends), so it must be exclusive\n\tif len(h.Versions) == 1 && h.Versions[0] == \"3\" {\n\t\th.h3Transport = new(http3.Transport)\n\t\tif h.TLS != nil {\n\t\t\tvar err error\n\t\t\th.h3Transport.TLSClientConfig, err = 
h.TLS.MakeTLSClientConfig(caddyCtx)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"making TLS client config for HTTP/3 transport: %v\", err)\n\t\t\t}\n\t\t}\n\t} else if len(h.Versions) > 1 && slices.Contains(h.Versions, \"3\") {\n\t\treturn nil, fmt.Errorf(\"if HTTP/3 is enabled to the upstream, no other HTTP versions are supported\")\n\t}\n\n\t// if h2/c is enabled, configure it explicitly\n\tif slices.Contains(h.Versions, \"2\") || slices.Contains(h.Versions, \"h2c\") {\n\t\tif err := http2.ConfigureTransport(rt); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\t// DisableCompression from h2 is configured by http2.ConfigureTransport\n\t\t// Likewise, DisableKeepAlives from h1 is used too.\n\n\t\t// Protocols field is only used when the request is not using TLS,\n\t\t// http1/2 over tls is still allowed\n\t\tif slices.Contains(h.Versions, \"h2c\") {\n\t\t\trt.Protocols = new(http.Protocols)\n\t\t\trt.Protocols.SetUnencryptedHTTP2(true)\n\t\t\trt.Protocols.SetHTTP1(false)\n\t\t}\n\t}\n\n\treturn rt, nil\n}\n\n// RequestHeaderOps implements TransportHeaderOpsProvider. It returns header\n// operations for requests when the transport's configuration indicates they\n// should be applied. In particular, when TLS is enabled for this transport,\n// return an operation to set the Host header to the upstream host:port\n// placeholder so HTTPS upstreams get the proper Host by default.\n//\n// Note: this is a provision-time hook; the Handler will call this during\n// its Provision and cache the resulting HeaderOps. The HeaderOps are\n// applied per-request (so placeholders are expanded at request time).\nfunc (h *HTTPTransport) RequestHeaderOps() *headers.HeaderOps {\n\t// If TLS is not configured for this transport, don't inject Host\n\t// defaults. 
TLS being non-nil indicates HTTPS to the upstream.\n\tif h.TLS == nil {\n\t\treturn nil\n\t}\n\treturn &headers.HeaderOps{\n\t\tSet: http.Header{\n\t\t\t\"Host\": []string{\"{http.reverse_proxy.upstream.hostport}\"},\n\t\t},\n\t}\n}\n\n// RoundTrip implements http.RoundTripper.\nfunc (h *HTTPTransport) RoundTrip(req *http.Request) (*http.Response, error) {\n\th.SetScheme(req)\n\n\t// use HTTP/3 if enabled (TODO: This is EXPERIMENTAL)\n\tif h.h3Transport != nil {\n\t\treturn h.h3Transport.RoundTrip(req)\n\t}\n\n\treturn h.Transport.RoundTrip(req)\n}\n\n// SetScheme ensures that the outbound request req\n// has the scheme set in its URL; the underlying\n// http.Transport requires a scheme to be set.\n//\n// This method may be used by other transport modules\n// that wrap/use this one.\nfunc (h *HTTPTransport) SetScheme(req *http.Request) {\n\tif req.URL.Scheme != \"\" {\n\t\treturn\n\t}\n\tif h.shouldUseTLS(req) {\n\t\treq.URL.Scheme = \"https\"\n\t} else {\n\t\treq.URL.Scheme = \"http\"\n\t}\n}\n\n// shouldUseTLS returns true if TLS should be used for req.\nfunc (h *HTTPTransport) shouldUseTLS(req *http.Request) bool {\n\tif h.TLS == nil {\n\t\treturn false\n\t}\n\n\tport := req.URL.Port()\n\treturn !slices.Contains(h.TLS.ExceptPorts, port)\n}\n\n// TLSEnabled returns true if TLS is enabled.\nfunc (h HTTPTransport) TLSEnabled() bool {\n\treturn h.TLS != nil\n}\n\n// EnableTLS enables TLS on the transport.\nfunc (h *HTTPTransport) EnableTLS(base *TLSConfig) error {\n\th.TLS = base\n\treturn nil\n}\n\n// EnableH2C enables H2C (HTTP/2 over Cleartext) on the transport.\nfunc (h *HTTPTransport) EnableH2C() error {\n\th.Versions = []string{\"h2c\", \"2\"}\n\treturn nil\n}\n\n// OverrideHealthCheckScheme overrides the scheme of the given URL\n// used for health checks.\nfunc (h HTTPTransport) OverrideHealthCheckScheme(base *url.URL, port string) {\n\t// if tls is enabled and the port isn't in the except list, use HTTPS\n\tif h.TLSEnabled() && 
!slices.Contains(h.TLS.ExceptPorts, port) {\n\t\tbase.Scheme = \"https\"\n\t}\n}\n\n// ProxyProtocolEnabled returns true if proxy protocol is enabled.\nfunc (h HTTPTransport) ProxyProtocolEnabled() bool {\n\treturn h.ProxyProtocol != \"\"\n}\n\n// Cleanup implements caddy.CleanerUpper and closes any idle connections.\nfunc (h HTTPTransport) Cleanup() error {\n\tif h.Transport == nil {\n\t\treturn nil\n\t}\n\th.Transport.CloseIdleConnections()\n\treturn nil\n}\n\n// TLSConfig holds configuration related to the TLS configuration for the\n// transport/client.\ntype TLSConfig struct {\n\t// Certificate authority module which provides the certificate pool of trusted certificates\n\tCARaw json.RawMessage `json:\"ca,omitempty\" caddy:\"namespace=tls.ca_pool.source inline_key=provider\"`\n\n\t// Deprecated: Use the `ca` field with the `tls.ca_pool.source.inline` module instead.\n\t// Optional list of base64-encoded DER-encoded CA certificates to trust.\n\tRootCAPool []string `json:\"root_ca_pool,omitempty\"`\n\n\t// Deprecated: Use the `ca` field with the `tls.ca_pool.source.file` module instead.\n\t// List of PEM-encoded CA certificate files to add to the same trust\n\t// store as RootCAPool (or root_ca_pool in the JSON).\n\tRootCAPEMFiles []string `json:\"root_ca_pem_files,omitempty\"`\n\n\t// PEM-encoded client certificate filename to present to servers.\n\tClientCertificateFile string `json:\"client_certificate_file,omitempty\"`\n\n\t// PEM-encoded key to use with the client certificate.\n\tClientCertificateKeyFile string `json:\"client_certificate_key_file,omitempty\"`\n\n\t// If specified, Caddy will use and automate a client certificate\n\t// with this subject name.\n\tClientCertificateAutomate string `json:\"client_certificate_automate,omitempty\"`\n\n\t// If true, TLS verification of server certificates will be disabled.\n\t// This is insecure and may be removed in the future. 
Do not use this\n\t// option except in testing or local development environments.\n\tInsecureSkipVerify bool `json:\"insecure_skip_verify,omitempty\"`\n\n\t// The duration to allow a TLS handshake to a server. Default: No timeout.\n\tHandshakeTimeout caddy.Duration `json:\"handshake_timeout,omitempty\"`\n\n\t// The server name used when verifying the certificate received in the TLS\n\t// handshake. By default, this will use the upstream address' host part.\n\t// You only need to override this if your upstream address does not match the\n\t// certificate the upstream is likely to use. For example if the upstream\n\t// address is an IP address, then you would need to configure this to the\n\t// hostname being served by the upstream server. Currently, this does not\n\t// support placeholders because the TLS config is not provisioned on each\n\t// connection, so a static value must be used.\n\tServerName string `json:\"server_name,omitempty\"`\n\n\t// TLS renegotiation level. TLS renegotiation is the act of performing\n\t// subsequent handshakes on a connection after the first.\n\t// The level can be:\n\t//  - \"never\": (the default) disables renegotiation.\n\t//  - \"once\": allows a remote server to request renegotiation once per connection.\n\t//  - \"freely\": allows a remote server to repeatedly request renegotiation.\n\tRenegotiation string `json:\"renegotiation,omitempty\"`\n\n\t// Skip TLS ports specifies a list of upstream ports on which TLS should not be\n\t// attempted even if it is configured. Handy when using dynamic upstreams that\n\t// return HTTP and HTTPS endpoints too.\n\t// When specified, TLS will automatically be configured on the transport.\n\t// The value can be a list of any valid tcp port numbers, default empty.\n\tExceptPorts []string `json:\"except_ports,omitempty\"`\n\n\t// The list of elliptic curves to support. 
Caddy's\n\t// defaults are modern and secure.\n\tCurves []string `json:\"curves,omitempty\"`\n}\n\n// MakeTLSClientConfig returns a tls.Config usable by a client to a backend.\n// If there is no custom TLS configuration, a nil config may be returned.\nfunc (t *TLSConfig) MakeTLSClientConfig(ctx caddy.Context) (*tls.Config, error) {\n\tcfg := new(tls.Config)\n\n\t// client auth\n\tif t.ClientCertificateFile != \"\" && t.ClientCertificateKeyFile == \"\" {\n\t\treturn nil, fmt.Errorf(\"client_certificate_file specified without client_certificate_key_file\")\n\t}\n\tif t.ClientCertificateFile == \"\" && t.ClientCertificateKeyFile != \"\" {\n\t\treturn nil, fmt.Errorf(\"client_certificate_key_file specified without client_certificate_file\")\n\t}\n\tif t.ClientCertificateFile != \"\" && t.ClientCertificateKeyFile != \"\" {\n\t\tcert, err := tls.LoadX509KeyPair(t.ClientCertificateFile, t.ClientCertificateKeyFile)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"loading client certificate key pair: %v\", err)\n\t\t}\n\t\tcfg.Certificates = []tls.Certificate{cert}\n\t}\n\tif t.ClientCertificateAutomate != \"\" {\n\t\t// TODO: use or enable ctx.IdentityCredentials() ...\n\t\ttlsAppIface, err := ctx.App(\"tls\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"getting tls app: %v\", err)\n\t\t}\n\t\ttlsApp := tlsAppIface.(*caddytls.TLS)\n\t\terr = tlsApp.Manage(map[string]struct{}{t.ClientCertificateAutomate: {}})\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"managing client certificate: %v\", err)\n\t\t}\n\t\tcfg.GetClientCertificate = func(cri *tls.CertificateRequestInfo) (*tls.Certificate, error) {\n\t\t\tcerts := caddytls.AllMatchingCertificates(t.ClientCertificateAutomate)\n\t\t\tvar err error\n\t\t\tfor _, cert := range certs {\n\t\t\t\tcertCertificate := cert.Certificate // avoid taking address of iteration variable (gosec warning)\n\t\t\t\terr = cri.SupportsCertificate(&certCertificate)\n\t\t\t\tif err == nil {\n\t\t\t\t\treturn &cert.Certificate, 
nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif err == nil {\n\t\t\t\terr = fmt.Errorf(\"no client certificate found for automate name: %s\", t.ClientCertificateAutomate)\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\t// trusted root CAs\n\tif len(t.RootCAPool) > 0 || len(t.RootCAPEMFiles) > 0 {\n\t\tctx.Logger().Warn(\"root_ca_pool and root_ca_pem_files are deprecated. Use one of the tls.ca_pool.source modules instead\")\n\t\trootPool := x509.NewCertPool()\n\t\tfor _, encodedCACert := range t.RootCAPool {\n\t\t\tcaCert, err := decodeBase64DERCert(encodedCACert)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"parsing CA certificate: %v\", err)\n\t\t\t}\n\t\t\trootPool.AddCert(caCert)\n\t\t}\n\t\tfor _, pemFile := range t.RootCAPEMFiles {\n\t\t\tpemData, err := os.ReadFile(pemFile)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"failed reading ca cert: %v\", err)\n\t\t\t}\n\t\t\trootPool.AppendCertsFromPEM(pemData)\n\t\t}\n\t\tcfg.RootCAs = rootPool\n\t}\n\n\tif t.CARaw != nil {\n\t\tif len(t.RootCAPool) > 0 || len(t.RootCAPEMFiles) > 0 {\n\t\t\treturn nil, fmt.Errorf(\"conflicting config for Root CA pool\")\n\t\t}\n\t\tcaRaw, err := ctx.LoadModule(t, \"CARaw\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to load ca module: %v\", err)\n\t\t}\n\t\tca, ok := caRaw.(caddytls.CA)\n\t\tif !ok {\n\t\t\treturn nil, fmt.Errorf(\"CA module '%s' is not a certificate pool provider\", ca)\n\t\t}\n\t\tcfg.RootCAs = ca.CertPool()\n\t}\n\n\t// Renegotiation\n\tswitch t.Renegotiation {\n\tcase \"never\", \"\":\n\t\tcfg.Renegotiation = tls.RenegotiateNever\n\tcase \"once\":\n\t\tcfg.Renegotiation = tls.RenegotiateOnceAsClient\n\tcase \"freely\":\n\t\tcfg.Renegotiation = tls.RenegotiateFreelyAsClient\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"invalid TLS renegotiation level: %v\", t.Renegotiation)\n\t}\n\n\t// override for the server name used to verify the TLS handshake\n\tcfg.ServerName = t.ServerName\n\n\t// throw all security out the 
window\n\tcfg.InsecureSkipVerify = t.InsecureSkipVerify\n\n\tcurvesAdded := make(map[tls.CurveID]struct{})\n\tfor _, curveName := range t.Curves {\n\t\tcurveID := caddytls.SupportedCurves[curveName]\n\t\tif _, ok := curvesAdded[curveID]; !ok {\n\t\t\tcurvesAdded[curveID] = struct{}{}\n\t\t\tcfg.CurvePreferences = append(cfg.CurvePreferences, curveID)\n\t\t}\n\t}\n\n\t// only return a config if it's not empty\n\tif reflect.DeepEqual(cfg, new(tls.Config)) {\n\t\treturn nil, nil\n\t}\n\n\treturn cfg, nil\n}\n\n// KeepAlive holds configuration pertaining to HTTP Keep-Alive.\ntype KeepAlive struct {\n\t// Whether HTTP Keep-Alive is enabled. Default: `true`\n\tEnabled *bool `json:\"enabled,omitempty\"`\n\n\t// How often to probe for liveness. Default: `30s`.\n\tProbeInterval caddy.Duration `json:\"probe_interval,omitempty\"`\n\n\t// Maximum number of idle connections. Default: `0`, which means no limit.\n\tMaxIdleConns int `json:\"max_idle_conns,omitempty\"`\n\n\t// Maximum number of idle connections per host. Default: `32`.\n\tMaxIdleConnsPerHost int `json:\"max_idle_conns_per_host,omitempty\"`\n\n\t// How long connections should be kept alive when idle. 
Default: `2m`.\n\tIdleConnTimeout caddy.Duration `json:\"idle_timeout,omitempty\"`\n}\n\n// tcpRWTimeoutConn enforces read/write timeouts for a TCP connection.\n// If it fails to set deadlines, the error is logged but does not abort\n// the read/write attempt (ignoring the error is consistent with what\n// the standard library does: https://github.com/golang/go/blob/c5da4fb7ac5cb7434b41fc9a1df3bee66c7f1a4d/src/net/http/server.go#L981-L986)\ntype tcpRWTimeoutConn struct {\n\t*net.TCPConn\n\treadTimeout, writeTimeout time.Duration\n\tlogger                    *zap.Logger\n}\n\nfunc (c *tcpRWTimeoutConn) Read(b []byte) (int, error) {\n\tif c.readTimeout > 0 {\n\t\terr := c.TCPConn.SetReadDeadline(time.Now().Add(c.readTimeout))\n\t\tif err != nil {\n\t\t\tif ce := c.logger.Check(zapcore.ErrorLevel, \"failed to set read deadline\"); ce != nil {\n\t\t\t\tce.Write(zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\treturn c.TCPConn.Read(b)\n}\n\nfunc (c *tcpRWTimeoutConn) Write(b []byte) (int, error) {\n\tif c.writeTimeout > 0 {\n\t\terr := c.TCPConn.SetWriteDeadline(time.Now().Add(c.writeTimeout))\n\t\tif err != nil {\n\t\t\tif ce := c.logger.Check(zapcore.ErrorLevel, \"failed to set write deadline\"); ce != nil {\n\t\t\t\tce.Write(zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\treturn c.TCPConn.Write(b)\n}\n\n// decodeBase64DERCert base64-decodes, then DER-decodes, certStr.\nfunc decodeBase64DERCert(certStr string) (*x509.Certificate, error) {\n\t// decode base64\n\tderBytes, err := base64.StdEncoding.DecodeString(certStr)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// parse the DER-encoded certificate\n\treturn x509.ParseCertificate(derBytes)\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner                   = (*HTTPTransport)(nil)\n\t_ http.RoundTripper                   = (*HTTPTransport)(nil)\n\t_ caddy.CleanerUpper                  = (*HTTPTransport)(nil)\n\t_ TLSTransport                        = (*HTTPTransport)(nil)\n\t_ H2CTransport                        = 
(*HTTPTransport)(nil)\n\t_ HealthCheckSchemeOverriderTransport = (*HTTPTransport)(nil)\n\t_ ProxyProtocolTransport              = (*HTTPTransport)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/httptransport_test.go",
    "content": "package reverseproxy\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc TestHTTPTransportUnmarshalCaddyFileWithCaPools(t *testing.T) {\n\tconst test_der_1 = `MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==`\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname              string\n\t\targs              args\n\t\texpectedTLSConfig TLSConfig\n\t\twantErr           bool\n\t}{\n\t\t{\n\t\t\tname: \"tls_trust_pool without a module argument returns an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(\n\t\t\t\t\t`http {\n\t\t\t\t\ttls_trust_pool\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"providing both 'tls_trust_pool' and 'tls_trusted_ca_certs' returns an 
error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(\n\t\t\t\t\t`http {\n\t\t\t\t\ttls_trust_pool inline %s\n\t\t\t\t\ttls_trusted_ca_certs %s\n\t\t\t\t}`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"providing 'tls_trust_pool' in block form with 'tls_trusted_ca_certs' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(\n\t\t\t\t\t`http {\n\t\t\t\t\ttls_trust_pool inline {\n\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t}\n\t\t\t\t\ttls_trusted_ca_certs %s\n\t\t\t\t}`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"using 'inline' tls_trust_pool loads the module successfully\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(\n\t\t\t\t\t`http {\n\t\t\t\t\t\ttls_trust_pool inline {\n\t\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t`, test_der_1)),\n\t\t\t},\n\t\t\texpectedTLSConfig: TLSConfig{CARaw: json.RawMessage(fmt.Sprintf(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"%s\"]}`, test_der_1))},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'tls_trusted_ca_certs' before 'tls_trust_pool' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(\n\t\t\t\t\t`http {\n\t\t\t\t\t\ttls_trusted_ca_certs %s\n\t\t\t\t\t\ttls_trust_pool inline {\n\t\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t\t}\n\t\t\t\t}`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tht := &HTTPTransport{}\n\t\t\tif err := ht.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"HTTPTransport.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expectedTLSConfig, ht.TLS) {\n\t\t\t\tt.Errorf(\"HTTPTransport.UnmarshalCaddyfile() = %v, want %v\", ht.TLS, tt.expectedTLSConfig)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestHTTPTransport_RequestHeaderOps_TLS(t *testing.T) {\n\tvar ht HTTPTransport\n\t// When TLS is nil, expect no header ops\n\tif ops := ht.RequestHeaderOps(); ops != nil {\n\t\tt.Fatalf(\"expected nil HeaderOps when TLS is nil, got: %#v\", ops)\n\t}\n\n\t// When TLS is configured, expect a HeaderOps that sets Host\n\tht.TLS = &TLSConfig{}\n\tops := ht.RequestHeaderOps()\n\tif ops == nil {\n\t\tt.Fatal(\"expected non-nil HeaderOps when TLS is set\")\n\t}\n\tif ops.Set == nil {\n\t\tt.Fatalf(\"expected ops.Set to be non-nil, got nil\")\n\t}\n\tif got := ops.Set.Get(\"Host\"); got != \"{http.reverse_proxy.upstream.hostport}\" {\n\t\tt.Fatalf(\"unexpected Host value; want placeholder, got: %s\", got)\n\t}\n}\n\n// TestHTTPTransport_DialTLSContext_ProxyProtocol verifies that when TLS and\n// ProxyProtocol are both enabled, DialTLSContext is set. This is critical because\n// ProxyProtocol modifies req.URL.Host to include client info with a \"->\" separator\n// (e.g., \"[2001:db8::1]:12345->127.0.0.1:443\"), which breaks Go's address parsing.\n// Without a custom DialTLSContext, Go's HTTP library would fail with\n// \"too many colons in address\" when trying to parse the mangled host.\nfunc TestHTTPTransport_DialTLSContext_ProxyProtocol(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\ttests := []struct {\n\t\tname                 string\n\t\ttls                  *TLSConfig\n\t\tproxyProtocol        string\n\t\texpectDialTLSContext bool\n\t}{\n\t\t{\n\t\t\tname:                 \"no TLS, no proxy protocol\",\n\t\t\ttls:                  nil,\n\t\t\tproxyProtocol:        \"\",\n\t\t\texpectDialTLSContext: false,\n\t\t},\n\t\t{\n\t\t\tname:                 \"TLS without proxy protocol\",\n\t\t\ttls:                  &TLSConfig{},\n\t\t\tproxyProtocol:        \"\",\n\t\t\texpectDialTLSContext: false,\n\t\t},\n\t\t{\n\t\t\tname:                 \"TLS with proxy protocol v1\",\n\t\t\ttls:                  &TLSConfig{},\n\t\t\tproxyProtocol:        \"v1\",\n\t\t\texpectDialTLSContext: true,\n\t\t},\n\t\t{\n\t\t\tname:                 \"TLS with proxy protocol v2\",\n\t\t\ttls:                  &TLSConfig{},\n\t\t\tproxyProtocol:        \"v2\",\n\t\t\texpectDialTLSContext: true,\n\t\t},\n\t\t{\n\t\t\tname:                 \"TLS with placeholder ServerName\",\n\t\t\ttls:                  &TLSConfig{ServerName: \"{http.request.host}\"},\n\t\t\tproxyProtocol:        \"\",\n\t\t\texpectDialTLSContext: true,\n\t\t},\n\t\t{\n\t\t\tname:                 \"TLS with placeholder ServerName and proxy protocol\",\n\t\t\ttls:                  &TLSConfig{ServerName: \"{http.request.host}\"},\n\t\t\tproxyProtocol:        \"v2\",\n\t\t\texpectDialTLSContext: true,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tht := &HTTPTransport{\n\t\t\t\tTLS:           tt.tls,\n\t\t\t\tProxyProtocol: tt.proxyProtocol,\n\t\t\t}\n\n\t\t\trt, err := ht.NewTransport(ctx)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"NewTransport() error = %v\", err)\n\t\t\t}\n\n\t\t\thasDialTLSContext := rt.DialTLSContext != nil\n\t\t\tif hasDialTLSContext != tt.expectDialTLSContext {\n\t\t\t\tt.Errorf(\"DialTLSContext set = %v, want %v\", hasDialTLSContext, tt.expectDialTLSContext)\n\t\t\t}\n\t\t})\n\t}\n}\n\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/metrics.go",
    "content": "package reverseproxy\n\nimport (\n\t\"errors\"\n\t\"runtime/debug\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/prometheus/client_golang/prometheus\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nvar reverseProxyMetrics = struct {\n\tonce             sync.Once\n\tupstreamsHealthy *prometheus.GaugeVec\n\tlogger           *zap.Logger\n}{}\n\nfunc initReverseProxyMetrics(handler *Handler, registry *prometheus.Registry) {\n\tconst ns, sub = \"caddy\", \"reverse_proxy\"\n\n\tupstreamsLabels := []string{\"upstream\"}\n\treverseProxyMetrics.once.Do(func() {\n\t\treverseProxyMetrics.upstreamsHealthy = prometheus.NewGaugeVec(prometheus.GaugeOpts{\n\t\t\tNamespace: ns,\n\t\t\tSubsystem: sub,\n\t\t\tName:      \"upstreams_healthy\",\n\t\t\tHelp:      \"Health status of reverse proxy upstreams.\",\n\t\t}, upstreamsLabels)\n\t})\n\n\t// duplicate registration could happen if multiple sites with reverse proxy are configured; so ignore the error because\n\t// there's no good way to capture having multiple sites with reverse proxy. 
If this happens, the second registration\n\t// attempt returns an AlreadyRegisteredError, which is safely ignored.\n\tif err := registry.Register(reverseProxyMetrics.upstreamsHealthy); err != nil &&\n\t\t!errors.Is(err, prometheus.AlreadyRegisteredError{\n\t\t\tExistingCollector: reverseProxyMetrics.upstreamsHealthy,\n\t\t\tNewCollector:      reverseProxyMetrics.upstreamsHealthy,\n\t\t}) {\n\t\tpanic(err)\n\t}\n\n\treverseProxyMetrics.logger = handler.logger.Named(\"reverse_proxy.metrics\")\n}\n\ntype metricsUpstreamsHealthyUpdater struct {\n\thandler *Handler\n}\n\nfunc newMetricsUpstreamsHealthyUpdater(handler *Handler, ctx caddy.Context) *metricsUpstreamsHealthyUpdater {\n\tinitReverseProxyMetrics(handler, ctx.GetMetricsRegistry())\n\treverseProxyMetrics.upstreamsHealthy.Reset()\n\n\treturn &metricsUpstreamsHealthyUpdater{handler}\n}\n\nfunc (m *metricsUpstreamsHealthyUpdater) init() {\n\tgo func() {\n\t\tdefer func() {\n\t\t\tif err := recover(); err != nil {\n\t\t\t\tif c := reverseProxyMetrics.logger.Check(zapcore.ErrorLevel, \"upstreams healthy metrics updater panicked\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.Any(\"error\", err),\n\t\t\t\t\t\tzap.ByteString(\"stack\", debug.Stack()),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\n\t\tm.update()\n\n\t\tticker := time.NewTicker(10 * time.Second)\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ticker.C:\n\t\t\t\tm.update()\n\t\t\tcase <-m.handler.ctx.Done():\n\t\t\t\tticker.Stop()\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n}\n\nfunc (m *metricsUpstreamsHealthyUpdater) update() {\n\tfor _, upstream := range m.handler.Upstreams {\n\t\tlabels := prometheus.Labels{\"upstream\": upstream.Dial}\n\n\t\tgaugeValue := 0.0\n\t\tif upstream.Healthy() {\n\t\t\tgaugeValue = 1.0\n\t\t}\n\n\t\treverseProxyMetrics.upstreamsHealthy.With(labels).Set(gaugeValue)\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/passive_health_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// newPassiveHandler builds a minimal Handler with passive health checks\n// configured and a live caddy.Context so the fail-forgetter goroutine can\n// be cancelled cleanly. The caller must call cancel() when done.\nfunc newPassiveHandler(t *testing.T, maxFails int, failDuration time.Duration) (*Handler, context.CancelFunc) {\n\tt.Helper()\n\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\th := &Handler{\n\t\tctx: caddyCtx,\n\t\tHealthChecks: &HealthChecks{\n\t\t\tPassive: &PassiveHealthChecks{\n\t\t\t\tMaxFails:     maxFails,\n\t\t\t\tFailDuration: caddy.Duration(failDuration),\n\t\t\t},\n\t\t},\n\t}\n\treturn h, cancel\n}\n\n// provisionedStaticUpstream creates a static upstream, registers it in the\n// UsagePool, and returns a cleanup func that removes it from the pool.\nfunc provisionedStaticUpstream(t *testing.T, h *Handler, addr string) (*Upstream, func()) {\n\tt.Helper()\n\tu := &Upstream{Dial: addr}\n\th.provisionUpstream(u, false)\n\treturn u, func() { _, _ = hosts.Delete(addr) }\n}\n\n// provisionedDynamicUpstream creates a dynamic upstream, registers it in\n// dynamicHosts, and returns a cleanup func that removes it.\nfunc provisionedDynamicUpstream(t *testing.T, 
h *Handler, addr string) (*Upstream, func()) {\n\tt.Helper()\n\tu := &Upstream{Dial: addr}\n\th.provisionUpstream(u, true)\n\treturn u, func() {\n\t\tdynamicHostsMu.Lock()\n\t\tdelete(dynamicHosts, addr)\n\t\tdynamicHostsMu.Unlock()\n\t}\n}\n\n// --- countFailure behaviour ---\n\n// TestCountFailureNoopWhenNoHealthChecks verifies that countFailure is a no-op\n// when HealthChecks is nil.\nfunc TestCountFailureNoopWhenNoHealthChecks(t *testing.T) {\n\tresetDynamicHosts()\n\th := &Handler{}\n\tu := &Upstream{Dial: \"10.1.0.1:80\", Host: new(Host)}\n\n\th.countFailure(u)\n\n\tif u.Host.Fails() != 0 {\n\t\tt.Errorf(\"expected 0 fails with no HealthChecks config, got %d\", u.Host.Fails())\n\t}\n}\n\n// TestCountFailureNoopWhenZeroDuration verifies that countFailure is a no-op\n// when FailDuration is 0 (the zero value disables passive checks).\nfunc TestCountFailureNoopWhenZeroDuration(t *testing.T) {\n\tresetDynamicHosts()\n\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\th := &Handler{\n\t\tctx: caddyCtx,\n\t\tHealthChecks: &HealthChecks{\n\t\t\tPassive: &PassiveHealthChecks{MaxFails: 1, FailDuration: 0},\n\t\t},\n\t}\n\tu := &Upstream{Dial: \"10.1.0.2:80\", Host: new(Host)}\n\n\th.countFailure(u)\n\n\tif u.Host.Fails() != 0 {\n\t\tt.Errorf(\"expected 0 fails with zero FailDuration, got %d\", u.Host.Fails())\n\t}\n}\n\n// TestCountFailureIncrementsCount verifies that countFailure increments the\n// fail count on the upstream's Host.\nfunc TestCountFailureIncrementsCount(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\tu := &Upstream{Dial: \"10.1.0.3:80\", Host: new(Host)}\n\n\th.countFailure(u)\n\n\tif u.Host.Fails() != 1 {\n\t\tt.Errorf(\"expected 1 fail after countFailure, got %d\", u.Host.Fails())\n\t}\n}\n\n// TestCountFailureDecrementsAfterDuration verifies that the fail count is\n// decremented back after FailDuration elapses.\nfunc 
TestCountFailureDecrementsAfterDuration(t *testing.T) {\n\tresetDynamicHosts()\n\tconst failDuration = 50 * time.Millisecond\n\th, cancel := newPassiveHandler(t, 2, failDuration)\n\tdefer cancel()\n\tu := &Upstream{Dial: \"10.1.0.4:80\", Host: new(Host)}\n\n\th.countFailure(u)\n\tif u.Host.Fails() != 1 {\n\t\tt.Fatalf(\"expected 1 fail immediately after countFailure, got %d\", u.Host.Fails())\n\t}\n\n\t// Wait long enough for the forgetter goroutine to fire.\n\ttime.Sleep(3 * failDuration)\n\n\tif u.Host.Fails() != 0 {\n\t\tt.Errorf(\"expected fail count to return to 0 after FailDuration, got %d\", u.Host.Fails())\n\t}\n}\n\n// TestCountFailureCancelledContextForgets verifies that cancelling the handler\n// context (simulating a config unload) also triggers the forgetter to run,\n// decrementing the fail count.\nfunc TestCountFailureCancelledContextForgets(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Hour) // very long duration\n\tu := &Upstream{Dial: \"10.1.0.5:80\", Host: new(Host)}\n\n\th.countFailure(u)\n\tif u.Host.Fails() != 1 {\n\t\tt.Fatalf(\"expected 1 fail immediately after countFailure, got %d\", u.Host.Fails())\n\t}\n\n\t// Cancelling the context should cause the forgetter goroutine to exit and\n\t// decrement the count.\n\tcancel()\n\ttime.Sleep(50 * time.Millisecond)\n\n\tif u.Host.Fails() != 0 {\n\t\tt.Errorf(\"expected fail count to be decremented after context cancel, got %d\", u.Host.Fails())\n\t}\n}\n\n// --- static upstream passive health check ---\n\n// TestStaticUpstreamHealthyWithNoFailures verifies that a static upstream with\n// no recorded failures is considered healthy.\nfunc TestStaticUpstreamHealthyWithNoFailures(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\n\tu, cleanup := provisionedStaticUpstream(t, h, \"10.2.0.1:80\")\n\tdefer cleanup()\n\n\tif !u.Healthy() {\n\t\tt.Error(\"upstream with no failures should be 
healthy\")\n\t}\n}\n\n// TestStaticUpstreamUnhealthyAtMaxFails verifies that a static upstream is\n// marked unhealthy once its fail count reaches MaxFails.\nfunc TestStaticUpstreamUnhealthyAtMaxFails(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\n\tu, cleanup := provisionedStaticUpstream(t, h, \"10.2.0.2:80\")\n\tdefer cleanup()\n\n\th.countFailure(u)\n\tif !u.Healthy() {\n\t\tt.Error(\"upstream should still be healthy after 1 of 2 allowed failures\")\n\t}\n\n\th.countFailure(u)\n\tif u.Healthy() {\n\t\tt.Error(\"upstream should be unhealthy after reaching MaxFails=2\")\n\t}\n}\n\n// TestStaticUpstreamRecoversAfterFailDuration verifies that a static upstream\n// returns to healthy once its failures expire.\nfunc TestStaticUpstreamRecoversAfterFailDuration(t *testing.T) {\n\tresetDynamicHosts()\n\tconst failDuration = 50 * time.Millisecond\n\th, cancel := newPassiveHandler(t, 1, failDuration)\n\tdefer cancel()\n\n\tu, cleanup := provisionedStaticUpstream(t, h, \"10.2.0.3:80\")\n\tdefer cleanup()\n\n\th.countFailure(u)\n\tif u.Healthy() {\n\t\tt.Fatal(\"upstream should be unhealthy immediately after MaxFails failure\")\n\t}\n\n\ttime.Sleep(3 * failDuration)\n\n\tif !u.Healthy() {\n\t\tt.Errorf(\"upstream should recover to healthy after FailDuration, Fails=%d\", u.Host.Fails())\n\t}\n}\n\n// TestStaticUpstreamHealthPersistedAcrossReprovisioning verifies that static\n// upstreams share a Host via the UsagePool, so a second call to provisionUpstream\n// for the same address (as happens on config reload) sees the accumulated state.\nfunc TestStaticUpstreamHealthPersistedAcrossReprovisioning(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\n\tu1, cleanup1 := provisionedStaticUpstream(t, h, \"10.2.0.4:80\")\n\tdefer cleanup1()\n\n\th.countFailure(u1)\n\th.countFailure(u1)\n\n\t// Simulate a second handler instance referencing the same 
upstream\n\t// (e.g. after a config reload that keeps the same backend address).\n\tu2, cleanup2 := provisionedStaticUpstream(t, h, \"10.2.0.4:80\")\n\tdefer cleanup2()\n\n\tif u1.Host != u2.Host {\n\t\tt.Fatal(\"expected both Upstream structs to share the same *Host via UsagePool\")\n\t}\n\tif u2.Healthy() {\n\t\tt.Error(\"re-provisioned upstream should still see the prior fail count and be unhealthy\")\n\t}\n}\n\n// --- dynamic upstream passive health check ---\n\n// TestDynamicUpstreamHealthyWithNoFailures verifies that a freshly provisioned\n// dynamic upstream is healthy.\nfunc TestDynamicUpstreamHealthyWithNoFailures(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\n\tu, cleanup := provisionedDynamicUpstream(t, h, \"10.3.0.1:80\")\n\tdefer cleanup()\n\n\tif !u.Healthy() {\n\t\tt.Error(\"dynamic upstream with no failures should be healthy\")\n\t}\n}\n\n// TestDynamicUpstreamUnhealthyAtMaxFails verifies that a dynamic upstream is\n// marked unhealthy once its fail count reaches MaxFails.\nfunc TestDynamicUpstreamUnhealthyAtMaxFails(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\n\tu, cleanup := provisionedDynamicUpstream(t, h, \"10.3.0.2:80\")\n\tdefer cleanup()\n\n\th.countFailure(u)\n\tif !u.Healthy() {\n\t\tt.Error(\"dynamic upstream should still be healthy after 1 of 2 allowed failures\")\n\t}\n\n\th.countFailure(u)\n\tif u.Healthy() {\n\t\tt.Error(\"dynamic upstream should be unhealthy after reaching MaxFails=2\")\n\t}\n}\n\n// TestDynamicUpstreamFailCountPersistedBetweenRequests is the core regression\n// test: it simulates two sequential (non-concurrent) requests to the same\n// dynamic upstream. Before the fix, the UsagePool entry would be deleted\n// between requests, wiping the fail count. 
Now it should survive.\nfunc TestDynamicUpstreamFailCountPersistedBetweenRequests(t *testing.T) {\n\tresetDynamicHosts()\n\th, cancel := newPassiveHandler(t, 2, time.Minute)\n\tdefer cancel()\n\n\t// --- first request ---\n\tu1 := &Upstream{Dial: \"10.3.0.3:80\"}\n\th.provisionUpstream(u1, true)\n\th.countFailure(u1)\n\n\tif u1.Host.Fails() != 1 {\n\t\tt.Fatalf(\"expected 1 fail after first request, got %d\", u1.Host.Fails())\n\t}\n\n\t// Simulate end of first request: no delete from any pool (key difference\n\t// vs. the old behaviour where hosts.Delete was deferred).\n\n\t// --- second request: brand-new *Upstream struct, same dial address ---\n\tu2 := &Upstream{Dial: \"10.3.0.3:80\"}\n\th.provisionUpstream(u2, true)\n\n\tif u1.Host != u2.Host {\n\t\tt.Fatal(\"expected both requests to share the same *Host pointer from dynamicHosts\")\n\t}\n\tif u2.Host.Fails() != 1 {\n\t\tt.Errorf(\"expected fail count to persist across requests, got %d\", u2.Host.Fails())\n\t}\n\n\t// A second failure now tips it over MaxFails=2.\n\th.countFailure(u2)\n\tif u2.Healthy() {\n\t\tt.Error(\"upstream should be unhealthy after accumulated failures across requests\")\n\t}\n\n\t// Cleanup.\n\tdynamicHostsMu.Lock()\n\tdelete(dynamicHosts, \"10.3.0.3:80\")\n\tdynamicHostsMu.Unlock()\n}\n\n// TestDynamicUpstreamRecoveryAfterFailDuration verifies that a dynamic\n// upstream's fail count expires and it returns to healthy.\nfunc TestDynamicUpstreamRecoveryAfterFailDuration(t *testing.T) {\n\tresetDynamicHosts()\n\tconst failDuration = 50 * time.Millisecond\n\th, cancel := newPassiveHandler(t, 1, failDuration)\n\tdefer cancel()\n\n\tu, cleanup := provisionedDynamicUpstream(t, h, \"10.3.0.4:80\")\n\tdefer cleanup()\n\n\th.countFailure(u)\n\tif u.Healthy() {\n\t\tt.Fatal(\"upstream should be unhealthy immediately after MaxFails failure\")\n\t}\n\n\ttime.Sleep(3 * failDuration)\n\n\t// Re-provision (as a new request would) to get fresh *Upstream with policy set.\n\tu2 := &Upstream{Dial: 
\"10.3.0.4:80\"}\n\th.provisionUpstream(u2, true)\n\n\tif !u2.Healthy() {\n\t\tt.Errorf(\"dynamic upstream should recover to healthy after FailDuration, Fails=%d\", u2.Host.Fails())\n\t}\n}\n\n// TestDynamicUpstreamMaxRequestsFromUnhealthyRequestCount verifies that\n// UnhealthyRequestCount is copied into MaxRequests so Full() works correctly.\nfunc TestDynamicUpstreamMaxRequestsFromUnhealthyRequestCount(t *testing.T) {\n\tresetDynamicHosts()\n\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\th := &Handler{\n\t\tctx: caddyCtx,\n\t\tHealthChecks: &HealthChecks{\n\t\t\tPassive: &PassiveHealthChecks{\n\t\t\t\tUnhealthyRequestCount: 3,\n\t\t\t},\n\t\t},\n\t}\n\n\tu, cleanup := provisionedDynamicUpstream(t, h, \"10.3.0.5:80\")\n\tdefer cleanup()\n\n\tif u.MaxRequests != 3 {\n\t\tt.Errorf(\"expected MaxRequests=3 from UnhealthyRequestCount, got %d\", u.MaxRequests)\n\t}\n\n\t// Should not be full with fewer requests than the limit.\n\t_ = u.Host.countRequest(2)\n\tif u.Full() {\n\t\tt.Error(\"upstream should not be full with 2 of 3 allowed requests\")\n\t}\n\n\t_ = u.Host.countRequest(1)\n\tif !u.Full() {\n\t\tt.Error(\"upstream should be full at UnhealthyRequestCount concurrent requests\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/retries_test.go",
    "content": "package reverseproxy\n\nimport (\n\t\"errors\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// prepareTestRequest injects the context values that ServeHTTP and\n// proxyLoopIteration require (caddy.ReplacerCtxKey, VarsCtxKey, etc.) using\n// the same helper that the real HTTP server uses.\n//\n// A zero-value Server is passed so that caddyhttp.ServerCtxKey is set to a\n// non-nil pointer; reverseProxy dereferences it to check ShouldLogCredentials.\nfunc prepareTestRequest(req *http.Request) *http.Request {\n\trepl := caddy.NewReplacer()\n\treturn caddyhttp.PrepareRequest(req, repl, nil, &caddyhttp.Server{})\n}\n\n// closeOnCloseReader is an io.ReadCloser whose Close method actually makes\n// subsequent reads fail, mimicking the behaviour of a real HTTP request body\n// (as opposed to io.NopCloser, whose Close is a no-op and would mask the bug\n// we are testing).\ntype closeOnCloseReader struct {\n\tmu     sync.Mutex\n\tr      *strings.Reader\n\tclosed bool\n}\n\nfunc newCloseOnCloseReader(s string) *closeOnCloseReader {\n\treturn &closeOnCloseReader{r: strings.NewReader(s)}\n}\n\nfunc (c *closeOnCloseReader) Read(p []byte) (int, error) {\n\tc.mu.Lock()\n\tdefer c.mu.Unlock()\n\tif c.closed {\n\t\treturn 0, errors.New(\"http: invalid Read on closed Body\")\n\t}\n\treturn c.r.Read(p)\n}\n\nfunc (c *closeOnCloseReader) Close() error {\n\tc.mu.Lock()\n\tdefer c.mu.Unlock()\n\tc.closed = true\n\treturn nil\n}\n\n// deadUpstreamAddr returns a TCP address that is guaranteed to refuse\n// connections: we bind a listener, note its address, close it immediately,\n// and return the address. 
Any dial to that address will get ECONNREFUSED.\nfunc deadUpstreamAddr(t *testing.T) string {\n\tt.Helper()\n\tln, err := net.Listen(\"tcp\", \"127.0.0.1:0\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create dead upstream listener: %v\", err)\n\t}\n\taddr := ln.Addr().String()\n\tln.Close()\n\treturn addr\n}\n\n// testTransport wraps http.Transport to:\n//  1. Set the URL scheme to \"http\" when it is empty (matching what\n//     HTTPTransport.SetScheme does in production; cloneRequest strips the\n//     scheme intentionally so a plain *http.Transport would fail with\n//     \"unsupported protocol scheme\").\n//  2. Wrap dial errors as DialError so that tryAgain correctly identifies them\n//     as safe-to-retry regardless of request method (as HTTPTransport does in\n//     production via its custom dialer).\ntype testTransport struct{ *http.Transport }\n\nfunc (t testTransport) RoundTrip(req *http.Request) (*http.Response, error) {\n\tif req.URL.Scheme == \"\" {\n\t\treq.URL.Scheme = \"http\"\n\t}\n\tresp, err := t.Transport.RoundTrip(req)\n\tif err != nil {\n\t\t// Wrap dial errors as DialError to match production behaviour.\n\t\t// Without this wrapping, tryAgain treats ECONNREFUSED on a POST\n\t\t// request as non-retryable (only GET is retried by default when\n\t\t// the error is not a DialError).\n\t\tvar opErr *net.OpError\n\t\tif errors.As(err, &opErr) && opErr.Op == \"dial\" {\n\t\t\treturn nil, DialError{err}\n\t\t}\n\t}\n\treturn resp, err\n}\n\n// minimalHandler returns a Handler with only the fields required by ServeHTTP\n// set directly, bypassing Provision (which requires a full Caddy runtime).\n// RoundRobinSelection is used so that successive iterations of the proxy loop\n// advance through the upstream pool in a predictable order.\nfunc minimalHandler(retries int, upstreams ...*Upstream) *Handler {\n\treturn &Handler{\n\t\tlogger:    zap.NewNop(),\n\t\tTransport: testTransport{&http.Transport{}},\n\t\tUpstreams: upstreams,\n\t\tLoadBalancing: 
&LoadBalancing{\n\t\t\tRetries:         retries,\n\t\t\tSelectionPolicy: &RoundRobinSelection{},\n\t\t\t// RetryMatch intentionally nil: dial errors are always retried\n\t\t\t// regardless of RetryMatch or request method.\n\t\t},\n\t\t// ctx, connections, connectionsMu, events: zero/nil values are safe\n\t\t// for the code paths exercised by these tests (TryInterval=0 so\n\t\t// ctx.Done() is never consulted; no WebSocket hijacking; no passive\n\t\t// health-check event emission).\n\t}\n}\n\n// TestDialErrorBodyRetry verifies that a POST request whose body has NOT been\n// pre-buffered via request_buffers can still be retried after a dial error.\n//\n// Before the fix, a dial error caused Go's transport to close the shared body\n// (via cloneRequest's shallow copy), so the retry attempt would read from an\n// already-closed io.ReadCloser and produce:\n//\n//\thttp: invalid Read on closed Body → HTTP 502\n//\n// After the fix the handler wraps the body in noCloseBody when retries are\n// configured, preventing the transport's Close() from propagating to the\n// shared body. 
Since dial errors never read any bytes, the body remains at\n// position 0 for the retry.\nfunc TestDialErrorBodyRetry(t *testing.T) {\n\t// Good upstream: echoes the request body with 200 OK.\n\tgoodServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tbody, err := io.ReadAll(r.Body)\n\t\tif err != nil {\n\t\t\thttp.Error(w, \"read body: \"+err.Error(), http.StatusInternalServerError)\n\t\t\treturn\n\t\t}\n\t\tw.WriteHeader(http.StatusOK)\n\t\t_, _ = w.Write(body)\n\t}))\n\tt.Cleanup(goodServer.Close)\n\n\tconst requestBody = \"hello, retry\"\n\n\ttests := []struct {\n\t\tname       string\n\t\tmethod     string\n\t\tbody       string\n\t\tretries    int\n\t\twantStatus int\n\t\twantBody   string\n\t}{\n\t\t{\n\t\t\t// Core regression case: POST with a body, no request_buffers,\n\t\t\t// dial error on first upstream → retry to second upstream succeeds.\n\t\t\tname:       \"POST body retried after dial error\",\n\t\t\tmethod:     http.MethodPost,\n\t\t\tbody:       requestBody,\n\t\t\tretries:    1,\n\t\t\twantStatus: http.StatusOK,\n\t\t\twantBody:   requestBody,\n\t\t},\n\t\t{\n\t\t\t// Dial errors are always retried regardless of method, but there\n\t\t\t// is no body to re-read, so GET has always worked. Keep it as a\n\t\t\t// sanity check that we did not break the no-body path.\n\t\t\tname:       \"GET without body retried after dial error\",\n\t\t\tmethod:     http.MethodGet,\n\t\t\tbody:       \"\",\n\t\t\tretries:    1,\n\t\t\twantStatus: http.StatusOK,\n\t\t\twantBody:   \"\",\n\t\t},\n\t\t{\n\t\t\t// Without any retry configuration the handler must give up on the\n\t\t\t// first dial error and return a 502. 
Confirms no wrapping occurs\n\t\t\t// in the no-retry path.\n\t\t\tname:       \"no retries configured returns 502 on dial error\",\n\t\t\tmethod:     http.MethodPost,\n\t\t\tbody:       requestBody,\n\t\t\tretries:    0,\n\t\t\twantStatus: http.StatusBadGateway,\n\t\t\twantBody:   \"\",\n\t\t},\n\t}\n\n\tfor _, tc := range tests {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tdead := deadUpstreamAddr(t)\n\n\t\t\t// Build the upstream pool. RoundRobinSelection starts its\n\t\t\t// counter at 0 and increments before returning, so with a\n\t\t\t// two-element pool it picks index 1 first, then index 0.\n\t\t\t// Put the good upstream at index 0 and the dead one at\n\t\t\t// index 1 so that:\n\t\t\t//   attempt 1 → pool[1] = dead → DialError (ECONNREFUSED)\n\t\t\t//   attempt 2 → pool[0] = good → 200\n\t\t\tupstreams := []*Upstream{\n\t\t\t\t{Host: new(Host), Dial: goodServer.Listener.Addr().String()},\n\t\t\t\t{Host: new(Host), Dial: dead},\n\t\t\t}\n\t\t\tif tc.retries == 0 {\n\t\t\t\t// For the \"no retries\" case use only the dead upstream so\n\t\t\t\t// there is nowhere to retry to.\n\t\t\t\tupstreams = []*Upstream{\n\t\t\t\t\t{Host: new(Host), Dial: dead},\n\t\t\t\t}\n\t\t\t}\n\n\t\t\th := minimalHandler(tc.retries, upstreams...)\n\n\t\t\t// Use closeOnCloseReader so that Close() truly prevents further\n\t\t\t// reads, matching real http.body semantics. 
io.NopCloser would\n\t\t\t// mask the bug because its Close is a no-op.\n\t\t\tvar bodyReader io.ReadCloser\n\t\t\tif tc.body != \"\" {\n\t\t\t\tbodyReader = newCloseOnCloseReader(tc.body)\n\t\t\t}\n\t\t\treq := httptest.NewRequest(tc.method, \"http://example.com/\", bodyReader)\n\t\t\tif bodyReader != nil {\n\t\t\t\t// httptest.NewRequest wraps the reader in NopCloser; replace\n\t\t\t\t// it with our close-aware reader so Close() is propagated.\n\t\t\t\treq.Body = bodyReader\n\t\t\t\treq.ContentLength = int64(len(tc.body))\n\t\t\t}\n\t\t\treq = prepareTestRequest(req)\n\n\t\t\trec := httptest.NewRecorder()\n\t\t\terr := h.ServeHTTP(rec, req, caddyhttp.HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\t\t\treturn nil\n\t\t\t}))\n\n\t\t\t// For error cases (e.g. 502) ServeHTTP returns a HandlerError\n\t\t\t// rather than writing the status itself.\n\t\t\tgotStatus := rec.Code\n\t\t\tif err != nil {\n\t\t\t\tif herr, ok := err.(caddyhttp.HandlerError); ok {\n\t\t\t\t\tgotStatus = herr.StatusCode\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif gotStatus != tc.wantStatus {\n\t\t\t\tt.Errorf(\"status: got %d, want %d (err=%v)\", gotStatus, tc.wantStatus, err)\n\t\t\t}\n\t\t\tif tc.wantBody != \"\" && rec.Body.String() != tc.wantBody {\n\t\t\t\tt.Errorf(\"body: got %q, want %q\", rec.Body.String(), tc.wantBody)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/reverseproxy.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/rand\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/http/httptrace\"\n\t\"net/netip\"\n\t\"net/textproto\"\n\t\"net/url\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/net/http/httpguts\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyevents\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp/rewrite\"\n)\n\n// inFlightRequests uses sync.Map with atomic.Int64 for lock-free updates on the hot path\nvar inFlightRequests sync.Map\n\nfunc incInFlightRequest(address string) {\n\tv, _ := inFlightRequests.LoadOrStore(address, new(atomic.Int64))\n\tv.(*atomic.Int64).Add(1)\n}\n\nfunc decInFlightRequest(address string) {\n\tif v, ok := inFlightRequests.Load(address); ok {\n\t\tif v.(*atomic.Int64).Add(-1) <= 0 {\n\t\t\tinFlightRequests.Delete(address)\n\t\t}\n\t}\n}\n\nfunc getInFlightRequests() map[string]int64 {\n\tcopyMap := 
make(map[string]int64)\n\tinFlightRequests.Range(func(key, value any) bool {\n\t\tcopyMap[key.(string)] = value.(*atomic.Int64).Load()\n\t\treturn true\n\t})\n\treturn copyMap\n}\n\nfunc init() {\n\tcaddy.RegisterModule(Handler{})\n}\n\n// Handler implements a highly configurable and production-ready reverse proxy.\n//\n// Upon proxying, this module sets the following placeholders (which can be used\n// both within and after this handler; for example, in response headers):\n//\n// Placeholder | Description\n// ------------|-------------\n// `{http.reverse_proxy.upstream.address}` | The full address to the upstream as given in the config\n// `{http.reverse_proxy.upstream.hostport}` | The host:port of the upstream\n// `{http.reverse_proxy.upstream.host}` | The host of the upstream\n// `{http.reverse_proxy.upstream.port}` | The port of the upstream\n// `{http.reverse_proxy.upstream.requests}` | The approximate current number of requests to the upstream\n// `{http.reverse_proxy.upstream.max_requests}` | The maximum approximate number of requests allowed to the upstream\n// `{http.reverse_proxy.upstream.fails}` | The number of recent failed requests to the upstream\n// `{http.reverse_proxy.upstream.latency}` | How long it took the proxy upstream to write the response header.\n// `{http.reverse_proxy.upstream.latency_ms}` | Same as 'latency', but in milliseconds.\n// `{http.reverse_proxy.upstream.duration}` | Time spent proxying to the upstream, including writing response body to client.\n// `{http.reverse_proxy.upstream.duration_ms}` | Same as 'upstream.duration', but in milliseconds.\n// `{http.reverse_proxy.duration}` | Total time spent proxying, including selecting an upstream, retries, and writing response.\n// `{http.reverse_proxy.duration_ms}` | Same as 'duration', but in milliseconds.\n// `{http.reverse_proxy.retries}` | The number of retries actually performed to communicate with an upstream.\ntype Handler struct {\n\t// Configures the method of transport for 
the proxy. A transport\n\t// is what performs the actual \"round trip\" to the backend.\n\t// The default transport is plaintext HTTP.\n\tTransportRaw json.RawMessage `json:\"transport,omitempty\" caddy:\"namespace=http.reverse_proxy.transport inline_key=protocol\"`\n\n\t// A circuit breaker may be used to relieve pressure on a backend\n\t// that is beginning to exhibit symptoms of stress or latency.\n\t// By default, there is no circuit breaker.\n\tCBRaw json.RawMessage `json:\"circuit_breaker,omitempty\" caddy:\"namespace=http.reverse_proxy.circuit_breakers inline_key=type\"`\n\n\t// Load balancing distributes load/requests between backends.\n\tLoadBalancing *LoadBalancing `json:\"load_balancing,omitempty\"`\n\n\t// Health checks update the status of backends, whether they are\n\t// up or down. Down backends will not be proxied to.\n\tHealthChecks *HealthChecks `json:\"health_checks,omitempty\"`\n\n\t// Upstreams is the static list of backends to proxy to.\n\tUpstreams UpstreamPool `json:\"upstreams,omitempty\"`\n\n\t// A module for retrieving the list of upstreams dynamically. Dynamic\n\t// upstreams are retrieved at every iteration of the proxy loop for\n\t// each request (i.e. before every proxy attempt within every request).\n\t// Active health checks do not work on dynamic upstreams, and passive\n\t// health checks are only effective on dynamic upstreams if the proxy\n\t// server is busy enough that concurrent requests to the same backends\n\t// are continuous. Instead of health checks for dynamic upstreams, it\n\t// is recommended that the dynamic upstream module only return available\n\t// backends in the first place.\n\tDynamicUpstreamsRaw json.RawMessage `json:\"dynamic_upstreams,omitempty\" caddy:\"namespace=http.reverse_proxy.upstreams inline_key=source\"`\n\n\t// Adjusts how often to flush the response buffer. By default,\n\t// no periodic flushing is done. 
A negative value disables\n\t// response buffering, and flushes immediately after each\n\t// write to the client. This option is ignored when the upstream's\n\t// response is recognized as a streaming response, or if its\n\t// content length is -1; for such responses, writes are flushed\n\t// to the client immediately.\n\tFlushInterval caddy.Duration `json:\"flush_interval,omitempty\"`\n\n\t// A list of IP ranges (supports CIDR notation) from which\n\t// X-Forwarded-* header values should be trusted. By default,\n\t// no proxies are trusted, so existing values will be ignored\n\t// when setting these headers. If the proxy is trusted, then\n\t// existing values will be used when constructing the final\n\t// header values.\n\tTrustedProxies []string `json:\"trusted_proxies,omitempty\"`\n\n\t// Headers manipulates headers between Caddy and the backend.\n\t// By default, all headers are passed-thru without changes,\n\t// with the exceptions of special hop-by-hop headers.\n\t//\n\t// X-Forwarded-For, X-Forwarded-Proto and X-Forwarded-Host\n\t// are also set implicitly.\n\tHeaders *headers.Handler `json:\"headers,omitempty\"`\n\n\t// If nonzero, the entire request body up to this size will be read\n\t// and buffered in memory before being proxied to the backend. This\n\t// should be avoided if at all possible for performance reasons, but\n\t// could be useful if the backend is intolerant of read latency or\n\t// chunked encodings.\n\tRequestBuffers int64 `json:\"request_buffers,omitempty\"`\n\n\t// If nonzero, the entire response body up to this size will be read\n\t// and buffered in memory before being proxied to the client. This\n\t// should be avoided if at all possible for performance reasons, but\n\t// could be useful if the backend has tighter memory constraints.\n\tResponseBuffers int64 `json:\"response_buffers,omitempty\"`\n\n\t// If nonzero, streaming requests such as WebSockets will be\n\t// forcibly closed at the end of the timeout. 
Default: no timeout.\n\tStreamTimeout caddy.Duration `json:\"stream_timeout,omitempty\"`\n\n\t// If nonzero, streaming requests such as WebSockets will not be\n\t// closed when the proxy config is unloaded, and instead the stream\n\t// will remain open until the delay is complete. In other words,\n\t// enabling this prevents streams from closing when Caddy's config\n\t// is reloaded. Enabling this may be a good idea to avoid a thundering\n\t// herd of reconnecting clients which had their connections closed\n\t// by the previous config closing. Default: no delay.\n\tStreamCloseDelay caddy.Duration `json:\"stream_close_delay,omitempty\"`\n\n\t// If configured, rewrites the copy of the upstream request.\n\t// Allows changing the request method and URI (path and query).\n\t// Since the rewrite is applied to the copy, it does not persist\n\t// past the reverse proxy handler.\n\t// If the method is changed to `GET` or `HEAD`, the request body\n\t// will not be copied to the backend. This allows a later request\n\t// handler -- either in a `handle_response` route, or after -- to\n\t// read the body.\n\t// By default, no rewrite is performed, and the method and URI\n\t// from the incoming request is used as-is for proxying.\n\tRewrite *rewrite.Rewrite `json:\"rewrite,omitempty\"`\n\n\t// List of handlers and their associated matchers to evaluate\n\t// after successful roundtrips. The first handler that matches\n\t// the response from a backend will be invoked. 
The response\n\t// body from the backend will not be written to the client;\n\t// it is up to the handler to finish handling the response.\n\t// If passive health checks are enabled, any errors from the\n\t// handler chain will not affect the health status of the\n\t// backend.\n\t//\n\t// Three new placeholders are available in this handler chain:\n\t// - `{http.reverse_proxy.status_code}` The status code from the response\n\t// - `{http.reverse_proxy.status_text}` The status text from the response\n\t// - `{http.reverse_proxy.header.*}` The headers from the response\n\tHandleResponse []caddyhttp.ResponseHandler `json:\"handle_response,omitempty\"`\n\n\t// If set, the proxy will write very detailed logs about its\n\t// inner workings. Enable this only when debugging, as it\n\t// will produce a lot of output.\n\t//\n\t// EXPERIMENTAL: This feature is subject to change or removal.\n\tVerboseLogs bool `json:\"verbose_logs,omitempty\"`\n\n\tTransport        http.RoundTripper `json:\"-\"`\n\tCB               CircuitBreaker    `json:\"-\"`\n\tDynamicUpstreams UpstreamSource    `json:\"-\"`\n\n\t// transportHeaderOps is a set of header operations provided\n\t// by the transport at provision time, if the transport\n\t// implements TransportHeaderOpsProvider. 
These ops are\n\t// applied before any user-configured header ops so the\n\t// user can override transport defaults.\n\ttransportHeaderOps *headers.HeaderOps\n\n\t// Holds the parsed CIDR ranges from TrustedProxies\n\ttrustedProxies []netip.Prefix\n\n\t// Holds the named response matchers from the Caddyfile while adapting\n\tresponseMatchers map[string]caddyhttp.ResponseMatcher\n\n\t// Holds the handle_response Caddyfile tokens while adapting\n\thandleResponseSegments []*caddyfile.Dispenser\n\n\t// Stores upgraded requests (hijacked connections) for proper cleanup\n\tconnections           map[io.ReadWriteCloser]openConnection\n\tconnectionsCloseTimer *time.Timer\n\tconnectionsMu         *sync.Mutex\n\n\tctx    caddy.Context\n\tlogger *zap.Logger\n\tevents *caddyevents.App\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Handler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.reverse_proxy\",\n\t\tNew: func() caddy.Module { return new(Handler) },\n\t}\n}\n\n// Provision ensures that h is set up properly before use.\nfunc (h *Handler) Provision(ctx caddy.Context) error {\n\teventAppIface, err := ctx.App(\"events\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting events app: %v\", err)\n\t}\n\th.events = eventAppIface.(*caddyevents.App)\n\th.ctx = ctx\n\th.logger = ctx.Logger()\n\th.connections = make(map[io.ReadWriteCloser]openConnection)\n\th.connectionsMu = new(sync.Mutex)\n\n\t// warn about unsafe buffering config\n\tif h.RequestBuffers == -1 || h.ResponseBuffers == -1 {\n\t\th.logger.Warn(\"UNLIMITED BUFFERING: buffering is enabled without any cap on buffer size, which can result in OOM crashes\")\n\t}\n\n\t// start by loading modules\n\tif h.TransportRaw != nil {\n\t\tmod, err := ctx.LoadModule(h, \"TransportRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading transport: %v\", err)\n\t\t}\n\t\th.Transport = mod.(http.RoundTripper)\n\n\t\t// set default buffer sizes if applicable\n\t\tif bt, 
ok := h.Transport.(BufferedTransport); ok {\n\t\t\treqBuffers, respBuffers := bt.DefaultBufferSizes()\n\t\t\tif h.RequestBuffers == 0 {\n\t\t\t\th.RequestBuffers = reqBuffers\n\t\t\t}\n\t\t\tif h.ResponseBuffers == 0 {\n\t\t\t\th.ResponseBuffers = respBuffers\n\t\t\t}\n\t\t}\n\t}\n\tif h.LoadBalancing != nil && h.LoadBalancing.SelectionPolicyRaw != nil {\n\t\tmod, err := ctx.LoadModule(h.LoadBalancing, \"SelectionPolicyRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading load balancing selection policy: %s\", err)\n\t\t}\n\t\th.LoadBalancing.SelectionPolicy = mod.(Selector)\n\t}\n\tif h.CBRaw != nil {\n\t\tmod, err := ctx.LoadModule(h, \"CBRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading circuit breaker: %s\", err)\n\t\t}\n\t\th.CB = mod.(CircuitBreaker)\n\t}\n\tif h.DynamicUpstreamsRaw != nil {\n\t\tmod, err := ctx.LoadModule(h, \"DynamicUpstreamsRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading upstream source module: %v\", err)\n\t\t}\n\t\th.DynamicUpstreams = mod.(UpstreamSource)\n\t}\n\n\t// parse trusted proxy CIDRs ahead of time\n\tfor _, str := range h.TrustedProxies {\n\t\tif strings.Contains(str, \"/\") {\n\t\t\tipNet, err := netip.ParsePrefix(str)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"parsing CIDR expression: '%s': %v\", str, err)\n\t\t\t}\n\t\t\th.trustedProxies = append(h.trustedProxies, ipNet)\n\t\t} else {\n\t\t\tipAddr, err := netip.ParseAddr(str)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"invalid IP address: '%s': %v\", str, err)\n\t\t\t}\n\t\t\tipNew := netip.PrefixFrom(ipAddr, ipAddr.BitLen())\n\t\t\th.trustedProxies = append(h.trustedProxies, ipNew)\n\t\t}\n\t}\n\n\t// ensure any embedded headers handler module gets provisioned\n\t// (see https://caddy.community/t/set-cookie-manipulation-in-reverse-proxy/7666?u=matt\n\t// for what happens if we forget to provision it)\n\tif h.Headers != nil {\n\t\terr := h.Headers.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn 
fmt.Errorf(\"provisioning embedded headers handler: %v\", err)\n\t\t}\n\t}\n\n\tif h.Rewrite != nil {\n\t\terr := h.Rewrite.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning rewrite: %v\", err)\n\t\t}\n\t}\n\n\t// set up transport\n\tif h.Transport == nil {\n\t\tt := &HTTPTransport{}\n\t\terr := t.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning default transport: %v\", err)\n\t\t}\n\t\th.Transport = t\n\t}\n\n\t// If the transport can provide header ops, cache them now so we don't\n\t// have to compute them per-request. Provision the HeaderOps if present\n\t// so any runtime artifacts (like precompiled regex) are prepared.\n\tif tph, ok := h.Transport.(RequestHeaderOpsTransport); ok {\n\t\th.transportHeaderOps = tph.RequestHeaderOps()\n\t\tif h.transportHeaderOps != nil {\n\t\t\tif err := h.transportHeaderOps.Provision(ctx); err != nil {\n\t\t\t\treturn fmt.Errorf(\"provisioning transport header ops: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\n\t// set up load balancing\n\tif h.LoadBalancing == nil {\n\t\th.LoadBalancing = new(LoadBalancing)\n\t}\n\tif h.LoadBalancing.SelectionPolicy == nil {\n\t\th.LoadBalancing.SelectionPolicy = RandomSelection{}\n\t}\n\tif h.LoadBalancing.TryDuration > 0 && h.LoadBalancing.TryInterval == 0 {\n\t\t// a non-zero try_duration with a zero try_interval\n\t\t// will always spin the CPU for try_duration if the\n\t\t// upstream is local or low-latency; avoid that by\n\t\t// defaulting to a sane wait period between attempts\n\t\th.LoadBalancing.TryInterval = caddy.Duration(250 * time.Millisecond)\n\t}\n\tlbMatcherSets, err := ctx.LoadModule(h.LoadBalancing, \"RetryMatchRaw\")\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = h.LoadBalancing.RetryMatch.FromInterface(lbMatcherSets)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// set up upstreams\n\tfor _, u := range h.Upstreams {\n\t\th.provisionUpstream(u, false)\n\t}\n\n\tif h.HealthChecks != nil {\n\t\t// set defaults on passive health checks, 
if necessary\n\t\tif h.HealthChecks.Passive != nil {\n\t\t\th.HealthChecks.Passive.logger = h.logger.Named(\"health_checker.passive\")\n\t\t\tif h.HealthChecks.Passive.MaxFails == 0 {\n\t\t\t\th.HealthChecks.Passive.MaxFails = 1\n\t\t\t}\n\t\t}\n\n\t\t// if active health checks are enabled, configure them and start a worker\n\t\tif h.HealthChecks.Active != nil {\n\t\t\terr := h.HealthChecks.Active.Provision(ctx, h)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tif h.HealthChecks.Active.IsEnabled() {\n\t\t\t\tgo h.activeHealthChecker()\n\t\t\t}\n\t\t}\n\t}\n\n\t// set up any response routes\n\tfor i, rh := range h.HandleResponse {\n\t\terr := rh.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning response handler %d: %v\", i, err)\n\t\t}\n\t}\n\n\tupstreamHealthyUpdater := newMetricsUpstreamsHealthyUpdater(h, ctx)\n\tupstreamHealthyUpdater.init()\n\n\treturn nil\n}\n\n// Cleanup cleans up the resources made by h.\nfunc (h *Handler) Cleanup() error {\n\terr := h.cleanupConnections()\n\n\t// remove hosts from our config from the pool\n\tfor _, upstream := range h.Upstreams {\n\t\t_, _ = hosts.Delete(upstream.String())\n\t}\n\n\treturn err\n}\n\nfunc (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// prepare the request for proxying; this is needed only once\n\tclonedReq, err := h.prepareRequest(r, repl)\n\tif err != nil {\n\t\treturn caddyhttp.Error(http.StatusInternalServerError,\n\t\t\tfmt.Errorf(\"preparing request for upstream round-trip: %v\", err))\n\t}\n\n\t// websocket over http2 or http3 if extended connect is enabled; assuming the backend doesn't support this, the request will be modified to an http1.1 upgrade\n\t// Both use the same upgrade mechanism: the server advertises extended connect support, and the client sends the pseudo header :protocol in a CONNECT request\n\t// The quic-go http3 implementation also puts 
:protocol in r.Proto for CONNECT requests (quic-go/http3/headers.go@70-72,185,203)\n\t// TODO: once we can reliably detect backend support for this, it can be removed for those backends\n\tif (r.ProtoMajor == 2 && r.Method == http.MethodConnect && r.Header.Get(\":protocol\") == \"websocket\") ||\n\t\t(r.ProtoMajor == 3 && r.Method == http.MethodConnect && r.Proto == \"websocket\") {\n\t\tclonedReq.Header.Del(\":protocol\")\n\t\t// keep the body for later use. http1.1 upgrade uses http.NoBody\n\t\tcaddyhttp.SetVar(clonedReq.Context(), \"extended_connect_websocket_body\", clonedReq.Body)\n\t\tclonedReq.Body = http.NoBody\n\t\tclonedReq.Method = http.MethodGet\n\t\tclonedReq.Header.Set(\"Upgrade\", \"websocket\")\n\t\tclonedReq.Header.Set(\"Connection\", \"Upgrade\")\n\t\tkey := make([]byte, 16)\n\t\t_, randErr := rand.Read(key)\n\t\tif randErr != nil {\n\t\t\treturn randErr\n\t\t}\n\t\tclonedReq.Header[\"Sec-WebSocket-Key\"] = []string{base64.StdEncoding.EncodeToString(key)}\n\t}\n\n\t// we will need the original headers and Host value if\n\t// header operations are configured; this is so that each\n\t// retry can apply the modifications, because placeholders\n\t// may be used which depend on the selected upstream for\n\t// their values\n\treqHost := clonedReq.Host\n\treqHeader := clonedReq.Header\n\n\t// When retries are configured and there is a body, wrap it in\n\t// io.NopCloser to prevent Go's transport from closing it on dial\n\t// errors. cloneRequest does a shallow copy, so clonedReq.Body and\n\t// r.Body share the same io.ReadCloser — a dial-failure Close()\n\t// would kill the original body for all subsequent retry attempts.\n\t// The real body is closed by the HTTP server when the handler\n\t// returns.\n\t//\n\t// If the body was already fully buffered (via request_buffers),\n\t// we also extract the buffer so the retry loop can replay it\n\t// from the beginning on each attempt. 
(see #6259, #7546)\n\tvar bufferedReqBody *bytes.Buffer\n\tif clonedReq.Body != nil && h.LoadBalancing != nil &&\n\t\t(h.LoadBalancing.Retries > 0 || h.LoadBalancing.TryDuration > 0) {\n\t\tif reqBodyBuf, ok := clonedReq.Body.(bodyReadCloser); ok && reqBodyBuf.body == nil && reqBodyBuf.buf != nil {\n\t\t\tbufferedReqBody = reqBodyBuf.buf\n\t\t\treqBodyBuf.buf = nil\n\t\t\tclonedReq.Body = io.NopCloser(bytes.NewReader(bufferedReqBody.Bytes()))\n\t\t\tdefer func() {\n\t\t\t\tbufferedReqBody.Reset()\n\t\t\t\tbufPool.Put(bufferedReqBody)\n\t\t\t}()\n\t\t} else {\n\t\t\tclonedReq.Body = io.NopCloser(clonedReq.Body)\n\t\t}\n\t}\n\n\tstart := time.Now()\n\tdefer func() {\n\t\t// total proxying duration, including time spent on LB and retries\n\t\trepl.Set(\"http.reverse_proxy.duration\", time.Since(start))\n\t\trepl.Set(\"http.reverse_proxy.duration_ms\", time.Since(start).Seconds()*1e3) // multiply seconds to preserve decimal (see #4666)\n\t}()\n\n\t// in the proxy loop, each iteration is an attempt to proxy the request,\n\t// and because we may retry some number of times, carry over the error\n\t// from previous tries because of the nuances of load balancing & retries\n\tvar proxyErr error\n\tvar retries int\n\tfor {\n\t\t// if the request body was buffered (and only the entire body, hence no body\n\t\t// set to read from after the buffer), make reading from the body idempotent\n\t\t// and reusable, so if a backend partially or fully reads the body but then\n\t\t// produces an error, the request can be repeated to the next backend with\n\t\t// the full body (retries should only happen for idempotent requests) (see #6259)\n\t\tif bufferedReqBody != nil {\n\t\t\tclonedReq.Body = io.NopCloser(bytes.NewReader(bufferedReqBody.Bytes()))\n\t\t}\n\n\t\tvar done bool\n\t\tdone, proxyErr = h.proxyLoopIteration(clonedReq, r, w, proxyErr, start, retries, repl, reqHeader, reqHost, next)\n\t\tif done {\n\t\t\tbreak\n\t\t}\n\t\tif h.VerboseLogs {\n\t\t\tvar lbWait 
time.Duration\n\t\t\tif h.LoadBalancing != nil {\n\t\t\t\tlbWait = time.Duration(h.LoadBalancing.TryInterval)\n\t\t\t}\n\t\t\tif c := h.logger.Check(zapcore.DebugLevel, \"retrying\"); c != nil {\n\t\t\t\tc.Write(zap.Error(proxyErr), zap.Duration(\"after\", lbWait))\n\t\t\t}\n\t\t}\n\t\tretries++\n\t}\n\n\t// number of retries actually performed\n\trepl.Set(\"http.reverse_proxy.retries\", retries)\n\n\tif proxyErr != nil {\n\t\treturn statusError(proxyErr)\n\t}\n\n\treturn nil\n}\n\n// proxyLoopIteration implements an iteration of the proxy loop. Despite the enormous amount of local state\n// that has to be passed in, we brought this into its own method so that we could run defer more easily.\n// It returns true when the loop is done and should break; false otherwise. The error value returned should\n// be assigned to the proxyErr value for the next iteration of the loop (or the error handled after break).\nfunc (h *Handler) proxyLoopIteration(r *http.Request, origReq *http.Request, w http.ResponseWriter, proxyErr error, start time.Time, retries int,\n\trepl *caddy.Replacer, reqHeader http.Header, reqHost string, next caddyhttp.Handler,\n) (bool, error) {\n\t// get the updated list of upstreams\n\tupstreams := h.Upstreams\n\tif h.DynamicUpstreams != nil {\n\t\tdUpstreams, err := h.DynamicUpstreams.GetUpstreams(r)\n\t\tif err != nil {\n\t\t\tif c := h.logger.Check(zapcore.ErrorLevel, \"failed getting dynamic upstreams; falling back to static upstreams\"); c != nil {\n\t\t\t\tc.Write(zap.Error(err))\n\t\t\t}\n\t\t} else {\n\t\t\tupstreams = dUpstreams\n\t\t\tfor _, dUp := range dUpstreams {\n\t\t\t\th.provisionUpstream(dUp, true)\n\t\t\t}\n\t\t\tif c := h.logger.Check(zapcore.DebugLevel, \"provisioned dynamic upstreams\"); c != nil {\n\t\t\t\tc.Write(zap.Int(\"count\", len(dUpstreams)))\n\t\t\t}\n\t\t}\n\t}\n\n\t// choose an available upstream\n\tupstream := h.LoadBalancing.SelectionPolicy.Select(upstreams, r, w)\n\tif upstream == nil {\n\t\tif proxyErr == nil 
{\n\t\t\tproxyErr = caddyhttp.Error(http.StatusServiceUnavailable, errNoUpstream)\n\t\t}\n\t\tif !h.LoadBalancing.tryAgain(h.ctx, start, retries, proxyErr, r, h.logger) {\n\t\t\treturn true, proxyErr\n\t\t}\n\t\treturn false, proxyErr\n\t}\n\n\t// the dial address may vary per-request if placeholders are\n\t// used, so perform those replacements here; the resulting\n\t// DialInfo struct should have valid network address syntax\n\tdialInfo, err := upstream.fillDialInfo(repl)\n\tif err != nil {\n\t\treturn true, fmt.Errorf(\"making dial info: %v\", err)\n\t}\n\n\tif c := h.logger.Check(zapcore.DebugLevel, \"selected upstream\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"dial\", dialInfo.Address),\n\t\t\tzap.Int(\"total_upstreams\", len(upstreams)),\n\t\t)\n\t}\n\n\t// attach to the request information about how to dial the upstream;\n\t// this is necessary because the information cannot be sufficiently\n\t// or satisfactorily represented in a URL\n\tcaddyhttp.SetVar(r.Context(), dialInfoVarKey, dialInfo)\n\n\t// set placeholders with information about this upstream\n\trepl.Set(\"http.reverse_proxy.upstream.address\", dialInfo.String())\n\trepl.Set(\"http.reverse_proxy.upstream.hostport\", dialInfo.Address)\n\trepl.Set(\"http.reverse_proxy.upstream.host\", dialInfo.Host)\n\trepl.Set(\"http.reverse_proxy.upstream.port\", dialInfo.Port)\n\trepl.Set(\"http.reverse_proxy.upstream.requests\", upstream.Host.NumRequests())\n\trepl.Set(\"http.reverse_proxy.upstream.max_requests\", upstream.MaxRequests)\n\trepl.Set(\"http.reverse_proxy.upstream.fails\", upstream.Host.Fails())\n\n\t// mutate request headers according to this upstream;\n\t// because we're in a retry loop, we have to copy headers\n\t// (and the r.Host value) from the original so that each\n\t// retry is identical to the first. 
If either transport or\n\t// user ops exist, apply them in order (transport first,\n\t// then user, so user's config wins).\n\tvar userOps *headers.HeaderOps\n\tif h.Headers != nil {\n\t\tuserOps = h.Headers.Request\n\t}\n\ttransportOps := h.transportHeaderOps\n\tif transportOps != nil || userOps != nil {\n\t\tr.Header = make(http.Header)\n\t\tcopyHeader(r.Header, reqHeader)\n\t\tr.Host = reqHost\n\t\tif transportOps != nil {\n\t\t\ttransportOps.ApplyToRequest(r)\n\t\t}\n\t\tif userOps != nil {\n\t\t\tuserOps.ApplyToRequest(r)\n\t\t}\n\t}\n\n\t// proxy the request to that upstream\n\tproxyErr = h.reverseProxy(w, r, origReq, repl, dialInfo, next)\n\tif proxyErr == nil || errors.Is(proxyErr, context.Canceled) {\n\t\t// context.Canceled happens when the downstream client\n\t\t// cancels the request, which is not our failure\n\t\treturn true, nil\n\t}\n\n\t// if the roundtrip was successful, don't retry the request or\n\t// ding the health status of the upstream (an error can still\n\t// occur after the roundtrip if, for example, a response handler\n\t// after the roundtrip returns an error)\n\tif succ, ok := proxyErr.(roundtripSucceededError); ok {\n\t\treturn true, succ.error\n\t}\n\n\t// remember this failure (if enabled)\n\th.countFailure(upstream)\n\n\t// if we've tried long enough, break\n\tif !h.LoadBalancing.tryAgain(h.ctx, start, retries, proxyErr, r, h.logger) {\n\t\treturn true, proxyErr\n\t}\n\n\treturn false, proxyErr\n}\n\n// Mapping of the canonical form of the headers, to the RFC 6455 form,\n// i.e. 
`WebSocket` with uppercase 'S'.\nvar websocketHeaderMapping = map[string]string{\n\t\"Sec-Websocket-Accept\":     \"Sec-WebSocket-Accept\",\n\t\"Sec-Websocket-Extensions\": \"Sec-WebSocket-Extensions\",\n\t\"Sec-Websocket-Key\":        \"Sec-WebSocket-Key\",\n\t\"Sec-Websocket-Protocol\":   \"Sec-WebSocket-Protocol\",\n\t\"Sec-Websocket-Version\":    \"Sec-WebSocket-Version\",\n}\n\n// normalizeWebsocketHeaders ensures we use the standard casing as per\n// RFC 6455, i.e. `WebSocket` with uppercase 'S'. Most servers don't\n// care about this difference (read headers case insensitively), but\n// some do, so this maximizes compatibility with upstreams.\n// See https://github.com/caddyserver/caddy/pull/6621\nfunc normalizeWebsocketHeaders(header http.Header) {\n\tfor k, rk := range websocketHeaderMapping {\n\t\tif v, ok := header[k]; ok {\n\t\t\tdelete(header, k)\n\t\t\theader[rk] = v\n\t\t}\n\t}\n}\n\n// prepareRequest clones req so that it can be safely modified without\n// changing the original request or introducing data races. It then\n// modifies it so that it is ready to be proxied, except for directing\n// to a specific upstream. This method adjusts headers and other relevant\n// properties of the cloned request and should be done just once (before\n// proxying) regardless of proxy retries. 
This assumes that no mutations\n// of the cloned request are performed by h during or after proxying.\nfunc (h Handler) prepareRequest(req *http.Request, repl *caddy.Replacer) (*http.Request, error) {\n\treq = cloneRequest(req)\n\n\t// if enabled, perform rewrites on the cloned request; if\n\t// the method is GET or HEAD, prevent the request body\n\t// from being copied to the upstream\n\tif h.Rewrite != nil {\n\t\tchanged := h.Rewrite.Rewrite(req, repl)\n\t\tif changed && (h.Rewrite.Method == \"GET\" || h.Rewrite.Method == \"HEAD\") {\n\t\t\treq.ContentLength = 0\n\t\t\treq.Body = nil\n\t\t}\n\t}\n\n\t// if enabled, buffer client request; this should only be\n\t// enabled if the upstream requires it and does not work\n\t// with \"slow clients\" (gunicorn, etc.) - this obviously\n\t// has a perf overhead and makes the proxy at risk of\n\t// exhausting memory and more susceptible to slowloris\n\t// attacks, so it is strongly recommended to only use this\n\t// feature if absolutely required, if read timeouts are\n\t// set, and if body size is limited\n\tif h.RequestBuffers != 0 && req.Body != nil {\n\t\tvar readBytes int64\n\t\treq.Body, readBytes = h.bufferedBody(req.Body, h.RequestBuffers)\n\t\t// set Content-Length when body is fully buffered\n\t\tif b, ok := req.Body.(bodyReadCloser); ok && b.body == nil {\n\t\t\treq.ContentLength = readBytes\n\t\t\treq.Header.Set(\"Content-Length\", strconv.FormatInt(req.ContentLength, 10))\n\t\t}\n\t}\n\n\tif req.ContentLength == 0 {\n\t\treq.Body = nil // Issue golang/go#16036: nil Body for http.Transport retries\n\t}\n\n\treq.Close = false\n\n\t// if User-Agent is not set by client, then explicitly\n\t// disable it so it's not set to default value by std lib\n\tif _, ok := req.Header[\"User-Agent\"]; !ok {\n\t\treq.Header.Set(\"User-Agent\", \"\")\n\t}\n\n\t// Indicate if request has been conveyed in early data.\n\t// RFC 8470: \"An intermediary that forwards a request prior to the\n\t// completion of the TLS handshake with 
its client MUST send it with\n\t// the Early-Data header field set to “1” (i.e., it adds it if not\n\t// present in the request). An intermediary MUST use the Early-Data\n\t// header field if the request might have been subject to a replay and\n\t// might already have been forwarded by it or another instance\n\t// (see Section 6.2).\"\n\tif req.TLS != nil && !req.TLS.HandshakeComplete {\n\t\treq.Header.Set(\"Early-Data\", \"1\")\n\t}\n\n\treqUpgradeType := upgradeType(req.Header)\n\tremoveConnectionHeaders(req.Header)\n\n\t// Remove hop-by-hop headers to the backend. Especially\n\t// important is \"Connection\" because we want a persistent\n\t// connection, regardless of what the client sent to us.\n\t// Issue golang/go#46313: don't skip if field is empty.\n\tfor _, h := range hopHeaders {\n\t\t// Issue golang/go#21096: tell backend applications that care about trailer support\n\t\t// that we support trailers. (We do, but we don't go out of our way to\n\t\t// advertise that unless the incoming client request thought it was worth\n\t\t// mentioning.)\n\t\tif h == \"Te\" && httpguts.HeaderValuesContainsToken(req.Header[\"Te\"], \"trailers\") {\n\t\t\treq.Header.Set(\"Te\", \"trailers\")\n\t\t\tcontinue\n\t\t}\n\t\treq.Header.Del(h)\n\t}\n\n\t// After stripping all the hop-by-hop connection headers above, add back any\n\t// necessary for protocol upgrades, such as for websockets.\n\tif reqUpgradeType != \"\" {\n\t\treq.Header.Set(\"Connection\", \"Upgrade\")\n\t\treq.Header.Set(\"Upgrade\", reqUpgradeType)\n\t\tnormalizeWebsocketHeaders(req.Header)\n\t}\n\n\t// Set up the PROXY protocol info\n\taddress := caddyhttp.GetVar(req.Context(), caddyhttp.ClientIPVarKey).(string)\n\taddrPort, err := netip.ParseAddrPort(address)\n\tif err != nil {\n\t\t// OK; probably didn't have a port\n\t\taddr, err := netip.ParseAddr(address)\n\t\tif err != nil {\n\t\t\t// Doesn't seem like a valid ip address at all\n\t\t} else {\n\t\t\t// Ok, only the port was missing\n\t\t\taddrPort = 
netip.AddrPortFrom(addr, 0)\n\t\t}\n\t}\n\tproxyProtocolInfo := ProxyProtocolInfo{AddrPort: addrPort}\n\tcaddyhttp.SetVar(req.Context(), proxyProtocolInfoVarKey, proxyProtocolInfo)\n\n\t// some of the outbound requests require h1 (e.g. websocket)\n\t// https://github.com/golang/go/blob/4837fbe4145cd47b43eed66fee9eed9c2b988316/src/net/http/request.go#L1579\n\tif isWebsocket(req) {\n\t\tcaddyhttp.SetVar(req.Context(), tlsH1OnlyVarKey, true)\n\t}\n\n\t// Add the supported X-Forwarded-* headers\n\terr = h.addForwardedHeaders(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Via header(s)\n\treq.Header.Add(\"Via\", fmt.Sprintf(\"%d.%d Caddy\", req.ProtoMajor, req.ProtoMinor))\n\n\treturn req, nil\n}\n\n// addForwardedHeaders adds the de-facto standard X-Forwarded-*\n// headers to the request before it is sent upstream.\n//\n// These headers are security sensitive, so care is taken to only\n// use existing values for these headers from the incoming request\n// if the client IP is trusted (i.e. coming from a trusted proxy\n// sitting in front of this server). If the request didn't have\n// the headers at all, then they will be added with the values\n// that we can glean from the request.\nfunc (h Handler) addForwardedHeaders(req *http.Request) error {\n\t// Check if the client is a trusted proxy\n\ttrusted := caddyhttp.GetVar(req.Context(), caddyhttp.TrustedProxyVarKey).(bool)\n\n\tvar clientIP string\n\n\tif req.RemoteAddr == \"@\" {\n\t\t// For Unix socket connections, RemoteAddr is \"@\" which cannot\n\t\t// be parsed as host:port. If untrusted, strip forwarded headers\n\t\t// for security. 
If trusted, there is no peer IP to append to\n\t\t// X-Forwarded-For, so clientIP stays empty.\n\t\tif !trusted {\n\t\t\treq.Header.Del(\"X-Forwarded-For\")\n\t\t\treq.Header.Del(\"X-Forwarded-Proto\")\n\t\t\treq.Header.Del(\"X-Forwarded-Host\")\n\t\t\treturn nil\n\t\t}\n\t} else {\n\t\t// Parse the remote IP, ignore the error as non-fatal,\n\t\t// but the remote IP is required to continue, so we\n\t\t// just return early. This should probably never happen\n\t\t// though, unless some other module manipulated the request's\n\t\t// remote address and used an invalid value.\n\t\tvar err error\n\t\tclientIP, _, err = net.SplitHostPort(req.RemoteAddr)\n\t\tif err != nil {\n\t\t\t// Remove the `X-Forwarded-*` headers to avoid upstreams\n\t\t\t// potentially trusting a header that came from the client\n\t\t\treq.Header.Del(\"X-Forwarded-For\")\n\t\t\treq.Header.Del(\"X-Forwarded-Proto\")\n\t\t\treq.Header.Del(\"X-Forwarded-Host\")\n\t\t\treturn nil\n\t\t}\n\n\t\t// Client IP may contain a zone if IPv6, so we need\n\t\t// to pull that out before parsing the IP\n\t\tclientIP, _, _ = strings.Cut(clientIP, \"%\")\n\t\tipAddr, err := netip.ParseAddr(clientIP)\n\n\t\t// If ParseAddr fails (e.g. non-IP network like SCION), we cannot check\n\t\t// if it is a trusted proxy by IP range. 
In this case, we ignore the\n\t\t// error and treat the connection as untrusted (or retain existing status).\n\t\tif err == nil {\n\t\t\tfor _, ipRange := range h.trustedProxies {\n\t\t\t\tif ipRange.Contains(ipAddr) {\n\t\t\t\t\ttrusted = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// If we aren't the first proxy, and the proxy is trusted,\n\t// retain prior X-Forwarded-For information as a comma+space\n\t// separated list and fold multiple headers into one.\n\tprior, ok, omit := allHeaderValues(req.Header, \"X-Forwarded-For\")\n\tif !omit {\n\t\tif trusted && ok && prior != \"\" {\n\t\t\tif clientIP != \"\" {\n\t\t\t\treq.Header.Set(\"X-Forwarded-For\", prior+\", \"+clientIP)\n\t\t\t} else {\n\t\t\t\treq.Header.Set(\"X-Forwarded-For\", prior)\n\t\t\t}\n\t\t} else if clientIP != \"\" {\n\t\t\treq.Header.Set(\"X-Forwarded-For\", clientIP)\n\t\t}\n\t}\n\n\t// Set X-Forwarded-Proto; many backend apps expect this,\n\t// so that they can properly craft URLs with the right\n\t// scheme to match the original request\n\tproto := \"https\"\n\tif req.TLS == nil {\n\t\tproto = \"http\"\n\t}\n\tprior, ok, omit = lastHeaderValue(req.Header, \"X-Forwarded-Proto\")\n\tif trusted && ok && prior != \"\" {\n\t\tproto = prior\n\t}\n\tif !omit {\n\t\treq.Header.Set(\"X-Forwarded-Proto\", proto)\n\t}\n\n\t// Set X-Forwarded-Host; often this is redundant because\n\t// we pass through the request Host as-is, but in situations\n\t// where we proxy over HTTPS, the user may need to override\n\t// Host themselves, so it's helpful to send the original too.\n\thost := req.Host\n\tprior, ok, omit = lastHeaderValue(req.Header, \"X-Forwarded-Host\")\n\tif trusted && ok && prior != \"\" {\n\t\thost = prior\n\t}\n\tif !omit {\n\t\treq.Header.Set(\"X-Forwarded-Host\", host)\n\t}\n\n\treturn nil\n}\n\n// reverseProxy performs a round-trip to the given backend and processes the response with the client.\n// (This method is mostly the beginning of what was borrowed from the 
net/http/httputil package in the\n// Go standard library which was used as the foundation.)\nfunc (h *Handler) reverseProxy(rw http.ResponseWriter, req *http.Request, origReq *http.Request, repl *caddy.Replacer, di DialInfo, next caddyhttp.Handler) error {\n\t_ = di.Upstream.Host.countRequest(1)\n\n\t// Increment the in-flight request count\n\tincInFlightRequest(di.Address)\n\n\t//nolint:errcheck\n\tdefer func() {\n\t\tdi.Upstream.Host.countRequest(-1)\n\t\t// Decrement the in-flight request count\n\t\tdecInFlightRequest(di.Address)\n\t}()\n\n\t// point the request to this upstream\n\th.directRequest(req, di)\n\n\tserver := req.Context().Value(caddyhttp.ServerCtxKey).(*caddyhttp.Server)\n\tshouldLogCredentials := server.Logs != nil && server.Logs.ShouldLogCredentials\n\n\t// Forward 1xx status codes, backported from https://github.com/golang/go/pull/53164\n\tvar (\n\t\troundTripMutex sync.Mutex\n\t\troundTripDone  bool\n\t)\n\ttrace := &httptrace.ClientTrace{\n\t\tGot1xxResponse: func(code int, header textproto.MIMEHeader) error {\n\t\t\troundTripMutex.Lock()\n\t\t\tdefer roundTripMutex.Unlock()\n\t\t\tif roundTripDone {\n\t\t\t\t// If RoundTrip has returned, don't try to further modify\n\t\t\t\t// the ResponseWriter's header map.\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\th := rw.Header()\n\t\t\tcopyHeader(h, http.Header(header))\n\t\t\trw.WriteHeader(code)\n\n\t\t\t// Clear headers coming from the backend\n\t\t\t// (it's not automatically done by ResponseWriter.WriteHeader() for 1xx responses)\n\t\t\tclear(h)\n\n\t\t\treturn nil\n\t\t},\n\t}\n\treq = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))\n\n\t// do the round-trip\n\tstart := time.Now()\n\tres, err := h.Transport.RoundTrip(req)\n\tduration := time.Since(start)\n\n\t// record that the round trip is done for the 1xx response handler\n\troundTripMutex.Lock()\n\troundTripDone = true\n\troundTripMutex.Unlock()\n\n\t// emit debug log with values we know are safe,\n\t// or if there is no error, emit 
fuller log entry\n\tlogger := h.logger.With(\n\t\tzap.String(\"upstream\", di.Upstream.String()),\n\t\tzap.Duration(\"duration\", duration),\n\t\tzap.Object(\"request\", caddyhttp.LoggableHTTPRequest{\n\t\t\tRequest:              req,\n\t\t\tShouldLogCredentials: shouldLogCredentials,\n\t\t}),\n\t)\n\n\tconst logMessage = \"upstream roundtrip\"\n\n\tif err != nil {\n\t\tif c := logger.Check(zapcore.DebugLevel, logMessage); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t\treturn err\n\t}\n\tif c := logger.Check(zapcore.DebugLevel, logMessage); c != nil {\n\t\tc.Write(\n\t\t\tzap.Object(\"headers\", caddyhttp.LoggableHTTPHeader{\n\t\t\t\tHeader:               res.Header,\n\t\t\t\tShouldLogCredentials: shouldLogCredentials,\n\t\t\t}),\n\t\t\tzap.Int(\"status\", res.StatusCode),\n\t\t)\n\t}\n\n\t// duration until upstream wrote response headers (roundtrip duration)\n\trepl.Set(\"http.reverse_proxy.upstream.latency\", duration)\n\trepl.Set(\"http.reverse_proxy.upstream.latency_ms\", duration.Seconds()*1e3) // multiply seconds to preserve decimal (see #4666)\n\n\t// update circuit breaker on current conditions\n\tif di.Upstream.cb != nil {\n\t\tdi.Upstream.cb.RecordMetric(res.StatusCode, duration)\n\t}\n\n\t// perform passive health checks (if enabled)\n\tif h.HealthChecks != nil && h.HealthChecks.Passive != nil {\n\t\t// strike if the status code matches one that is \"bad\"\n\t\tfor _, badStatus := range h.HealthChecks.Passive.UnhealthyStatus {\n\t\t\tif caddyhttp.StatusCodeMatches(res.StatusCode, badStatus) {\n\t\t\t\th.countFailure(di.Upstream)\n\t\t\t}\n\t\t}\n\n\t\t// strike if the roundtrip took too long\n\t\tif h.HealthChecks.Passive.UnhealthyLatency > 0 &&\n\t\t\tduration >= time.Duration(h.HealthChecks.Passive.UnhealthyLatency) {\n\t\t\th.countFailure(di.Upstream)\n\t\t}\n\t}\n\n\t// if enabled, buffer the response body\n\tif h.ResponseBuffers != 0 {\n\t\tres.Body, _ = h.bufferedBody(res.Body, h.ResponseBuffers)\n\t}\n\n\t// see if any response handler is 
configured for this response from the backend\n\tfor i, rh := range h.HandleResponse {\n\t\tif rh.Match != nil && !rh.Match.Match(res.StatusCode, res.Header) {\n\t\t\tcontinue\n\t\t}\n\n\t\t// if configured to only change the status code,\n\t\t// do that then continue regular proxy response\n\t\tif statusCodeStr := rh.StatusCode.String(); statusCodeStr != \"\" {\n\t\t\tstatusCode, err := strconv.Atoi(repl.ReplaceAll(statusCodeStr, \"\"))\n\t\t\tif err != nil {\n\t\t\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t\t\t}\n\t\t\tif statusCode != 0 {\n\t\t\t\tres.StatusCode = statusCode\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\n\t\t// set up the replacer so that parts of the original response can be\n\t\t// used for routing decisions\n\t\tfor field, value := range res.Header {\n\t\t\trepl.Set(\"http.reverse_proxy.header.\"+field, strings.Join(value, \",\"))\n\t\t}\n\t\trepl.Set(\"http.reverse_proxy.status_code\", res.StatusCode)\n\t\trepl.Set(\"http.reverse_proxy.status_text\", res.Status)\n\n\t\tif c := logger.Check(zapcore.DebugLevel, \"handling response\"); c != nil {\n\t\t\tc.Write(zap.Int(\"handler\", i))\n\t\t}\n\n\t\t// we make some data available via request context to child routes\n\t\t// so that they may inherit some options and functions from the\n\t\t// handler, and be able to copy the response.\n\t\t// we use the original request here, so that any routes from 'next'\n\t\t// see the original request rather than the proxy cloned request.\n\t\thrc := &handleResponseContext{\n\t\t\thandler:  h,\n\t\t\tresponse: res,\n\t\t\tstart:    start,\n\t\t\tlogger:   logger,\n\t\t}\n\t\tctx := origReq.Context()\n\t\tctx = context.WithValue(ctx, proxyHandleResponseContextCtxKey, hrc)\n\n\t\t// pass the request through the response handler routes\n\t\trouteErr := rh.Routes.Compile(next).ServeHTTP(rw, origReq.WithContext(ctx))\n\n\t\t// close the response body afterwards, since we don't need it anymore;\n\t\t// either a route had 'copy_response' which already 
consumed the body,\n\t\t// or some other terminal handler ran which doesn't need the response\n\t\t// body after that point (e.g. 'file_server' for X-Accel-Redirect flow),\n\t\t// or we fell through to subsequent handlers past this proxy\n\t\t// (e.g. forward auth's 2xx response flow).\n\t\tif !hrc.isFinalized {\n\t\t\tres.Body.Close()\n\t\t}\n\n\t\t// wrap any route error in roundtripSucceededError so caller knows that\n\t\t// the roundtrip was successful and to not retry\n\t\tif routeErr != nil {\n\t\t\treturn roundtripSucceededError{routeErr}\n\t\t}\n\n\t\t// we're done handling the response, and we don't want to\n\t\t// fall through to the default finalize/copy behaviour\n\t\treturn nil\n\t}\n\n\t// copy the response body and headers back to the upstream client\n\treturn h.finalizeResponse(rw, req, res, repl, start, logger)\n}\n\n// finalizeResponse prepares and copies the response.\nfunc (h *Handler) finalizeResponse(\n\trw http.ResponseWriter,\n\treq *http.Request,\n\tres *http.Response,\n\trepl *caddy.Replacer,\n\tstart time.Time,\n\tlogger *zap.Logger,\n) error {\n\t// deal with 101 Switching Protocols responses: (WebSocket, h2c, etc)\n\tif res.StatusCode == http.StatusSwitchingProtocols {\n\t\tvar wg sync.WaitGroup\n\t\th.handleUpgradeResponse(logger, &wg, rw, req, res)\n\t\twg.Wait()\n\t\treturn nil\n\t}\n\n\tremoveConnectionHeaders(res.Header)\n\n\tfor _, h := range hopHeaders {\n\t\tres.Header.Del(h)\n\t}\n\n\t// delete our Server header and use Via instead (see #6275)\n\trw.Header().Del(\"Server\")\n\tvar protoPrefix string\n\tif !strings.HasPrefix(strings.ToUpper(res.Proto), \"HTTP/\") {\n\t\tprotoPrefix = res.Proto[:strings.Index(res.Proto, \"/\")+1]\n\t}\n\trw.Header().Add(\"Via\", fmt.Sprintf(\"%s%d.%d Caddy\", protoPrefix, res.ProtoMajor, res.ProtoMinor))\n\n\t// apply any response header operations\n\tif h.Headers != nil && h.Headers.Response != nil {\n\t\tif h.Headers.Response.Require == nil 
||\n\t\t\th.Headers.Response.Require.Match(res.StatusCode, res.Header) {\n\t\t\th.Headers.Response.ApplyTo(res.Header, repl)\n\t\t}\n\t}\n\n\tcopyHeader(rw.Header(), res.Header)\n\n\t// The \"Trailer\" header isn't included in the Transport's response,\n\t// at least for *http.Transport. Build it up from Trailer.\n\tannouncedTrailers := len(res.Trailer)\n\tif announcedTrailers > 0 {\n\t\ttrailerKeys := make([]string, 0, len(res.Trailer))\n\t\tfor k := range res.Trailer {\n\t\t\ttrailerKeys = append(trailerKeys, k)\n\t\t}\n\t\trw.Header().Add(\"Trailer\", strings.Join(trailerKeys, \", \"))\n\t}\n\n\trw.WriteHeader(res.StatusCode)\n\tif h.VerboseLogs {\n\t\tlogger.Debug(\"wrote header\")\n\t}\n\n\terr := h.copyResponse(rw, res.Body, h.flushInterval(req, res), logger)\n\terrClose := res.Body.Close() // close now, instead of defer, to populate res.Trailer\n\tif h.VerboseLogs || errClose != nil {\n\t\tif c := logger.Check(zapcore.DebugLevel, \"closed response body from upstream\"); c != nil {\n\t\t\tc.Write(zap.Error(errClose))\n\t\t}\n\t}\n\tif err != nil {\n\t\t// we're streaming the response and we've already written headers, so\n\t\t// there's nothing an error handler can do to recover at this point;\n\t\t// we'll just log the error and abort the stream here and panic just as\n\t\t// the standard lib's proxy to propagate the stream error.\n\t\t// see issue https://github.com/caddyserver/caddy/issues/5951\n\t\tif c := logger.Check(zapcore.WarnLevel, \"aborting with incomplete response\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t\t// no extra logging from stdlib\n\t\tpanic(http.ErrAbortHandler)\n\t}\n\n\tif len(res.Trailer) > 0 {\n\t\t// Force chunking if we saw a response trailer.\n\t\t// This prevents net/http from calculating the length for short\n\t\t// bodies and adding a Content-Length.\n\t\t//nolint:bodyclose\n\t\thttp.NewResponseController(rw).Flush()\n\t}\n\n\t// total duration spent proxying, including writing response 
body\n\trepl.Set(\"http.reverse_proxy.upstream.duration\", time.Since(start))\n\trepl.Set(\"http.reverse_proxy.upstream.duration_ms\", time.Since(start).Seconds()*1e3)\n\n\tif len(res.Trailer) == announcedTrailers {\n\t\tcopyHeader(rw.Header(), res.Trailer)\n\t\treturn nil\n\t}\n\n\tfor k, vv := range res.Trailer {\n\t\tk = http.TrailerPrefix + k\n\t\tfor _, v := range vv {\n\t\t\trw.Header().Add(k, v)\n\t\t}\n\t}\n\n\tif h.VerboseLogs {\n\t\tlogger.Debug(\"response finalized\")\n\t}\n\n\treturn nil\n}\n\n// tryAgain takes the time that the handler was initially invoked,\n// the amount of retries already performed, as well as any error\n// currently obtained, and the request being tried, and returns\n// true if another attempt should be made at proxying the request.\n// If true is returned, it has already blocked long enough before\n// the next retry (i.e. no more sleeping is needed). If false is\n// returned, the handler should stop trying to proxy the request.\nfunc (lb LoadBalancing) tryAgain(ctx caddy.Context, start time.Time, retries int, proxyErr error, req *http.Request, logger *zap.Logger) bool {\n\t// no retries are configured\n\tif lb.TryDuration == 0 && lb.Retries == 0 {\n\t\treturn false\n\t}\n\n\t// if we've tried long enough, break\n\tif lb.TryDuration > 0 && time.Since(start) >= time.Duration(lb.TryDuration) {\n\t\treturn false\n\t}\n\n\t// if we've reached the retry limit, break\n\tif lb.Retries > 0 && retries >= lb.Retries {\n\t\treturn false\n\t}\n\n\t// if the error occurred while dialing (i.e. 
a connection\n\t// could not even be established to the upstream), then it\n\t// should be safe to retry, since without a connection, no\n\t// HTTP request can be transmitted; but if the error is not\n\t// specifically a dialer error, we need to be careful\n\tif proxyErr != nil {\n\t\t_, isDialError := proxyErr.(DialError)\n\t\therr, isHandlerError := proxyErr.(caddyhttp.HandlerError)\n\n\t\t// if the error occurred after a connection was established,\n\t\t// we have to assume the upstream received the request, and\n\t\t// retries need to be carefully decided, because some requests\n\t\t// are not idempotent\n\t\tif !isDialError && (!isHandlerError || !errors.Is(herr, errNoUpstream)) {\n\t\t\tif lb.RetryMatch == nil && req.Method != \"GET\" {\n\t\t\t\t// by default, don't retry requests if they aren't GET\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\tmatch, err := lb.RetryMatch.AnyMatchWithError(req)\n\t\t\tif err != nil {\n\t\t\t\tlogger.Error(\"error matching request for retry\", zap.Error(err))\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !match {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\n\t// fast path; if the interval is zero, we don't need to wait\n\tif lb.TryInterval == 0 {\n\t\treturn true\n\t}\n\n\t// otherwise, wait and try the next available host\n\ttimer := time.NewTimer(time.Duration(lb.TryInterval))\n\tselect {\n\tcase <-timer.C:\n\t\treturn true\n\tcase <-ctx.Done():\n\t\tif !timer.Stop() {\n\t\t\t// if the timer has been stopped then read from the channel\n\t\t\t<-timer.C\n\t\t}\n\t\treturn false\n\t}\n}\n\n// directRequest modifies only req.URL so that it points to the upstream\n// in the given DialInfo. 
It must modify ONLY the request URL.\nfunc (h *Handler) directRequest(req *http.Request, di DialInfo) {\n\t// we need a host, so set the upstream's host address\n\treqHost := di.Address\n\n\t// if the port equates to the scheme, strip the port because\n\t// it's weird to make a request like http://example.com:80/.\n\tif (req.URL.Scheme == \"http\" && di.Port == \"80\") ||\n\t\t(req.URL.Scheme == \"https\" && di.Port == \"443\") {\n\t\treqHost = di.Host\n\t}\n\n\t// add client address to the host to let transport differentiate requests from different clients\n\tif ppt, ok := h.Transport.(ProxyProtocolTransport); ok && ppt.ProxyProtocolEnabled() {\n\t\tif proxyProtocolInfo, ok := caddyhttp.GetVar(req.Context(), proxyProtocolInfoVarKey).(ProxyProtocolInfo); ok {\n\t\t\t// encode the request so it plays well with h2 transport; it's unnecessary for h1, but harmless there.\n\t\t\t// The issue is that h2 transport will use the address to determine if new connections are needed\n\t\t\t// to roundtrip requests, but without escaping, new connections are constantly created and closed until\n\t\t\t// file descriptors are exhausted.\n\t\t\t// see: https://github.com/caddyserver/caddy/issues/7529\n\t\t\treqHost = url.QueryEscape(proxyProtocolInfo.AddrPort.String() + \"->\" + reqHost)\n\t\t}\n\t}\n\n\treq.URL.Host = reqHost\n}\n\nfunc (h Handler) provisionUpstream(upstream *Upstream, dynamic bool) {\n\t// create or get the host representation for this upstream;\n\t// dynamic upstreams are tracked in a separate map with last-seen\n\t// timestamps so their health state persists across requests without\n\t// being reference-counted (and thus discarded between requests).\n\tif dynamic {\n\t\tupstream.fillDynamicHost()\n\t} else {\n\t\tupstream.fillHost()\n\t}\n\n\t// give it the circuit breaker, if any\n\tupstream.cb = h.CB\n\n\t// if the passive health checker has a non-zero UnhealthyRequestCount\n\t// but the upstream has no MaxRequests set (they are the same thing,\n\t// but the passive 
health checker is a default value for upstreams\n\t// without MaxRequests), copy the value into this upstream, since the\n\t// value in the upstream (MaxRequests) is what is used during\n\t// availability checks\n\tif h.HealthChecks != nil &&\n\t\th.HealthChecks.Passive != nil &&\n\t\th.HealthChecks.Passive.UnhealthyRequestCount > 0 &&\n\t\tupstream.MaxRequests == 0 {\n\t\tupstream.MaxRequests = h.HealthChecks.Passive.UnhealthyRequestCount\n\t}\n\n\t// upstreams need independent access to the passive\n\t// health check policy because passive health checks\n\t// run without access to h.\n\tif h.HealthChecks != nil {\n\t\tupstream.healthCheckPolicy = h.HealthChecks.Passive\n\t}\n}\n\n// bufferedBody reads originalBody into a buffer with maximum size of limit (-1 for unlimited),\n// then returns a reader for the buffer along with how many bytes were buffered. Always close\n// the return value when done with it, just like if it was the original body! If limit is 0\n// (which it shouldn't be), this function returns its input; i.e. 
is a no-op, for safety.\n// Otherwise, it returns bodyReadCloser, the original body will be closed and body will be nil\n// if it's explicitly configured to buffer all or EOF is reached when reading.\n// TODO: the error during reading is discarded if the limit is negative, should the error be propagated\n// to upstream/downstream?\nfunc (h Handler) bufferedBody(originalBody io.ReadCloser, limit int64) (io.ReadCloser, int64) {\n\tif limit == 0 {\n\t\treturn originalBody, 0\n\t}\n\tvar written int64\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tif limit > 0 {\n\t\tvar err error\n\t\twritten, err = io.CopyN(buf, originalBody, limit)\n\t\tif (err != nil && err != io.EOF) || written == limit {\n\t\t\treturn bodyReadCloser{\n\t\t\t\tReader: io.MultiReader(buf, originalBody),\n\t\t\t\tbuf:    buf,\n\t\t\t\tbody:   originalBody,\n\t\t\t}, written\n\t\t}\n\t} else {\n\t\twritten, _ = io.Copy(buf, originalBody)\n\t}\n\toriginalBody.Close() // no point in keeping it open\n\treturn bodyReadCloser{\n\t\tReader: buf,\n\t\tbuf:    buf,\n\t}, written\n}\n\n// cloneRequest makes a semi-deep clone of origReq.\n//\n// Most of this code is borrowed from the Go stdlib reverse proxy,\n// but we make a shallow-ish clone the request (deep clone only\n// the headers and URL) so we can avoid manipulating the original\n// request when using it to proxy upstream. 
This prevents request\n// corruption and data races.\nfunc cloneRequest(origReq *http.Request) *http.Request {\n\treq := new(http.Request)\n\t*req = *origReq\n\tif origReq.URL != nil {\n\t\tnewURL := new(url.URL)\n\t\t*newURL = *origReq.URL\n\t\tif origReq.URL.User != nil {\n\t\t\tnewURL.User = new(url.Userinfo)\n\t\t\t*newURL.User = *origReq.URL.User\n\t\t}\n\t\t// sanitize the request URL; we expect it to not contain the\n\t\t// scheme and host since those should be determined by r.TLS\n\t\t// and r.Host respectively, but some clients may include it\n\t\t// in the request-line, which is technically valid in HTTP,\n\t\t// but breaks reverseproxy behaviour, overriding how the\n\t\t// dialer will behave. See #4237 for context.\n\t\tnewURL.Scheme = \"\"\n\t\tnewURL.Host = \"\"\n\t\treq.URL = newURL\n\t}\n\tif origReq.Header != nil {\n\t\treq.Header = origReq.Header.Clone()\n\t}\n\tif origReq.Trailer != nil {\n\t\treq.Trailer = origReq.Trailer.Clone()\n\t}\n\treturn req\n}\n\nfunc copyHeader(dst, src http.Header) {\n\tfor k, vv := range src {\n\t\tfor _, v := range vv {\n\t\t\tdst.Add(k, v)\n\t\t}\n\t}\n}\n\n// allHeaderValues gets all values for a given header field,\n// joined by a comma and space if more than one is set. If the\n// header field is nil, then the omit is true, meaning some\n// earlier logic in the server wanted to prevent this header from\n// getting written at all. If the header is empty, then ok is\n// false. Callers should still check that the value is not empty\n// (the header field may be set but have an empty value).\nfunc allHeaderValues(h http.Header, field string) (value string, ok bool, omit bool) {\n\tvalues, ok := h[http.CanonicalHeaderKey(field)]\n\tif ok && values == nil {\n\t\treturn \"\", true, true\n\t}\n\tif len(values) == 0 {\n\t\treturn \"\", false, false\n\t}\n\treturn strings.Join(values, \", \"), true, false\n}\n\n// lastHeaderValue gets the last value for a given header field\n// if more than one is set. 
If the header field is nil, then\n// the omit is true, meaning some earlier logic in the server\n// wanted to prevent this header from getting written at all.\n// If the header is empty, then ok is false. Callers should\n// still check that the value is not empty (the header field\n// may be set but have an empty value).\nfunc lastHeaderValue(h http.Header, field string) (value string, ok bool, omit bool) {\n\tvalues, ok := h[http.CanonicalHeaderKey(field)]\n\tif ok && values == nil {\n\t\treturn \"\", true, true\n\t}\n\tif len(values) == 0 {\n\t\treturn \"\", false, false\n\t}\n\treturn values[len(values)-1], true, false\n}\n\nfunc upgradeType(h http.Header) string {\n\tif !httpguts.HeaderValuesContainsToken(h[\"Connection\"], \"Upgrade\") {\n\t\treturn \"\"\n\t}\n\treturn strings.ToLower(h.Get(\"Upgrade\"))\n}\n\n// removeConnectionHeaders removes hop-by-hop headers listed in the \"Connection\" header of h.\n// See RFC 7230, section 6.1\nfunc removeConnectionHeaders(h http.Header) {\n\tfor _, f := range h[\"Connection\"] {\n\t\tfor sf := range strings.SplitSeq(f, \",\") {\n\t\t\tif sf = textproto.TrimString(sf); sf != \"\" {\n\t\t\t\th.Del(sf)\n\t\t\t}\n\t\t}\n\t}\n}\n\n// statusError returns an error value that has a status code.\nfunc statusError(err error) error {\n\t// errors proxying usually mean there is a problem with the upstream(s)\n\tstatusCode := http.StatusBadGateway\n\n\t// timeout errors have a standard status code (see issue #4823)\n\tif err, ok := err.(net.Error); ok && err.Timeout() {\n\t\tstatusCode = http.StatusGatewayTimeout\n\t}\n\n\t// if the client canceled the request (usually this means they closed\n\t// the connection, so they won't see any response), we can report it\n\t// as a client error (4xx) and not a server error (5xx); unfortunately\n\t// the Go standard library, at least at time of writing in late 2020,\n\t// obnoxiously wraps the exported, standard context.Canceled error with\n\t// an unexported garbage value that we have to do 
a substring check for:\n\t// https://github.com/golang/go/blob/6965b01ea248cabb70c3749fd218b36089a21efb/src/net/net.go#L416-L430\n\tif errors.Is(err, context.Canceled) || strings.Contains(err.Error(), \"operation was canceled\") {\n\t\t// regrettably, there is no standard error code for \"client closed connection\", but\n\t\t// for historical reasons we can use a code that a lot of people are already using;\n\t\t// using 5xx is problematic for users; see #3748\n\t\tstatusCode = 499\n\t}\n\treturn caddyhttp.Error(statusCode, err)\n}\n\n// LoadBalancing has parameters related to load balancing.\ntype LoadBalancing struct {\n\t// A selection policy is how to choose an available backend.\n\t// The default policy is random selection.\n\tSelectionPolicyRaw json.RawMessage `json:\"selection_policy,omitempty\" caddy:\"namespace=http.reverse_proxy.selection_policies inline_key=policy\"`\n\n\t// How many times to retry selecting available backends for each\n\t// request if the next available host is down. If try_duration is\n\t// also configured, then retries may stop early if the duration\n\t// is reached. By default, retries are disabled (zero).\n\tRetries int `json:\"retries,omitempty\"`\n\n\t// How long to try selecting available backends for each request\n\t// if the next available host is down. Clients will wait for up\n\t// to this long while the load balancer tries to find an available\n\t// upstream host. If retries is also configured, tries may stop\n\t// early if the maximum retries is reached. By default, retries\n\t// are disabled (zero duration).\n\tTryDuration caddy.Duration `json:\"try_duration,omitempty\"`\n\n\t// How long to wait between selecting the next host from the pool.\n\t// Default is 250ms if try_duration is enabled, otherwise zero. Only\n\t// relevant when a request to an upstream host fails. 
Be aware that\n\t// setting this to 0 with a non-zero try_duration can cause the CPU\n\t// to spin if all backends are down and latency is very low.\n\tTryInterval caddy.Duration `json:\"try_interval,omitempty\"`\n\n\t// A list of matcher sets that restricts with which requests retries are\n\t// allowed. A request must match any of the given matcher sets in order\n\t// to be retried if the connection to the upstream succeeded but the\n\t// subsequent round-trip failed. If the connection to the upstream failed,\n\t// a retry is always allowed. If unspecified, only GET requests will be\n\t// allowed to be retried. Note that a retry is done with the next available\n\t// host according to the load balancing policy.\n\tRetryMatchRaw caddyhttp.RawMatcherSets `json:\"retry_match,omitempty\" caddy:\"namespace=http.matchers\"`\n\n\tSelectionPolicy Selector              `json:\"-\"`\n\tRetryMatch      caddyhttp.MatcherSets `json:\"-\"`\n}\n\n// Selector selects an available upstream from the pool.\ntype Selector interface {\n\tSelect(UpstreamPool, *http.Request, http.ResponseWriter) *Upstream\n}\n\n// UpstreamSource gets the list of upstreams that can be used when\n// proxying a request. Returned upstreams will be load balanced and\n// health-checked. This should be a very fast function -- instant\n// if possible -- and the return value must be as stable as possible.\n// In other words, the list of upstreams should ideally not change much\n// across successive calls. If the list of upstreams changes or the\n// ordering is not stable, load balancing will suffer. This function\n// may be called during each retry, multiple times per request, and as\n// such, needs to be instantaneous. The returned slice will not be\n// modified.\ntype UpstreamSource interface {\n\tGetUpstreams(*http.Request) ([]*Upstream, error)\n}\n\n// Hop-by-hop headers. 
These are removed when sent to the backend.\n// As of RFC 7230, hop-by-hop headers are required to appear in the\n// Connection header field. These are the headers defined by the\n// obsoleted RFC 2616 (section 13.5.1) and are used for backward\n// compatibility.\nvar hopHeaders = []string{\n\t\"Alt-Svc\",\n\t\"Connection\",\n\t\"Proxy-Connection\", // non-standard but still sent by libcurl and rejected by e.g. google\n\t\"Keep-Alive\",\n\t\"Proxy-Authenticate\",\n\t\"Proxy-Authorization\",\n\t\"Te\",      // canonicalized version of \"TE\"\n\t\"Trailer\", // not Trailers per URL above; https://www.rfc-editor.org/errata_search.php?eid=4522\n\t\"Transfer-Encoding\",\n\t\"Upgrade\",\n}\n\n// DialError is an error that specifically occurs\n// in a call to Dial or DialContext.\ntype DialError struct{ error }\n\n// TLSTransport is implemented by transports\n// that are capable of using TLS.\ntype TLSTransport interface {\n\t// TLSEnabled returns true if the transport\n\t// has TLS enabled, false otherwise.\n\tTLSEnabled() bool\n\n\t// EnableTLS enables TLS within the transport\n\t// if it is not already, using the provided\n\t// value as a basis for the TLS config.\n\tEnableTLS(base *TLSConfig) error\n}\n\n// H2CTransport is implemented by transports\n// that are capable of using h2c.\ntype H2CTransport interface {\n\tEnableH2C() error\n}\n\n// ProxyProtocolTransport is implemented by transports\n// that are capable of using proxy protocol.\ntype ProxyProtocolTransport interface {\n\tProxyProtocolEnabled() bool\n}\n\n// HealthCheckSchemeOverriderTransport is implemented by transports\n// that can override the scheme used for health checks.\ntype HealthCheckSchemeOverriderTransport interface {\n\tOverrideHealthCheckScheme(base *url.URL, port string)\n}\n\n// BufferedTransport is implemented by transports\n// that needs to buffer requests and/or responses.\ntype BufferedTransport interface {\n\t// DefaultBufferSizes returns the default buffer sizes\n\t// for requests and 
responses, respectively if buffering isn't enabled.\n\tDefaultBufferSizes() (int64, int64)\n}\n\n// RequestHeaderOpsTransport may be implemented by a transport to provide\n// header operations to apply to requests immediately before the RoundTrip.\n// For example, overriding the default Host when TLS is enabled.\ntype RequestHeaderOpsTransport interface {\n\t// RequestHeaderOps allows a transport to provide header operations\n\t// to apply to the request. The transport is asked at provision time\n\t// to return a HeaderOps (or nil) that will be applied before\n\t// user-configured header ops.\n\tRequestHeaderOps() *headers.HeaderOps\n}\n\n// roundtripSucceededError is an error type that is returned if the\n// roundtrip succeeded, but an error occurred after-the-fact.\ntype roundtripSucceededError struct{ error }\n\n// bodyReadCloser is a reader that, upon closing, will return\n// its buffer to the pool and close the underlying body reader.\ntype bodyReadCloser struct {\n\tio.Reader\n\tbuf  *bytes.Buffer\n\tbody io.ReadCloser\n}\n\nfunc (brc bodyReadCloser) Close() error {\n\t// Inside this package this will be set to nil for fully-buffered\n\t// requests due to the possibility of retrial.\n\tif brc.buf != nil {\n\t\tbufPool.Put(brc.buf)\n\t}\n\t// For fully-buffered bodies, body is nil, so Close is a no-op.\n\tif brc.body != nil {\n\t\treturn brc.body.Close()\n\t}\n\treturn nil\n}\n\n// bufPool is used for buffering requests and responses.\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n\n// handleResponseContext carries some contextual information about the\n// current proxy handling.\ntype handleResponseContext struct {\n\t// handler is the active proxy handler instance, so that\n\t// routes like copy_response may inherit some config\n\t// options and have access to handler methods.\n\thandler *Handler\n\n\t// response is the actual response received from the proxy\n\t// roundtrip, to potentially be copied if a 
copy_response\n\t// handler is in the handle_response routes.\n\tresponse *http.Response\n\n\t// start is the time just before the proxy roundtrip was\n\t// performed, used for logging.\n\tstart time.Time\n\n\t// logger is the prepared logger which is used to write logs\n\t// with the request, duration, and selected upstream attached.\n\tlogger *zap.Logger\n\n\t// isFinalized is whether the response has been finalized,\n\t// i.e. copied and closed, to make sure that it doesn't\n\t// happen twice.\n\tisFinalized bool\n}\n\n// proxyHandleResponseContextCtxKey is the context key for the active proxy handler\n// so that handle_response routes can inherit some config options\n// from the proxy handler.\nconst proxyHandleResponseContextCtxKey caddy.CtxKey = \"reverse_proxy_handle_response_context\"\n\n// errNoUpstream occurs when there are no upstreams available.\nvar errNoUpstream = fmt.Errorf(\"no upstreams available\")\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Handler)(nil)\n\t_ caddy.CleanerUpper          = (*Handler)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Handler)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/selectionpolicies.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"crypto/hmac\"\n\t\"crypto/sha256\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"fmt\"\n\tweakrand \"math/rand/v2\"\n\t\"net\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"github.com/cespare/xxhash/v2\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(RandomSelection{})\n\tcaddy.RegisterModule(RandomChoiceSelection{})\n\tcaddy.RegisterModule(LeastConnSelection{})\n\tcaddy.RegisterModule(RoundRobinSelection{})\n\tcaddy.RegisterModule(WeightedRoundRobinSelection{})\n\tcaddy.RegisterModule(FirstSelection{})\n\tcaddy.RegisterModule(IPHashSelection{})\n\tcaddy.RegisterModule(ClientIPHashSelection{})\n\tcaddy.RegisterModule(URIHashSelection{})\n\tcaddy.RegisterModule(QueryHashSelection{})\n\tcaddy.RegisterModule(HeaderHashSelection{})\n\tcaddy.RegisterModule(CookieHashSelection{})\n}\n\n// RandomSelection is a policy that selects\n// an available host at random.\ntype RandomSelection struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (RandomSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  
\"http.reverse_proxy.selection_policies.random\",\n\t\tNew: func() caddy.Module { return new(RandomSelection) },\n\t}\n}\n\n// Select returns an available host, if any.\nfunc (r RandomSelection) Select(pool UpstreamPool, request *http.Request, _ http.ResponseWriter) *Upstream {\n\treturn selectRandomHost(pool)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *RandomSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// WeightedRoundRobinSelection is a policy that selects\n// a host based on weighted round-robin ordering.\ntype WeightedRoundRobinSelection struct {\n\t// The weight of each upstream in order,\n\t// corresponding with the list of upstreams configured.\n\tWeights     []int `json:\"weights,omitempty\"`\n\tindex       uint32\n\ttotalWeight int\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (WeightedRoundRobinSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"http.reverse_proxy.selection_policies.weighted_round_robin\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(WeightedRoundRobinSelection)\n\t\t},\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *WeightedRoundRobinSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\n\targs := d.RemainingArgs()\n\tif len(args) == 0 {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor _, weight := range args {\n\t\tweightInt, err := strconv.Atoi(weight)\n\t\tif err != nil {\n\t\t\treturn d.Errf(\"invalid weight value '%s': %v\", weight, err)\n\t\t}\n\t\tif weightInt < 0 {\n\t\t\treturn d.Errf(\"invalid weight value '%s': weight should be non-negative\", weight)\n\t\t}\n\t\tr.Weights = append(r.Weights, weightInt)\n\t}\n\treturn nil\n}\n\n// Provision sets up r.\nfunc (r *WeightedRoundRobinSelection) Provision(ctx caddy.Context) error {\n\tfor _, weight := range 
r.Weights {\n\t\tr.totalWeight += weight\n\t}\n\treturn nil\n}\n\n// Select returns an available host, if any.\nfunc (r *WeightedRoundRobinSelection) Select(pool UpstreamPool, _ *http.Request, _ http.ResponseWriter) *Upstream {\n\tif len(pool) == 0 {\n\t\treturn nil\n\t}\n\tif len(r.Weights) < 2 {\n\t\treturn pool[0]\n\t}\n\tvar index, totalWeight int\n\tvar weights []int\n\n\tfor _, w := range r.Weights {\n\t\tif w > 0 {\n\t\t\tweights = append(weights, w)\n\t\t}\n\t}\n\tcurrentWeight := int(atomic.AddUint32(&r.index, 1)) % r.totalWeight\n\tfor i, weight := range weights {\n\t\ttotalWeight += weight\n\t\tif currentWeight < totalWeight {\n\t\t\tindex = i\n\t\t\tbreak\n\t\t}\n\t}\n\n\tupstreams := make([]*Upstream, 0, len(weights))\n\tfor i, upstream := range pool {\n\t\tif !upstream.Available() || r.Weights[i] == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tupstreams = append(upstreams, upstream)\n\t\tif len(upstreams) == cap(upstreams) {\n\t\t\tbreak\n\t\t}\n\t}\n\tif len(upstreams) == 0 {\n\t\treturn nil\n\t}\n\treturn upstreams[index%len(upstreams)]\n}\n\n// RandomChoiceSelection is a policy that selects\n// two or more available hosts at random, then\n// chooses the one with the least load.\ntype RandomChoiceSelection struct {\n\t// The size of the sub-pool created from the larger upstream pool. 
The default value\n\t// is 2 and the maximum at selection time is the size of the upstream pool.\n\tChoose int `json:\"choose,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (RandomChoiceSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.random_choose\",\n\t\tNew: func() caddy.Module { return new(RandomChoiceSelection) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *RandomChoiceSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tchooseStr := d.Val()\n\tchoose, err := strconv.Atoi(chooseStr)\n\tif err != nil {\n\t\treturn d.Errf(\"invalid choice value '%s': %v\", chooseStr, err)\n\t}\n\tr.Choose = choose\n\treturn nil\n}\n\n// Provision sets up r.\nfunc (r *RandomChoiceSelection) Provision(ctx caddy.Context) error {\n\tif r.Choose == 0 {\n\t\tr.Choose = 2\n\t}\n\treturn nil\n}\n\n// Validate ensures that r's configuration is valid.\nfunc (r RandomChoiceSelection) Validate() error {\n\tif r.Choose < 2 {\n\t\treturn fmt.Errorf(\"choose must be at least 2\")\n\t}\n\treturn nil\n}\n\n// Select returns an available host, if any.\nfunc (r RandomChoiceSelection) Select(pool UpstreamPool, _ *http.Request, _ http.ResponseWriter) *Upstream {\n\tk := min(r.Choose, len(pool))\n\tchoices := make([]*Upstream, k)\n\tfor i, upstream := range pool {\n\t\tif !upstream.Available() {\n\t\t\tcontinue\n\t\t}\n\t\tj := weakrand.IntN(i + 1) //nolint:gosec\n\t\tif j < k {\n\t\t\tchoices[j] = upstream\n\t\t}\n\t}\n\treturn leastRequests(choices)\n}\n\n// LeastConnSelection is a policy that selects the\n// host with the least active requests. If multiple\n// hosts have the same fewest number, one is chosen\n// randomly. 
The term \"conn\" or \"connection\" is used\n// in this policy name due to its similar meaning in\n// other software, but our load balancer actually\n// counts active requests rather than connections,\n// since these days requests are multiplexed onto\n// shared connections.\ntype LeastConnSelection struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (LeastConnSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.least_conn\",\n\t\tNew: func() caddy.Module { return new(LeastConnSelection) },\n\t}\n}\n\n// Select selects the up host with the least number of connections in the\n// pool. If more than one host has the same least number of connections,\n// one of the hosts is chosen at random.\nfunc (LeastConnSelection) Select(pool UpstreamPool, _ *http.Request, _ http.ResponseWriter) *Upstream {\n\tvar bestHost *Upstream\n\tvar count int\n\tleastReqs := -1\n\n\tfor _, host := range pool {\n\t\tif !host.Available() {\n\t\t\tcontinue\n\t\t}\n\t\tnumReqs := host.NumRequests()\n\t\tif leastReqs == -1 || numReqs < leastReqs {\n\t\t\tleastReqs = numReqs\n\t\t\tcount = 0\n\t\t}\n\n\t\t// among hosts with same least connections, perform a reservoir\n\t\t// sample: https://en.wikipedia.org/wiki/Reservoir_sampling\n\t\tif numReqs == leastReqs {\n\t\t\tcount++\n\t\t\tif count == 1 || weakrand.IntN(count) == 0 { //nolint:gosec\n\t\t\t\tbestHost = host\n\t\t\t}\n\t\t}\n\t}\n\n\treturn bestHost\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *LeastConnSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// RoundRobinSelection is a policy that selects\n// a host based on round-robin ordering.\ntype RoundRobinSelection struct {\n\trobin uint32\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (RoundRobinSelection) CaddyModule() caddy.ModuleInfo 
{\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.round_robin\",\n\t\tNew: func() caddy.Module { return new(RoundRobinSelection) },\n\t}\n}\n\n// Select returns an available host, if any.\nfunc (r *RoundRobinSelection) Select(pool UpstreamPool, _ *http.Request, _ http.ResponseWriter) *Upstream {\n\tn := uint32(len(pool))\n\tif n == 0 {\n\t\treturn nil\n\t}\n\tfor range n {\n\t\trobin := atomic.AddUint32(&r.robin, 1)\n\t\thost := pool[robin%n]\n\t\tif host.Available() {\n\t\t\treturn host\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *RoundRobinSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// FirstSelection is a policy that selects\n// the first available host.\ntype FirstSelection struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (FirstSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.first\",\n\t\tNew: func() caddy.Module { return new(FirstSelection) },\n\t}\n}\n\n// Select returns an available host, if any.\nfunc (FirstSelection) Select(pool UpstreamPool, _ *http.Request, _ http.ResponseWriter) *Upstream {\n\tfor _, host := range pool {\n\t\tif host.Available() {\n\t\t\treturn host\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *FirstSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// IPHashSelection is a policy that selects a host\n// based on hashing the remote IP of the request.\ntype IPHashSelection struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (IPHashSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  
\"http.reverse_proxy.selection_policies.ip_hash\",\n\t\tNew: func() caddy.Module { return new(IPHashSelection) },\n\t}\n}\n\n// Select returns an available host, if any.\nfunc (IPHashSelection) Select(pool UpstreamPool, req *http.Request, _ http.ResponseWriter) *Upstream {\n\tclientIP, _, err := net.SplitHostPort(req.RemoteAddr)\n\tif err != nil {\n\t\tclientIP = req.RemoteAddr\n\t}\n\treturn hostByHashing(pool, clientIP)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *IPHashSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// ClientIPHashSelection is a policy that selects a host\n// based on hashing the client IP of the request, as determined\n// by the HTTP app's trusted proxies settings.\ntype ClientIPHashSelection struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (ClientIPHashSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.client_ip_hash\",\n\t\tNew: func() caddy.Module { return new(ClientIPHashSelection) },\n\t}\n}\n\n// Select returns an available host, if any.\nfunc (ClientIPHashSelection) Select(pool UpstreamPool, req *http.Request, _ http.ResponseWriter) *Upstream {\n\taddress := caddyhttp.GetVar(req.Context(), caddyhttp.ClientIPVarKey).(string)\n\tclientIP, _, err := net.SplitHostPort(address)\n\tif err != nil {\n\t\tclientIP = address // no port\n\t}\n\treturn hostByHashing(pool, clientIP)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *ClientIPHashSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// URIHashSelection is a policy that selects a\n// host by hashing the request URI.\ntype URIHashSelection struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc 
(URIHashSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.uri_hash\",\n\t\tNew: func() caddy.Module { return new(URIHashSelection) },\n\t}\n}\n\n// Select returns an available host, if any.\nfunc (URIHashSelection) Select(pool UpstreamPool, req *http.Request, _ http.ResponseWriter) *Upstream {\n\treturn hostByHashing(pool, req.RequestURI)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (r *URIHashSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// QueryHashSelection is a policy that selects\n// a host based on a given request query parameter.\ntype QueryHashSelection struct {\n\t// The query key whose value is to be hashed and used for upstream selection.\n\tKey string `json:\"key,omitempty\"`\n\n\t// The fallback policy to use if the query key is not present. Defaults to `random`.\n\tFallbackRaw json.RawMessage `json:\"fallback,omitempty\" caddy:\"namespace=http.reverse_proxy.selection_policies inline_key=policy\"`\n\tfallback    Selector\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (QueryHashSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.query\",\n\t\tNew: func() caddy.Module { return new(QueryHashSelection) },\n\t}\n}\n\n// Provision sets up the module.\nfunc (s *QueryHashSelection) Provision(ctx caddy.Context) error {\n\tif s.Key == \"\" {\n\t\treturn fmt.Errorf(\"query key is required\")\n\t}\n\tif s.FallbackRaw == nil {\n\t\ts.FallbackRaw = caddyconfig.JSONModuleObject(RandomSelection{}, \"policy\", \"random\", nil)\n\t}\n\tmod, err := ctx.LoadModule(s, \"FallbackRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading fallback selection policy: %s\", err)\n\t}\n\ts.fallback = mod.(Selector)\n\treturn nil\n}\n\n// Select returns an available host, if 
any.\nfunc (s QueryHashSelection) Select(pool UpstreamPool, req *http.Request, _ http.ResponseWriter) *Upstream {\n\t// Since the query may have multiple values for the same key,\n\t// we'll join them to avoid a problem where the user can control\n\t// the upstream that the request goes to by sending multiple values\n\t// for the same key, when the upstream only considers the first value.\n\t// Keep in mind that a client changing the order of the values may\n\t// affect which upstream is selected, but this is a semantically\n\t// different request, because the order of the values is significant.\n\tvals := strings.Join(req.URL.Query()[s.Key], \",\")\n\tif vals == \"\" {\n\t\treturn s.fallback.Select(pool, req, nil)\n\t}\n\treturn hostByHashing(pool, vals)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (s *QueryHashSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\ts.Key = d.Val()\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"fallback\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif s.FallbackRaw != nil {\n\t\t\t\treturn d.Err(\"fallback selection policy already specified\")\n\t\t\t}\n\t\t\tmod, err := loadFallbackPolicy(d)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\ts.FallbackRaw = mod\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// HeaderHashSelection is a policy that selects\n// a host based on a given request header.\ntype HeaderHashSelection struct {\n\t// The HTTP header field whose value is to be hashed and used for upstream selection.\n\tField string `json:\"field,omitempty\"`\n\n\t// The fallback policy to use if the header is not present. 
Defaults to `random`.\n\tFallbackRaw json.RawMessage `json:\"fallback,omitempty\" caddy:\"namespace=http.reverse_proxy.selection_policies inline_key=policy\"`\n\tfallback    Selector\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (HeaderHashSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.header\",\n\t\tNew: func() caddy.Module { return new(HeaderHashSelection) },\n\t}\n}\n\n// Provision sets up the module.\nfunc (s *HeaderHashSelection) Provision(ctx caddy.Context) error {\n\tif s.Field == \"\" {\n\t\treturn fmt.Errorf(\"header field is required\")\n\t}\n\tif s.FallbackRaw == nil {\n\t\ts.FallbackRaw = caddyconfig.JSONModuleObject(RandomSelection{}, \"policy\", \"random\", nil)\n\t}\n\tmod, err := ctx.LoadModule(s, \"FallbackRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading fallback selection policy: %s\", err)\n\t}\n\ts.fallback = mod.(Selector)\n\treturn nil\n}\n\n// Select returns an available host, if any.\nfunc (s HeaderHashSelection) Select(pool UpstreamPool, req *http.Request, _ http.ResponseWriter) *Upstream {\n\t// The Host header should be obtained from the req.Host field\n\t// since net/http removes it from the header map.\n\tif s.Field == \"Host\" && req.Host != \"\" {\n\t\treturn hostByHashing(pool, req.Host)\n\t}\n\n\tval := req.Header.Get(s.Field)\n\tif val == \"\" {\n\t\treturn s.fallback.Select(pool, req, nil)\n\t}\n\treturn hostByHashing(pool, val)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (s *HeaderHashSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume policy name\n\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\ts.Field = d.Val()\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"fallback\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif s.FallbackRaw != nil {\n\t\t\t\treturn d.Err(\"fallback selection policy already 
specified\")\n\t\t\t}\n\t\t\tmod, err := loadFallbackPolicy(d)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\ts.FallbackRaw = mod\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// CookieHashSelection is a policy that selects\n// a host based on a given cookie name.\ntype CookieHashSelection struct {\n\t// The HTTP cookie name whose value is to be hashed and used for upstream selection.\n\tName string `json:\"name,omitempty\"`\n\t// Secret to hash (Hmac256) chosen upstream in cookie\n\tSecret string `json:\"secret,omitempty\"` //nolint:gosec // yes it's exported because it needs to encode to JSON\n\t// The cookie's Max-Age before it expires. Default is no expiry.\n\tMaxAge caddy.Duration `json:\"max_age,omitempty\"`\n\n\t// The fallback policy to use if the cookie is not present. Defaults to `random`.\n\tFallbackRaw json.RawMessage `json:\"fallback,omitempty\" caddy:\"namespace=http.reverse_proxy.selection_policies inline_key=policy\"`\n\tfallback    Selector\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (CookieHashSelection) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.selection_policies.cookie\",\n\t\tNew: func() caddy.Module { return new(CookieHashSelection) },\n\t}\n}\n\n// Provision sets up the module.\nfunc (s *CookieHashSelection) Provision(ctx caddy.Context) error {\n\tif s.Name == \"\" {\n\t\ts.Name = \"lb\"\n\t}\n\tif s.FallbackRaw == nil {\n\t\ts.FallbackRaw = caddyconfig.JSONModuleObject(RandomSelection{}, \"policy\", \"random\", nil)\n\t}\n\tmod, err := ctx.LoadModule(s, \"FallbackRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading fallback selection policy: %s\", err)\n\t}\n\ts.fallback = mod.(Selector)\n\treturn nil\n}\n\n// Select returns an available host, if any.\nfunc (s CookieHashSelection) Select(pool UpstreamPool, req *http.Request, w http.ResponseWriter) *Upstream {\n\t// selects a new Host using the 
fallback policy (typically random)\n\t// and writes a sticky session cookie to the response.\n\tselectNewHost := func() *Upstream {\n\t\tupstream := s.fallback.Select(pool, req, w)\n\t\tif upstream == nil {\n\t\t\treturn nil\n\t\t}\n\t\tsha, err := hashCookie(s.Secret, upstream.Dial)\n\t\tif err != nil {\n\t\t\treturn upstream\n\t\t}\n\t\tcookie := &http.Cookie{\n\t\t\tName:   s.Name,\n\t\t\tValue:  sha,\n\t\t\tPath:   \"/\",\n\t\t\tSecure: false,\n\t\t}\n\t\tisProxyHttps := false\n\t\tif trusted, ok := caddyhttp.GetVar(req.Context(), caddyhttp.TrustedProxyVarKey).(bool); ok && trusted {\n\t\t\txfp, xfpOk, _ := lastHeaderValue(req.Header, \"X-Forwarded-Proto\")\n\t\t\tisProxyHttps = xfpOk && xfp == \"https\"\n\t\t}\n\t\tif req.TLS != nil || isProxyHttps {\n\t\t\tcookie.Secure = true\n\t\t\tcookie.SameSite = http.SameSiteNoneMode\n\t\t}\n\t\tif s.MaxAge > 0 {\n\t\t\tcookie.MaxAge = int(time.Duration(s.MaxAge).Seconds())\n\t\t}\n\t\thttp.SetCookie(w, cookie)\n\t\treturn upstream\n\t}\n\n\tcookie, err := req.Cookie(s.Name)\n\t// If there's no cookie, select a host using the fallback policy\n\tif err != nil || cookie == nil {\n\t\treturn selectNewHost()\n\t}\n\t// If the cookie is present, loop over the available upstreams until we find a match\n\tcookieValue := cookie.Value\n\tfor _, upstream := range pool {\n\t\tif !upstream.Available() {\n\t\t\tcontinue\n\t\t}\n\t\tsha, err := hashCookie(s.Secret, upstream.Dial)\n\t\tif err == nil && sha == cookieValue {\n\t\t\treturn upstream\n\t\t}\n\t}\n\t// If there is no matching host, select a host using the fallback policy\n\treturn selectNewHost()\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. 
Syntax:\n//\n//\tlb_policy cookie [<name> [<secret>]] {\n//\t\tfallback <policy>\n//\t\tmax_age <duration>\n//\t}\n//\n// By default name is `lb`\nfunc (s *CookieHashSelection) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\targs := d.RemainingArgs()\n\tswitch len(args) {\n\tcase 1:\n\tcase 2:\n\t\ts.Name = args[1]\n\tcase 3:\n\t\ts.Name = args[1]\n\t\ts.Secret = args[2]\n\tdefault:\n\t\treturn d.ArgErr()\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"fallback\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif s.FallbackRaw != nil {\n\t\t\t\treturn d.Err(\"fallback selection policy already specified\")\n\t\t\t}\n\t\t\tmod, err := loadFallbackPolicy(d)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\ts.FallbackRaw = mod\n\t\tcase \"max_age\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif s.MaxAge != 0 {\n\t\t\t\treturn d.Err(\"cookie max_age already specified\")\n\t\t\t}\n\t\t\tmaxAge, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid duration: %s\", d.Val())\n\t\t\t}\n\t\t\tif maxAge <= 0 {\n\t\t\t\treturn d.Errf(\"invalid duration: %s, max_age should be non-zero and positive\", d.Val())\n\t\t\t}\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ts.MaxAge = caddy.Duration(maxAge)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized option '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// hashCookie hashes (HMAC 256) some data with the secret\nfunc hashCookie(secret string, data string) (string, error) {\n\th := hmac.New(sha256.New, []byte(secret))\n\t_, err := h.Write([]byte(data))\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn hex.EncodeToString(h.Sum(nil)), nil\n}\n\n// selectRandomHost returns a random available host\nfunc selectRandomHost(pool []*Upstream) *Upstream {\n\t// use reservoir sampling because the number of available\n\t// hosts isn't known: https://en.wikipedia.org/wiki/Reservoir_sampling\n\tvar randomHost 
*Upstream\n\tvar count int\n\tfor _, upstream := range pool {\n\t\tif !upstream.Available() {\n\t\t\tcontinue\n\t\t}\n\t\t// (n % 1 == 0) holds for all n, therefore an\n\t\t// upstream will always be chosen if there is at\n\t\t// least one available\n\t\tcount++\n\t\tif weakrand.IntN(count) == 0 { //nolint:gosec\n\t\t\trandomHost = upstream\n\t\t}\n\t}\n\treturn randomHost\n}\n\n// leastRequests returns the host with the\n// least number of active requests to it.\n// If more than one host has the same\n// least number of active requests, then\n// one of those is chosen at random.\nfunc leastRequests(upstreams []*Upstream) *Upstream {\n\tif len(upstreams) == 0 {\n\t\treturn nil\n\t}\n\tvar best []*Upstream\n\tbestReqs := -1\n\tfor _, upstream := range upstreams {\n\t\tif upstream == nil {\n\t\t\tcontinue\n\t\t}\n\t\treqs := upstream.NumRequests()\n\t\tif reqs == 0 {\n\t\t\treturn upstream\n\t\t}\n\t\t// If bestReqs was just initialized to -1\n\t\t// we need to append upstream also\n\t\tif reqs <= bestReqs || bestReqs == -1 {\n\t\t\tbestReqs = reqs\n\t\t\tbest = append(best, upstream)\n\t\t}\n\t}\n\tif len(best) == 0 {\n\t\treturn nil\n\t}\n\tif len(best) == 1 {\n\t\treturn best[0]\n\t}\n\treturn best[weakrand.IntN(len(best))] //nolint:gosec\n}\n\n// hostByHashing returns an available host from pool based on a hashable string s.\nfunc hostByHashing(pool []*Upstream, s string) *Upstream {\n\t// Highest Random Weight (HRW, or \"Rendezvous\") hashing,\n\t// guarantees stability when the list of upstreams changes;\n\t// see https://medium.com/i0exception/rendezvous-hashing-8c00e2fb58b0,\n\t// https://randorithms.com/2020/12/26/rendezvous-hashing.html,\n\t// and https://en.wikipedia.org/wiki/Rendezvous_hashing.\n\tvar highestHash uint64\n\tvar upstream *Upstream\n\tfor _, up := range pool {\n\t\tif !up.Available() {\n\t\t\tcontinue\n\t\t}\n\t\th := hash(up.String() + s) // important to hash key and server together\n\t\tif h > highestHash {\n\t\t\thighestHash = 
h\n\t\t\tupstream = up\n\t\t}\n\t}\n\treturn upstream\n}\n\n// hash calculates a fast hash based on s.\nfunc hash(s string) uint64 {\n\th := xxhash.New()\n\t_, _ = h.Write([]byte(s))\n\treturn h.Sum64()\n}\n\nfunc loadFallbackPolicy(d *caddyfile.Dispenser) (json.RawMessage, error) {\n\tname := d.Val()\n\tmodID := \"http.reverse_proxy.selection_policies.\" + name\n\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tsel, ok := unm.(Selector)\n\tif !ok {\n\t\treturn nil, d.Errf(\"module %s (%T) is not a reverseproxy.Selector\", modID, unm)\n\t}\n\treturn caddyconfig.JSONModuleObject(sel, \"policy\", name, nil), nil\n}\n\n// Interface guards\nvar (\n\t_ Selector = (*RandomSelection)(nil)\n\t_ Selector = (*RandomChoiceSelection)(nil)\n\t_ Selector = (*LeastConnSelection)(nil)\n\t_ Selector = (*RoundRobinSelection)(nil)\n\t_ Selector = (*WeightedRoundRobinSelection)(nil)\n\t_ Selector = (*FirstSelection)(nil)\n\t_ Selector = (*IPHashSelection)(nil)\n\t_ Selector = (*ClientIPHashSelection)(nil)\n\t_ Selector = (*URIHashSelection)(nil)\n\t_ Selector = (*QueryHashSelection)(nil)\n\t_ Selector = (*HeaderHashSelection)(nil)\n\t_ Selector = (*CookieHashSelection)(nil)\n\n\t_ caddy.Validator = (*RandomChoiceSelection)(nil)\n\n\t_ caddy.Provisioner = (*RandomChoiceSelection)(nil)\n\t_ caddy.Provisioner = (*WeightedRoundRobinSelection)(nil)\n\n\t_ caddyfile.Unmarshaler = (*RandomChoiceSelection)(nil)\n\t_ caddyfile.Unmarshaler = (*WeightedRoundRobinSelection)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/selectionpolicies_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage reverseproxy\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc testPool() UpstreamPool {\n\treturn UpstreamPool{\n\t\t{Host: new(Host), Dial: \"0.0.0.1\"},\n\t\t{Host: new(Host), Dial: \"0.0.0.2\"},\n\t\t{Host: new(Host), Dial: \"0.0.0.3\"},\n\t}\n}\n\nfunc TestRoundRobinPolicy(t *testing.T) {\n\tpool := testPool()\n\trrPolicy := RoundRobinSelection{}\n\treq, _ := http.NewRequest(\"GET\", \"/\", nil)\n\n\th := rrPolicy.Select(pool, req, nil)\n\t// First selected host is 1, because counter starts at 0\n\t// and increments before host is selected\n\tif h != pool[1] {\n\t\tt.Error(\"Expected first round robin host to be second host in the pool.\")\n\t}\n\th = rrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected second round robin host to be third host in the pool.\")\n\t}\n\th = rrPolicy.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected third round robin host to be first host in the pool.\")\n\t}\n\t// mark host as down\n\tpool[1].setHealthy(false)\n\th = rrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected to skip down host.\")\n\t}\n\t// mark host as 
up\n\tpool[1].setHealthy(true)\n\n\th = rrPolicy.Select(pool, req, nil)\n\tif h == pool[2] {\n\t\tt.Error(\"Expected to balance evenly among healthy hosts\")\n\t}\n\t// mark host as full\n\tpool[1].countRequest(1)\n\tpool[1].MaxRequests = 1\n\th = rrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected to skip full host.\")\n\t}\n}\n\nfunc TestWeightedRoundRobinPolicy(t *testing.T) {\n\tpool := testPool()\n\twrrPolicy := WeightedRoundRobinSelection{\n\t\tWeights:     []int{3, 2, 1},\n\t\ttotalWeight: 6,\n\t}\n\treq, _ := http.NewRequest(\"GET\", \"/\", nil)\n\n\th := wrrPolicy.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected first weighted round robin host to be first host in the pool.\")\n\t}\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected second weighted round robin host to be first host in the pool.\")\n\t}\n\t// Third selected host is 1, because counter starts at 0\n\t// and increments before host is selected\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected third weighted round robin host to be second host in the pool.\")\n\t}\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected fourth weighted round robin host to be second host in the pool.\")\n\t}\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected fifth weighted round robin host to be third host in the pool.\")\n\t}\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected sixth weighted round robin host to be first host in the pool.\")\n\t}\n\n\t// mark host as down\n\tpool[0].setHealthy(false)\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected to skip down host.\")\n\t}\n\t// mark host as up\n\tpool[0].setHealthy(true)\n\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected to select first host on availability.\")\n\t}\n\t// mark host as 
full\n\tpool[1].countRequest(1)\n\tpool[1].MaxRequests = 1\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected to skip full host.\")\n\t}\n}\n\nfunc TestWeightedRoundRobinPolicyWithZeroWeight(t *testing.T) {\n\tpool := testPool()\n\twrrPolicy := WeightedRoundRobinSelection{\n\t\tWeights:     []int{0, 2, 1},\n\t\ttotalWeight: 3,\n\t}\n\treq, _ := http.NewRequest(\"GET\", \"/\", nil)\n\n\th := wrrPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected first weighted round robin host to be second host in the pool.\")\n\t}\n\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected second weighted round robin host to be third host in the pool.\")\n\t}\n\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected third weighted round robin host to be second host in the pool.\")\n\t}\n\n\t// mark second host as down\n\tpool[1].setHealthy(false)\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected to select the next available host.\")\n\t}\n\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected to select the only available host.\")\n\t}\n\t// mark second host as up\n\tpool[1].setHealthy(true)\n\n\th = wrrPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected to select the second host once it is available again.\")\n\t}\n\n\t// test next select in full cycle\n\texpected := []*Upstream{pool[1], pool[2], pool[1], pool[1], pool[2], pool[1]}\n\tfor i, want := range expected {\n\t\tgot := wrrPolicy.Select(pool, req, nil)\n\t\tif want != got {\n\t\t\tt.Errorf(\"Selection %d: got host[%s], want host[%s]\", i+1, got, want)\n\t\t}\n\t}\n}\n\nfunc TestLeastConnPolicy(t *testing.T) {\n\tpool := testPool()\n\tlcPolicy := LeastConnSelection{}\n\treq, _ := http.NewRequest(\"GET\", \"/\", nil)\n\n\tpool[0].countRequest(10)\n\tpool[1].countRequest(10)\n\th := lcPolicy.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected least 
connection host to be third host.\")\n\t}\n\tpool[2].countRequest(100)\n\th = lcPolicy.Select(pool, req, nil)\n\tif h != pool[0] && h != pool[1] {\n\t\tt.Error(\"Expected least connection host to be first or second host.\")\n\t}\n}\n\nfunc TestIPHashPolicy(t *testing.T) {\n\tpool := testPool()\n\tipHash := IPHashSelection{}\n\treq, _ := http.NewRequest(\"GET\", \"/\", nil)\n\n\t// We should be able to predict where every request is routed.\n\treq.RemoteAddr = \"172.0.0.1:80\"\n\th := ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.2:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.3:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.4:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\n\t// we should get the same results without a port\n\treq.RemoteAddr = \"172.0.0.1\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.2\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.3\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.4\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\n\t// we should get a healthy host if the original host is unhealthy and a\n\t// healthy host is available\n\treq.RemoteAddr = 
\"172.0.0.4\"\n\tpool[1].setHealthy(false)\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected ip hash policy host to be the third host.\")\n\t}\n\n\treq.RemoteAddr = \"172.0.0.2\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tpool[1].setHealthy(true)\n\n\treq.RemoteAddr = \"172.0.0.3\"\n\tpool[2].setHealthy(false)\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.4\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\n\t// We should be able to resize the host pool and still be able to predict\n\t// where a req will be routed with the same IP's used above\n\tpool = UpstreamPool{\n\t\t{Host: new(Host), Dial: \"0.0.0.2\"},\n\t\t{Host: new(Host), Dial: \"0.0.0.3\"},\n\t}\n\treq.RemoteAddr = \"172.0.0.1:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.2:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.3:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\treq.RemoteAddr = \"172.0.0.4:80\"\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\n\t// We should get nil when there are no healthy hosts\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(false)\n\th = ipHash.Select(pool, req, nil)\n\tif h != nil {\n\t\tt.Error(\"Expected ip hash policy host to be nil.\")\n\t}\n\n\t// Reproduce #4135\n\tpool = UpstreamPool{\n\t\t{Host: new(Host)},\n\t\t{Host: 
new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t}\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(false)\n\tpool[2].setHealthy(false)\n\tpool[3].setHealthy(false)\n\tpool[4].setHealthy(false)\n\tpool[5].setHealthy(false)\n\tpool[6].setHealthy(false)\n\tpool[7].setHealthy(false)\n\tpool[8].setHealthy(true)\n\n\t// We should get a result back when there is one healthy host left.\n\th = ipHash.Select(pool, req, nil)\n\tif h == nil {\n\t\t// If it is nil, it means we missed a host even though one is available\n\t\tt.Error(\"Expected ip hash policy host to not be nil, but it is nil.\")\n\t}\n}\n\nfunc TestClientIPHashPolicy(t *testing.T) {\n\tpool := testPool()\n\tipHash := ClientIPHashSelection{}\n\treq, _ := http.NewRequest(\"GET\", \"/\", nil)\n\treq = req.WithContext(context.WithValue(req.Context(), caddyhttp.VarsCtxKey, make(map[string]any)))\n\n\t// We should be able to predict where every request is routed.\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.1:80\")\n\th := ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.2:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.3:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.4:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\n\t// we should get the same results without a port\n\tcaddyhttp.SetVar(req.Context(), 
caddyhttp.ClientIPVarKey, \"172.0.0.1\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.2\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.3\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.4\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\n\t// we should get a healthy host if the original host is unhealthy and a\n\t// healthy host is available\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.4\")\n\tpool[1].setHealthy(false)\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected ip hash policy host to be the third host.\")\n\t}\n\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.2\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tpool[1].setHealthy(true)\n\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.3\")\n\tpool[2].setHealthy(false)\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.4\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected ip hash policy host to be the second host.\")\n\t}\n\n\t// We should be able to resize the host pool and still be able to predict\n\t// where a req will be routed with the same IP's used above\n\tpool = 
UpstreamPool{\n\t\t{Host: new(Host), Dial: \"0.0.0.2\"},\n\t\t{Host: new(Host), Dial: \"0.0.0.3\"},\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.1:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.2:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.3:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\tcaddyhttp.SetVar(req.Context(), caddyhttp.ClientIPVarKey, \"172.0.0.4:80\")\n\th = ipHash.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected ip hash policy host to be the first host.\")\n\t}\n\n\t// We should get nil when there are no healthy hosts\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(false)\n\th = ipHash.Select(pool, req, nil)\n\tif h != nil {\n\t\tt.Error(\"Expected ip hash policy host to be nil.\")\n\t}\n\n\t// Reproduce #4135\n\tpool = UpstreamPool{\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t}\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(false)\n\tpool[2].setHealthy(false)\n\tpool[3].setHealthy(false)\n\tpool[4].setHealthy(false)\n\tpool[5].setHealthy(false)\n\tpool[6].setHealthy(false)\n\tpool[7].setHealthy(false)\n\tpool[8].setHealthy(true)\n\n\t// We should get a result back when there is one healthy host left.\n\th = ipHash.Select(pool, req, nil)\n\tif h == nil {\n\t\t// If it is nil, it means we missed a host even though one is available\n\t\tt.Error(\"Expected ip hash policy host to not be nil, but it is 
nil.\")\n\t}\n}\n\nfunc TestFirstPolicy(t *testing.T) {\n\tpool := testPool()\n\tfirstPolicy := FirstSelection{}\n\treq := httptest.NewRequest(http.MethodGet, \"/\", nil)\n\n\th := firstPolicy.Select(pool, req, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected first policy host to be the first host.\")\n\t}\n\n\tpool[0].setHealthy(false)\n\th = firstPolicy.Select(pool, req, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected first policy host to be the second host.\")\n\t}\n}\n\nfunc TestQueryHashPolicy(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\tqueryPolicy := QueryHashSelection{Key: \"foo\"}\n\tif err := queryPolicy.Provision(ctx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tpool := testPool()\n\n\trequest := httptest.NewRequest(http.MethodGet, \"/?foo=1\", nil)\n\th := queryPolicy.Select(pool, request, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected query policy host to be the first host.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/?foo=100000\", nil)\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected query policy host to be the second host.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/?foo=1\", nil)\n\tpool[0].setHealthy(false)\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected query policy host to be the third host.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/?foo=100000\", nil)\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected query policy host to be the second host.\")\n\t}\n\n\t// We should be able to resize the host pool and still be able to predict\n\t// where a request will be routed with the same query used above\n\tpool = UpstreamPool{\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/?foo=1\", nil)\n\th = 
queryPolicy.Select(pool, request, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected query policy host to be the first host.\")\n\t}\n\n\tpool[0].setHealthy(false)\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected query policy host to be the second host.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/?foo=4\", nil)\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected query policy host to be the second host.\")\n\t}\n\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(false)\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != nil {\n\t\tt.Error(\"Expected query policy host to be nil.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/?foo=aa11&foo=bb22\", nil)\n\tpool = testPool()\n\th = queryPolicy.Select(pool, request, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected query policy host to be the first host.\")\n\t}\n}\n\nfunc TestURIHashPolicy(t *testing.T) {\n\tpool := testPool()\n\turiPolicy := URIHashSelection{}\n\n\trequest := httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\th := uriPolicy.Select(pool, request, nil)\n\tif h != pool[2] {\n\t\tt.Error(\"Expected uri policy host to be the third host.\")\n\t}\n\n\tpool[2].setHealthy(false)\n\th = uriPolicy.Select(pool, request, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected uri policy host to be the first host.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/test_2\", nil)\n\th = uriPolicy.Select(pool, request, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected uri policy host to be the first host.\")\n\t}\n\n\t// We should be able to resize the host pool and still be able to predict\n\t// where a request will be routed with the same URIs used above\n\tpool = UpstreamPool{\n\t\t{Host: new(Host)},\n\t\t{Host: new(Host)},\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\th = uriPolicy.Select(pool, request, nil)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected 
uri policy host to be the first host.\")\n\t}\n\n\tpool[0].setHealthy(false)\n\th = uriPolicy.Select(pool, request, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected uri policy host to be the second host.\")\n\t}\n\n\trequest = httptest.NewRequest(http.MethodGet, \"/test_2\", nil)\n\th = uriPolicy.Select(pool, request, nil)\n\tif h != pool[1] {\n\t\tt.Error(\"Expected uri policy host to be the second host.\")\n\t}\n\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(false)\n\th = uriPolicy.Select(pool, request, nil)\n\tif h != nil {\n\t\tt.Error(\"Expected uri policy host to be nil.\")\n\t}\n}\n\nfunc TestLeastRequests(t *testing.T) {\n\tpool := testPool()\n\tpool[0].Dial = \"localhost:8080\"\n\tpool[1].Dial = \"localhost:8081\"\n\tpool[2].Dial = \"localhost:8082\"\n\tpool[0].setHealthy(true)\n\tpool[1].setHealthy(true)\n\tpool[2].setHealthy(true)\n\tpool[0].countRequest(10)\n\tpool[1].countRequest(20)\n\tpool[2].countRequest(30)\n\n\tresult := leastRequests(pool)\n\n\tif result == nil {\n\t\tt.Error(\"Least request should not return nil\")\n\t}\n\n\tif result != pool[0] {\n\t\tt.Error(\"Least request should return pool[0]\")\n\t}\n}\n\nfunc TestRandomChoicePolicy(t *testing.T) {\n\tpool := testPool()\n\tpool[0].Dial = \"localhost:8080\"\n\tpool[1].Dial = \"localhost:8081\"\n\tpool[2].Dial = \"localhost:8082\"\n\tpool[0].setHealthy(false)\n\tpool[1].setHealthy(true)\n\tpool[2].setHealthy(true)\n\tpool[0].countRequest(10)\n\tpool[1].countRequest(20)\n\tpool[2].countRequest(30)\n\n\trequest := httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\trandomChoicePolicy := RandomChoiceSelection{Choose: 2}\n\n\th := randomChoicePolicy.Select(pool, request, nil)\n\n\tif h == nil {\n\t\tt.Error(\"RandomChoicePolicy should not return nil\")\n\t}\n\n\tif h == pool[0] {\n\t\tt.Error(\"RandomChoicePolicy should not choose pool[0]\")\n\t}\n}\n\nfunc TestCookieHashPolicy(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: 
context.Background()})\n\tdefer cancel()\n\tcookieHashPolicy := CookieHashSelection{}\n\tif err := cookieHashPolicy.Provision(ctx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tpool := testPool()\n\tpool[0].Dial = \"localhost:8080\"\n\tpool[1].Dial = \"localhost:8081\"\n\tpool[2].Dial = \"localhost:8082\"\n\tpool[0].setHealthy(true)\n\tpool[1].setHealthy(false)\n\tpool[2].setHealthy(false)\n\trequest := httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\tw := httptest.NewRecorder()\n\n\th := cookieHashPolicy.Select(pool, request, w)\n\tcookieServer1 := w.Result().Cookies()[0]\n\tif cookieServer1 == nil {\n\t\tt.Fatal(\"cookieHashPolicy should set a cookie\")\n\t}\n\tif cookieServer1.Name != \"lb\" {\n\t\tt.Error(\"cookieHashPolicy should set a cookie with name lb\")\n\t}\n\tif cookieServer1.Secure {\n\t\tt.Error(\"cookieHashPolicy should set cookie Secure attribute to false when request is not secure\")\n\t}\n\tif h != pool[0] {\n\t\tt.Error(\"Expected cookieHashPolicy host to be the first only available host.\")\n\t}\n\tpool[1].setHealthy(true)\n\tpool[2].setHealthy(true)\n\trequest = httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\tw = httptest.NewRecorder()\n\trequest.AddCookie(cookieServer1)\n\th = cookieHashPolicy.Select(pool, request, w)\n\tif h != pool[0] {\n\t\tt.Error(\"Expected cookieHashPolicy host to stick to the first host (matching cookie).\")\n\t}\n\ts := w.Result().Cookies()\n\tif len(s) != 0 {\n\t\tt.Error(\"Expected cookieHashPolicy to not set a new cookie.\")\n\t}\n\tpool[0].setHealthy(false)\n\trequest = httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\tw = httptest.NewRecorder()\n\trequest.AddCookie(cookieServer1)\n\th = cookieHashPolicy.Select(pool, request, w)\n\tif h == pool[0] {\n\t\tt.Error(\"Expected cookieHashPolicy to select a new host.\")\n\t}\n\tif w.Result().Cookies() == nil {\n\t\tt.Error(\"Expected cookieHashPolicy to set a new cookie.\")\n\t}\n}\n\nfunc 
TestCookieHashPolicyWithSecureRequest(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\tcookieHashPolicy := CookieHashSelection{}\n\tif err := cookieHashPolicy.Provision(ctx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tpool := testPool()\n\tpool[0].Dial = \"localhost:8080\"\n\tpool[1].Dial = \"localhost:8081\"\n\tpool[2].Dial = \"localhost:8082\"\n\tpool[0].setHealthy(true)\n\tpool[1].setHealthy(false)\n\tpool[2].setHealthy(false)\n\n\t// Create a test server that serves HTTPS requests\n\tts := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\th := cookieHashPolicy.Select(pool, r, w)\n\t\tif h != pool[0] {\n\t\t\tt.Error(\"Expected cookieHashPolicy host to be the first only available host.\")\n\t\t}\n\t}))\n\tdefer ts.Close()\n\n\t// Make a new HTTPS request to the test server\n\tclient := ts.Client()\n\trequest, err := http.NewRequest(http.MethodGet, ts.URL+\"/test\", nil)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tresponse, err := client.Do(request)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Check if the cookie set is Secure and has SameSiteNone mode\n\tcookies := response.Cookies()\n\tif len(cookies) == 0 {\n\t\tt.Fatal(\"Expected a cookie to be set\")\n\t}\n\tcookie := cookies[0]\n\tif !cookie.Secure {\n\t\tt.Error(\"Expected cookie Secure attribute to be true when request is secure\")\n\t}\n\tif cookie.SameSite != http.SameSiteNoneMode {\n\t\tt.Error(\"Expected cookie SameSite attribute to be None when request is secure\")\n\t}\n}\n\nfunc TestCookieHashPolicyWithFirstFallback(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\tcookieHashPolicy := CookieHashSelection{\n\t\tFallbackRaw: caddyconfig.JSONModuleObject(FirstSelection{}, \"policy\", \"first\", nil),\n\t}\n\tif err := cookieHashPolicy.Provision(ctx); err != nil 
{\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tpool := testPool()\n\tpool[0].Dial = \"localhost:8080\"\n\tpool[1].Dial = \"localhost:8081\"\n\tpool[2].Dial = \"localhost:8082\"\n\tpool[0].setHealthy(true)\n\tpool[1].setHealthy(true)\n\tpool[2].setHealthy(true)\n\trequest := httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\tw := httptest.NewRecorder()\n\n\th := cookieHashPolicy.Select(pool, request, w)\n\tcookieServer1 := w.Result().Cookies()[0]\n\tif cookieServer1 == nil {\n\t\tt.Fatal(\"cookieHashPolicy should set a cookie\")\n\t}\n\tif cookieServer1.Name != \"lb\" {\n\t\tt.Error(\"cookieHashPolicy should set a cookie with name lb\")\n\t}\n\tif h != pool[0] {\n\t\tt.Errorf(\"Expected cookieHashPolicy host to be the first only available host, got %s\", h)\n\t}\n\trequest = httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\tw = httptest.NewRecorder()\n\trequest.AddCookie(cookieServer1)\n\th = cookieHashPolicy.Select(pool, request, w)\n\tif h != pool[0] {\n\t\tt.Errorf(\"Expected cookieHashPolicy host to stick to the first host (matching cookie), got %s\", h)\n\t}\n\ts := w.Result().Cookies()\n\tif len(s) != 0 {\n\t\tt.Error(\"Expected cookieHashPolicy to not set a new cookie.\")\n\t}\n\tpool[0].setHealthy(false)\n\trequest = httptest.NewRequest(http.MethodGet, \"/test\", nil)\n\tw = httptest.NewRecorder()\n\trequest.AddCookie(cookieServer1)\n\th = cookieHashPolicy.Select(pool, request, w)\n\tif h != pool[1] {\n\t\tt.Errorf(\"Expected cookieHashPolicy to select the next first available host, got %s\", h)\n\t}\n\tif w.Result().Cookies() == nil {\n\t\tt.Error(\"Expected cookieHashPolicy to set a new cookie.\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/streaming.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Most of the code in this file was initially borrowed from the Go\n// standard library and modified; It had this copyright notice:\n// Copyright 2011 The Go Authors\n\npackage reverseproxy\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\tweakrand \"math/rand/v2\"\n\t\"mime\"\n\t\"net/http\"\n\t\"sync\"\n\t\"time\"\n\t\"unsafe\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/net/http/httpguts\"\n\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\ntype h2ReadWriteCloser struct {\n\tio.ReadCloser\n\thttp.ResponseWriter\n}\n\nfunc (rwc h2ReadWriteCloser) Write(p []byte) (n int, err error) {\n\tn, err = rwc.ResponseWriter.Write(p)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\t//nolint:bodyclose\n\terr = http.NewResponseController(rwc.ResponseWriter).Flush()\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn n, nil\n}\n\nfunc (h *Handler) handleUpgradeResponse(logger *zap.Logger, wg *sync.WaitGroup, rw http.ResponseWriter, req *http.Request, res *http.Response) {\n\treqUpType := upgradeType(req.Header)\n\tresUpType := upgradeType(res.Header)\n\n\t// Taken from https://github.com/golang/go/commit/5c489514bc5e61ad9b5b07bd7d8ec65d66a0512a\n\t// We know reqUpType is ASCII, it's checked by the caller.\n\tif !asciiIsPrint(resUpType) {\n\t\tif c := 
logger.Check(zapcore.DebugLevel, \"backend tried to switch to invalid protocol\"); c != nil {\n\t\t\tc.Write(zap.String(\"backend_upgrade\", resUpType))\n\t\t}\n\t\treturn\n\t}\n\tif !asciiEqualFold(reqUpType, resUpType) {\n\t\tif c := logger.Check(zapcore.DebugLevel, \"backend tried to switch to unexpected protocol via Upgrade header\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"backend_upgrade\", resUpType),\n\t\t\t\tzap.String(\"requested_upgrade\", reqUpType),\n\t\t\t)\n\t\t}\n\t\treturn\n\t}\n\n\tbackConn, ok := res.Body.(io.ReadWriteCloser)\n\tif !ok {\n\t\tlogger.Error(\"internal error: 101 switching protocols response with non-writable body\")\n\t\treturn\n\t}\n\n\t// write header first, response headers should not be counted in size\n\t// like the rest of handler chain.\n\tcopyHeader(rw.Header(), res.Header)\n\tnormalizeWebsocketHeaders(rw.Header())\n\n\tvar (\n\t\tconn io.ReadWriteCloser\n\t\tbrw  *bufio.ReadWriter\n\t)\n\t// websocket over http2 or http3 if extended connect is enabled, assuming backend doesn't support this, the request will be modified to http1.1 upgrade\n\t// TODO: once we can reliably detect backend support this, it can be removed for those backends\n\tif body, ok := caddyhttp.GetVar(req.Context(), \"extended_connect_websocket_body\").(io.ReadCloser); ok {\n\t\treq.Body = body\n\t\trw.Header().Del(\"Upgrade\")\n\t\trw.Header().Del(\"Connection\")\n\t\tdelete(rw.Header(), \"Sec-WebSocket-Accept\")\n\t\trw.WriteHeader(http.StatusOK)\n\n\t\tif c := logger.Check(zap.DebugLevel, \"upgrading connection\"); c != nil {\n\t\t\tc.Write(zap.Int(\"http_version\", 2))\n\t\t}\n\n\t\t//nolint:bodyclose\n\t\tflushErr := http.NewResponseController(rw).Flush()\n\t\tif flushErr != nil {\n\t\t\tif c := h.logger.Check(zap.ErrorLevel, \"failed to flush http2 websocket response\"); c != nil {\n\t\t\t\tc.Write(zap.Error(flushErr))\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t\tconn = h2ReadWriteCloser{req.Body, rw}\n\t\t// bufio is not needed, use minimal 
buffer\n\t\tbrw = bufio.NewReadWriter(bufio.NewReaderSize(conn, 1), bufio.NewWriterSize(conn, 1))\n\t} else {\n\t\trw.WriteHeader(res.StatusCode)\n\n\t\tif c := logger.Check(zap.DebugLevel, \"upgrading connection\"); c != nil {\n\t\t\tc.Write(zap.Int(\"http_version\", req.ProtoMajor))\n\t\t}\n\n\t\tvar hijackErr error\n\t\t//nolint:bodyclose\n\t\tconn, brw, hijackErr = http.NewResponseController(rw).Hijack()\n\t\tif errors.Is(hijackErr, http.ErrNotSupported) {\n\t\t\tif c := h.logger.Check(zap.ErrorLevel, \"can't switch protocols using non-Hijacker ResponseWriter\"); c != nil {\n\t\t\t\tc.Write(zap.String(\"type\", fmt.Sprintf(\"%T\", rw)))\n\t\t\t}\n\t\t\treturn\n\t\t}\n\n\t\tif hijackErr != nil {\n\t\t\tif c := h.logger.Check(zap.ErrorLevel, \"hijack failed on protocol switch\"); c != nil {\n\t\t\t\tc.Write(zap.Error(hijackErr))\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n\n\t// adopted from https://github.com/golang/go/commit/8bcf2834afdf6a1f7937390903a41518715ef6f5\n\tbackConnCloseCh := make(chan struct{})\n\tgo func() {\n\t\t// Ensure that the cancellation of a request closes the backend.\n\t\t// See issue https://golang.org/issue/35559.\n\t\tselect {\n\t\tcase <-req.Context().Done():\n\t\tcase <-backConnCloseCh:\n\t\t}\n\t\tbackConn.Close()\n\t}()\n\tdefer close(backConnCloseCh)\n\n\tstart := time.Now()\n\tdefer func() {\n\t\tconn.Close()\n\t\tif c := logger.Check(zapcore.DebugLevel, \"connection closed\"); c != nil {\n\t\t\tc.Write(zap.Duration(\"duration\", time.Since(start)))\n\t\t}\n\t}()\n\n\tif err := brw.Flush(); err != nil {\n\t\tif c := logger.Check(zapcore.DebugLevel, \"response flush\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t\treturn\n\t}\n\n\t// There may be buffered data in the *bufio.Reader\n\t// see: https://github.com/caddyserver/caddy/issues/6273\n\tif buffered := brw.Reader.Buffered(); buffered > 0 {\n\t\tdata, _ := brw.Peek(buffered)\n\t\t_, err := backConn.Write(data)\n\t\tif err != nil {\n\t\t\tif c := 
logger.Check(zapcore.DebugLevel, \"backConn write failed\"); c != nil {\n\t\t\t\tc.Write(zap.Error(err))\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n\n\t// Ensure the hijacked client connection, and the new connection established\n\t// with the backend, are both closed in the event of a server shutdown. This\n\t// is done by registering them. We also try to gracefully close connections\n\t// we recognize as websockets.\n\t// We need to make sure the client connection messages (i.e. to upstream)\n\t// are masked, so we need to know whether the connection is considered the\n\t// server or the client side of the proxy.\n\tgracefulClose := func(conn io.ReadWriteCloser, isClient bool) func() error {\n\t\tif isWebsocket(req) {\n\t\t\treturn func() error {\n\t\t\t\treturn writeCloseControl(conn, isClient)\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\tdeleteFrontConn := h.registerConnection(conn, gracefulClose(conn, false))\n\tdeleteBackConn := h.registerConnection(backConn, gracefulClose(backConn, true))\n\tdefer deleteFrontConn()\n\tdefer deleteBackConn()\n\n\tspc := switchProtocolCopier{user: conn, backend: backConn, wg: wg}\n\n\t// setup the timeout if requested\n\tvar timeoutc <-chan time.Time\n\tif h.StreamTimeout > 0 {\n\t\ttimer := time.NewTimer(time.Duration(h.StreamTimeout))\n\t\tdefer timer.Stop()\n\t\ttimeoutc = timer.C\n\t}\n\n\t// when a stream timeout is encountered, no error will be read from errc\n\t// a buffer size of 2 will allow both the read and write goroutines to send the error and exit\n\t// see: https://github.com/caddyserver/caddy/issues/7418\n\terrc := make(chan error, 2)\n\twg.Add(2)\n\tgo spc.copyToBackend(errc)\n\tgo spc.copyFromBackend(errc)\n\tselect {\n\tcase err := <-errc:\n\t\tif c := logger.Check(zapcore.DebugLevel, \"streaming error\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\tcase time := <-timeoutc:\n\t\tif c := logger.Check(zapcore.DebugLevel, \"stream timed out\"); c != nil {\n\t\t\tc.Write(zap.Time(\"timeout\", 
time))\n\t\t}\n\t}\n}\n\n// flushInterval returns the p.FlushInterval value, conditionally\n// overriding its value for a specific request/response.\nfunc (h Handler) flushInterval(req *http.Request, res *http.Response) time.Duration {\n\tresCTHeader := res.Header.Get(\"Content-Type\")\n\tresCT, _, err := mime.ParseMediaType(resCTHeader)\n\n\t// For Server-Sent Events responses, flush immediately.\n\t// The MIME type is defined in https://www.w3.org/TR/eventsource/#text-event-stream\n\tif err == nil && resCT == \"text/event-stream\" {\n\t\treturn -1 // negative means immediately\n\t}\n\n\t// We might have the case of streaming for which Content-Length might be unset.\n\tif res.ContentLength == -1 {\n\t\treturn -1\n\t}\n\n\t// for h2 and h2c upstream streaming data to client (issues #3556 and #3606)\n\tif h.isBidirectionalStream(req, res) {\n\t\treturn -1\n\t}\n\n\treturn time.Duration(h.FlushInterval)\n}\n\n// isBidirectionalStream returns whether we should work in bi-directional stream mode.\n//\n// See https://github.com/caddyserver/caddy/pull/3620 for discussion of nuances.\nfunc (h Handler) isBidirectionalStream(req *http.Request, res *http.Response) bool {\n\t// We have to check the encoding here; only flush headers with identity encoding.\n\t// Non-identity encoding might combine with \"encode\" directive, and in that case,\n\t// if body size larger than enc.MinLength, upper level encode handle might have\n\t// Content-Encoding header to write.\n\t// (see https://github.com/caddyserver/caddy/issues/3606 for use case)\n\tae := req.Header.Get(\"Accept-Encoding\")\n\n\treturn req.ProtoMajor == 2 &&\n\t\tres.ProtoMajor == 2 &&\n\t\tres.ContentLength == -1 &&\n\t\t(ae == \"identity\" || ae == \"\")\n}\n\nfunc (h Handler) copyResponse(dst http.ResponseWriter, src io.Reader, flushInterval time.Duration, logger *zap.Logger) error {\n\tvar w io.Writer = dst\n\n\tif flushInterval != 0 {\n\t\tvar mlwLogger *zap.Logger\n\t\tif h.VerboseLogs {\n\t\t\tmlwLogger = 
logger.Named(\"max_latency_writer\")\n\t\t} else {\n\t\t\tmlwLogger = zap.NewNop()\n\t\t}\n\t\tmlw := &maxLatencyWriter{\n\t\t\tdst: dst,\n\t\t\t//nolint:bodyclose\n\t\t\tflush:   http.NewResponseController(dst).Flush,\n\t\t\tlatency: flushInterval,\n\t\t\tlogger:  mlwLogger,\n\t\t}\n\t\tdefer mlw.stop()\n\n\t\t// set up initial timer so headers get flushed even if body writes are delayed\n\t\tmlw.flushPending = true\n\t\tmlw.t = time.AfterFunc(flushInterval, mlw.delayedFlush)\n\n\t\tw = mlw\n\t}\n\n\tbuf := streamingBufPool.Get().(*[]byte)\n\tdefer streamingBufPool.Put(buf)\n\n\tvar copyLogger *zap.Logger\n\tif h.VerboseLogs {\n\t\tcopyLogger = logger\n\t} else {\n\t\tcopyLogger = zap.NewNop()\n\t}\n\n\t_, err := h.copyBuffer(w, src, *buf, copyLogger)\n\treturn err\n}\n\n// copyBuffer returns any write errors or non-EOF read errors, and the amount\n// of bytes written.\nfunc (h Handler) copyBuffer(dst io.Writer, src io.Reader, buf []byte, logger *zap.Logger) (int64, error) {\n\tif len(buf) == 0 {\n\t\tbuf = make([]byte, defaultBufferSize)\n\t}\n\tvar written int64\n\tfor {\n\t\tlogger.Debug(\"waiting to read from upstream\")\n\t\tnr, rerr := src.Read(buf)\n\t\tlogger := logger.With(zap.Int(\"read\", nr))\n\t\tif c := logger.Check(zapcore.DebugLevel, \"read from upstream\"); c != nil {\n\t\t\tc.Write(zap.Error(rerr))\n\t\t}\n\t\tif rerr != nil && rerr != io.EOF && rerr != context.Canceled {\n\t\t\t// TODO: this could be useful to know (indeed, it revealed an error in our\n\t\t\t// fastcgi PoC earlier; but it's this single error report here that necessitates\n\t\t\t// a function separate from io.CopyBuffer, since io.CopyBuffer does not distinguish\n\t\t\t// between read or write errors; in a reverse proxy situation, write errors are not\n\t\t\t// something we need to report to the client, but read errors are a problem on our\n\t\t\t// end for sure. 
so we need to decide what we want.)\n\t\t\t// p.logf(\"copyBuffer: ReverseProxy read error during body copy: %v\", rerr)\n\t\t\tif c := logger.Check(zapcore.ErrorLevel, \"reading from backend\"); c != nil {\n\t\t\t\tc.Write(zap.Error(rerr))\n\t\t\t}\n\t\t}\n\t\tif nr > 0 {\n\t\t\tlogger.Debug(\"writing to downstream\")\n\t\t\tnw, werr := dst.Write(buf[:nr])\n\t\t\tif nw > 0 {\n\t\t\t\twritten += int64(nw)\n\t\t\t}\n\t\t\tif c := logger.Check(zapcore.DebugLevel, \"wrote to downstream\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.Int(\"written\", nw),\n\t\t\t\t\tzap.Int64(\"written_total\", written),\n\t\t\t\t\tzap.Error(werr),\n\t\t\t\t)\n\t\t\t}\n\t\t\tif werr != nil {\n\t\t\t\treturn written, fmt.Errorf(\"writing: %w\", werr)\n\t\t\t}\n\t\t\tif nr != nw {\n\t\t\t\treturn written, io.ErrShortWrite\n\t\t\t}\n\t\t}\n\t\tif rerr != nil {\n\t\t\tif rerr == io.EOF {\n\t\t\t\treturn written, nil\n\t\t\t}\n\t\t\treturn written, fmt.Errorf(\"reading: %w\", rerr)\n\t\t}\n\t}\n}\n\n// registerConnection holds onto conn so it can be closed in the event\n// of a server shutdown. 
This is useful because hijacked connections or\n// connections dialed to backends don't close when server is shut down.\n// The caller should call the returned delete() function when the\n// connection is done to remove it from memory.\nfunc (h *Handler) registerConnection(conn io.ReadWriteCloser, gracefulClose func() error) (del func()) {\n\th.connectionsMu.Lock()\n\th.connections[conn] = openConnection{conn, gracefulClose}\n\th.connectionsMu.Unlock()\n\treturn func() {\n\t\th.connectionsMu.Lock()\n\t\tdelete(h.connections, conn)\n\t\t// if there is no connection left before the connections close timer fires\n\t\tif len(h.connections) == 0 && h.connectionsCloseTimer != nil {\n\t\t\t// we release the timer that holds the reference to Handler\n\t\t\tif (*h.connectionsCloseTimer).Stop() {\n\t\t\t\th.logger.Debug(\"stopped streaming connections close timer - all connections are already closed\")\n\t\t\t}\n\t\t\th.connectionsCloseTimer = nil\n\t\t}\n\t\th.connectionsMu.Unlock()\n\t}\n}\n\n// closeConnections immediately closes all hijacked connections (both to client and backend).\nfunc (h *Handler) closeConnections() error {\n\tvar err error\n\th.connectionsMu.Lock()\n\tdefer h.connectionsMu.Unlock()\n\n\tfor _, oc := range h.connections {\n\t\tif oc.gracefulClose != nil {\n\t\t\t// this is potentially blocking while we have the lock on the connections\n\t\t\t// map, but that should be OK since the server has in theory shut down\n\t\t\t// and we are no longer using the connections map\n\t\t\tgracefulErr := oc.gracefulClose()\n\t\t\tif gracefulErr != nil && err == nil {\n\t\t\t\terr = gracefulErr\n\t\t\t}\n\t\t}\n\t\tcloseErr := oc.conn.Close()\n\t\tif closeErr != nil && err == nil {\n\t\t\terr = closeErr\n\t\t}\n\t}\n\treturn err\n}\n\n// cleanupConnections closes hijacked connections.\n// Depending on the value of StreamCloseDelay it does that either immediately\n// or sets up a timer that will do that later.\nfunc (h *Handler) cleanupConnections() error {\n\tif 
h.StreamCloseDelay == 0 {\n\t\treturn h.closeConnections()\n\t}\n\n\th.connectionsMu.Lock()\n\tdefer h.connectionsMu.Unlock()\n\t// the handler is shut down, no new connection can appear,\n\t// so we can skip setting up the timer when there are no connections\n\tif len(h.connections) > 0 {\n\t\tdelay := time.Duration(h.StreamCloseDelay)\n\t\th.connectionsCloseTimer = time.AfterFunc(delay, func() {\n\t\t\tif c := h.logger.Check(zapcore.DebugLevel, \"closing streaming connections after delay\"); c != nil {\n\t\t\t\tc.Write(zap.Duration(\"delay\", delay))\n\t\t\t}\n\t\t\terr := h.closeConnections()\n\t\t\tif err != nil {\n\t\t\t\tif c := h.logger.Check(zapcore.ErrorLevel, \"failed to close connections after delay\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t\tzap.Duration(\"delay\", delay),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n\treturn nil\n}\n\n// writeCloseControl sends a best-effort Close control message to the given\n// WebSocket connection. Thanks to @pascaldekloe, whose simple implementation\n// at github.com/pascaldekloe/websocket provided inspiration. Further work for\n// handling masking taken from github.com/gorilla/websocket.\nfunc writeCloseControl(conn io.Writer, isClient bool) error {\n\t// Sources:\n\t// https://github.com/pascaldekloe/websocket/blob/32050af67a5d/websocket.go#L119\n\t// https://github.com/gorilla/websocket/blob/v1.5.0/conn.go#L413\n\n\t// For now, we're not using a reason. 
We might later, though.\n\t// The code handling the reason is left in place.\n\tvar reason string // max 123 bytes (control frame payload limit is 125; status code takes 2)\n\n\tconst closeMessage = 8\n\tconst finalBit = 1 << 7 // Frame header byte 0 bits from Section 5.2 of RFC 6455\n\tconst maskBit = 1 << 7  // Frame header byte 1 bits from Section 5.2 of RFC 6455\n\tconst goingAwayUpper uint8 = 1001 >> 8\n\tconst goingAwayLower uint8 = 1001 & 0xff\n\n\tb0 := byte(closeMessage) | finalBit\n\tb1 := byte(len(reason) + 2)\n\tif isClient {\n\t\tb1 |= maskBit\n\t}\n\n\tbuf := make([]byte, 0, 127)\n\tbuf = append(buf, b0, b1)\n\tmsgLength := 4 + len(reason)\n\n\t// Both branches below append the \"going away\" code and reason\n\tappendMessage := func(buf []byte) []byte {\n\t\tbuf = append(buf, goingAwayUpper, goingAwayLower)\n\t\tbuf = append(buf, []byte(reason)...)\n\t\treturn buf\n\t}\n\n\t// When we're the client, we need to mask the message as per\n\t// https://www.rfc-editor.org/rfc/rfc6455#section-5.3\n\tif isClient {\n\t\tkey := newMaskKey()\n\t\tbuf = append(buf, key[:]...)\n\t\tmsgLength += len(key)\n\t\tbuf = appendMessage(buf)\n\t\tmaskBytes(key, 0, buf[2+len(key):])\n\t} else {\n\t\tbuf = appendMessage(buf)\n\t}\n\n\t// simply best-effort, but return error for logging purposes\n\t// TODO: we might need to ensure we are the exclusive writer by this point (io.Copy is stopped)?\n\t_, err := conn.Write(buf[:msgLength])\n\treturn err\n}\n\n// Copied from https://github.com/gorilla/websocket/blob/v1.5.0/mask.go\nfunc maskBytes(key [4]byte, pos int, b []byte) int {\n\t// Mask one byte at a time for small buffers.\n\tif len(b) < 2*wordSize {\n\t\tfor i := range b {\n\t\t\tb[i] ^= key[pos&3]\n\t\t\tpos++\n\t\t}\n\t\treturn pos & 3\n\t}\n\n\t// Mask one byte at a time to word boundary.\n\tif n := int(uintptr(unsafe.Pointer(&b[0]))) % wordSize; n != 0 {\n\t\tn = wordSize - n\n\t\tfor i := range b[:n] {\n\t\t\tb[i] ^= key[pos&3]\n\t\t\tpos++\n\t\t}\n\t\tb = b[n:]\n\t}\n\n\t// 
Create aligned word size key.\n\tvar k [wordSize]byte\n\tfor i := range k {\n\t\tk[i] = key[(pos+i)&3] // nolint:gosec // false positive, impossible to be out of bounds; see: https://github.com/securego/gosec/issues/1525\n\t}\n\tkw := *(*uintptr)(unsafe.Pointer(&k))\n\n\t// Mask one word at a time.\n\tn := (len(b) / wordSize) * wordSize\n\tfor i := 0; i < n; i += wordSize {\n\t\t*(*uintptr)(unsafe.Add(unsafe.Pointer(&b[0]), i)) ^= kw\n\t}\n\n\t// Mask one byte at a time for remaining bytes.\n\tb = b[n:]\n\tfor i := range b {\n\t\tb[i] ^= key[pos&3]\n\t\tpos++\n\t}\n\n\treturn pos & 3\n}\n\n// Copied from https://github.com/gorilla/websocket/blob/v1.5.0/conn.go#L184\nfunc newMaskKey() [4]byte {\n\tn := weakrand.Uint32()\n\treturn [4]byte{byte(n), byte(n >> 8), byte(n >> 16), byte(n >> 24)}\n}\n\n// isWebsocket returns true if r looks to be an upgrade request for WebSockets.\n// It is a fairly naive check.\nfunc isWebsocket(r *http.Request) bool {\n\treturn httpguts.HeaderValuesContainsToken(r.Header[\"Connection\"], \"upgrade\") &&\n\t\thttpguts.HeaderValuesContainsToken(r.Header[\"Upgrade\"], \"websocket\")\n}\n\n// openConnection maps an open connection to\n// an optional function for graceful close.\ntype openConnection struct {\n\tconn          io.ReadWriteCloser\n\tgracefulClose func() error\n}\n\ntype maxLatencyWriter struct {\n\tdst     io.Writer\n\tflush   func() error\n\tlatency time.Duration // non-zero; negative means to flush immediately\n\n\tmu           sync.Mutex // protects t, flushPending, and dst.Flush\n\tt            *time.Timer\n\tflushPending bool\n\tlogger       *zap.Logger\n}\n\nfunc (m *maxLatencyWriter) Write(p []byte) (n int, err error) {\n\tm.mu.Lock()\n\tdefer m.mu.Unlock()\n\tn, err = m.dst.Write(p)\n\tif c := m.logger.Check(zapcore.DebugLevel, \"wrote bytes\"); c != nil {\n\t\tc.Write(zap.Int(\"n\", n), zap.Error(err))\n\t}\n\tif m.latency < 0 {\n\t\tm.logger.Debug(\"flushing 
immediately\")\n\t\t//nolint:errcheck\n\t\tm.flush()\n\t\treturn n, err\n\t}\n\tif m.flushPending {\n\t\tm.logger.Debug(\"delayed flush already pending\")\n\t\treturn n, err\n\t}\n\tif m.t == nil {\n\t\tm.t = time.AfterFunc(m.latency, m.delayedFlush)\n\t} else {\n\t\tm.t.Reset(m.latency)\n\t}\n\tif c := m.logger.Check(zapcore.DebugLevel, \"timer set for delayed flush\"); c != nil {\n\t\tc.Write(zap.Duration(\"duration\", m.latency))\n\t}\n\tm.flushPending = true\n\treturn n, err\n}\n\nfunc (m *maxLatencyWriter) delayedFlush() {\n\tm.mu.Lock()\n\tdefer m.mu.Unlock()\n\tif !m.flushPending { // if stop was called but AfterFunc already started this goroutine\n\t\tm.logger.Debug(\"delayed flush is not pending\")\n\t\treturn\n\t}\n\tm.logger.Debug(\"delayed flush\")\n\t//nolint:errcheck\n\tm.flush()\n\tm.flushPending = false\n}\n\nfunc (m *maxLatencyWriter) stop() {\n\tm.mu.Lock()\n\tdefer m.mu.Unlock()\n\tm.flushPending = false\n\tif m.t != nil {\n\t\tm.t.Stop()\n\t}\n}\n\n// switchProtocolCopier exists so goroutines proxying data back and\n// forth have nice names in stacks.\ntype switchProtocolCopier struct {\n\tuser, backend io.ReadWriteCloser\n\twg            *sync.WaitGroup\n}\n\nfunc (c switchProtocolCopier) copyFromBackend(errc chan<- error) {\n\t_, err := io.Copy(c.user, c.backend)\n\terrc <- err\n\tc.wg.Done()\n}\n\nfunc (c switchProtocolCopier) copyToBackend(errc chan<- error) {\n\t_, err := io.Copy(c.backend, c.user)\n\terrc <- err\n\tc.wg.Done()\n}\n\nvar streamingBufPool = sync.Pool{\n\tNew: func() any {\n\t\t// The Pool's New function should generally only return pointer\n\t\t// types, since a pointer can be put into the return interface\n\t\t// value without an allocation\n\t\t// - (from the package docs)\n\t\tb := make([]byte, defaultBufferSize)\n\t\treturn &b\n\t},\n}\n\nconst (\n\tdefaultBufferSize = 32 * 1024\n\twordSize          = int(unsafe.Sizeof(uintptr(0)))\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/streaming_test.go",
    "content": "package reverseproxy\n\nimport (\n\t\"bytes\"\n\t\"net/http/httptest\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestHandlerCopyResponse(t *testing.T) {\n\th := Handler{}\n\ttestdata := []string{\n\t\t\"\",\n\t\tstrings.Repeat(\"a\", defaultBufferSize),\n\t\tstrings.Repeat(\"123456789 123456789 123456789 12\", 3000),\n\t}\n\n\tdst := bytes.NewBuffer(nil)\n\trecorder := httptest.NewRecorder()\n\trecorder.Body = dst\n\n\tfor _, d := range testdata {\n\t\tsrc := bytes.NewBuffer([]byte(d))\n\t\tdst.Reset()\n\t\terr := h.copyResponse(recorder, src, 0, caddy.Log())\n\t\tif err != nil {\n\t\t\tt.Errorf(\"failed with error: %v\", err)\n\t\t}\n\t\tout := dst.String()\n\t\tif out != d {\n\t\t\tt.Errorf(\"bad read: got %q\", out)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/upstreams.go",
    "content": "package reverseproxy\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\tweakrand \"math/rand/v2\"\n\t\"net\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(SRVUpstreams{})\n\tcaddy.RegisterModule(AUpstreams{})\n\tcaddy.RegisterModule(MultiUpstreams{})\n}\n\n// SRVUpstreams provides upstreams from SRV lookups.\n// The lookup DNS name can be configured either by\n// its individual parts (that is, specifying the\n// service, protocol, and name separately) to form\n// the standard \"_service._proto.name\" domain, or\n// the domain can be specified directly in name by\n// leaving service and proto empty. See RFC 2782.\n//\n// Lookups are cached and refreshed at the configured\n// refresh interval.\n//\n// Returned upstreams are sorted by priority and weight.\ntype SRVUpstreams struct {\n\t// The service label.\n\tService string `json:\"service,omitempty\"`\n\n\t// The protocol label; either tcp or udp.\n\tProto string `json:\"proto,omitempty\"`\n\n\t// The name label; or, if service and proto are\n\t// empty, the entire domain name to look up.\n\tName string `json:\"name,omitempty\"`\n\n\t// The interval at which to refresh the SRV lookup.\n\t// Results are cached between lookups. Default: 1m\n\tRefresh caddy.Duration `json:\"refresh,omitempty\"`\n\n\t// If > 0 and there is an error with the lookup,\n\t// continue to use the cached results for up to\n\t// this long before trying again, (even though they\n\t// are stale) instead of returning an error to the\n\t// client. 
Default: 0s.\n\tGracePeriod caddy.Duration `json:\"grace_period,omitempty\"`\n\n\t// Configures the DNS resolver used to resolve the\n\t// SRV address to SRV records.\n\tResolver *UpstreamResolver `json:\"resolver,omitempty\"`\n\n\t// If Resolver is configured, how long to wait before\n\t// timing out trying to connect to the DNS server.\n\tDialTimeout caddy.Duration `json:\"dial_timeout,omitempty\"`\n\n\t// If Resolver is configured, how long to wait before\n\t// spawning an RFC 6555 Fast Fallback connection.\n\t// A negative value disables this.\n\tFallbackDelay caddy.Duration `json:\"dial_fallback_delay,omitempty\"`\n\n\t// Specific network to dial when connecting to the upstream(s)\n\t// provided by SRV records upstream. See Go's net package for\n\t// accepted values. For example, to restrict to IPv4, use \"tcp4\".\n\tDialNetwork string `json:\"dial_network,omitempty\"`\n\n\tresolver *net.Resolver\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (SRVUpstreams) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.upstreams.srv\",\n\t\tNew: func() caddy.Module { return new(SRVUpstreams) },\n\t}\n}\n\nfunc (su *SRVUpstreams) Provision(ctx caddy.Context) error {\n\tsu.logger = ctx.Logger()\n\tif su.Refresh == 0 {\n\t\tsu.Refresh = caddy.Duration(time.Minute)\n\t}\n\n\tif su.Resolver != nil {\n\t\terr := su.Resolver.ParseAddresses()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\td := &net.Dialer{\n\t\t\tTimeout:       time.Duration(su.DialTimeout),\n\t\t\tFallbackDelay: time.Duration(su.FallbackDelay),\n\t\t}\n\t\tsu.resolver = &net.Resolver{\n\t\t\tPreferGo: true,\n\t\t\tDial: func(ctx context.Context, _, _ string) (net.Conn, error) {\n\t\t\t\t//nolint:gosec\n\t\t\t\taddr := su.Resolver.netAddrs[weakrand.IntN(len(su.Resolver.netAddrs))]\n\t\t\t\treturn d.DialContext(ctx, addr.Network, addr.JoinHostPort(0))\n\t\t\t},\n\t\t}\n\t}\n\tif su.resolver == nil {\n\t\tsu.resolver = 
net.DefaultResolver\n\t}\n\n\treturn nil\n}\n\nfunc (su SRVUpstreams) GetUpstreams(r *http.Request) ([]*Upstream, error) {\n\tsuAddr, service, proto, name := su.expandedAddr(r)\n\n\t// first, use a cheap read-lock to return a cached result quickly\n\tsrvsMu.RLock()\n\tcached := srvs[suAddr]\n\tsrvsMu.RUnlock()\n\tif cached.isFresh() {\n\t\treturn allNew(cached.upstreams), nil\n\t}\n\n\t// otherwise, obtain a write-lock to update the cached value\n\tsrvsMu.Lock()\n\tdefer srvsMu.Unlock()\n\n\t// check to see if it's still stale, since we're now in a different\n\t// lock from when we first checked freshness; another goroutine might\n\t// have refreshed it in the meantime before we re-obtained our lock\n\tcached = srvs[suAddr]\n\tif cached.isFresh() {\n\t\treturn allNew(cached.upstreams), nil\n\t}\n\n\tif c := su.logger.Check(zapcore.DebugLevel, \"refreshing SRV upstreams\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"service\", service),\n\t\t\tzap.String(\"proto\", proto),\n\t\t\tzap.String(\"name\", name),\n\t\t)\n\t}\n\n\t_, records, err := su.resolver.LookupSRV(r.Context(), service, proto, name)\n\tif err != nil {\n\t\t// From LookupSRV docs: \"If the response contains invalid names, those records are filtered\n\t\t// out and an error will be returned alongside the remaining results, if any.\" Thus, we\n\t\t// only return an error if no records were also returned.\n\t\tif len(records) == 0 {\n\t\t\tif su.GracePeriod > 0 {\n\t\t\t\tif c := su.logger.Check(zapcore.ErrorLevel, \"SRV lookup failed; using previously cached\"); c != nil {\n\t\t\t\t\tc.Write(zap.Error(err))\n\t\t\t\t}\n\t\t\t\tcached.freshness = time.Now().Add(time.Duration(su.GracePeriod) - time.Duration(su.Refresh))\n\t\t\t\tsrvs[suAddr] = cached\n\t\t\t\treturn allNew(cached.upstreams), nil\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\t\tif c := su.logger.Check(zapcore.WarnLevel, \"SRV records filtered\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t}\n\n\tupstreams := make([]Upstream, 
len(records))\n\tfor i, rec := range records {\n\t\tif c := su.logger.Check(zapcore.DebugLevel, \"discovered SRV record\"); c != nil {\n\t\t\tc.Write(\n\t\t\t\tzap.String(\"target\", rec.Target),\n\t\t\t\tzap.Uint16(\"port\", rec.Port),\n\t\t\t\tzap.Uint16(\"priority\", rec.Priority),\n\t\t\t\tzap.Uint16(\"weight\", rec.Weight),\n\t\t\t)\n\t\t}\n\t\taddr := net.JoinHostPort(rec.Target, strconv.Itoa(int(rec.Port)))\n\t\tif su.DialNetwork != \"\" {\n\t\t\taddr = su.DialNetwork + \"/\" + addr\n\t\t}\n\t\tupstreams[i] = Upstream{Dial: addr}\n\t}\n\n\t// before adding a new one to the cache (as opposed to replacing a stale one), make room if cache is full\n\tif cached.freshness.IsZero() && len(srvs) >= 100 {\n\t\tfor randomKey := range srvs {\n\t\t\tdelete(srvs, randomKey)\n\t\t\tbreak\n\t\t}\n\t}\n\n\tsrvs[suAddr] = srvLookup{\n\t\tsrvUpstreams: su,\n\t\tfreshness:    time.Now(),\n\t\tupstreams:    upstreams,\n\t}\n\n\treturn allNew(upstreams), nil\n}\n\nfunc (su SRVUpstreams) String() string {\n\tif su.Service == \"\" && su.Proto == \"\" {\n\t\treturn su.Name\n\t}\n\treturn su.formattedAddr(su.Service, su.Proto, su.Name)\n}\n\n// expandedAddr expands placeholders in the configured SRV domain labels.\n// The return values are: addr, the RFC 2782 representation of the SRV domain;\n// service, the service; proto, the protocol; and name, the name.\n// If su.Service and su.Proto are empty, name will be returned as addr instead.\nfunc (su SRVUpstreams) expandedAddr(r *http.Request) (addr, service, proto, name string) {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tname = repl.ReplaceAll(su.Name, \"\")\n\tif su.Service == \"\" && su.Proto == \"\" {\n\t\taddr = name\n\t\treturn addr, service, proto, name\n\t}\n\tservice = repl.ReplaceAll(su.Service, \"\")\n\tproto = repl.ReplaceAll(su.Proto, \"\")\n\taddr = su.formattedAddr(service, proto, name)\n\treturn addr, service, proto, name\n}\n\n// formattedAddr returns the RFC 2782 representation of the SRV domain, 
in\n// the form \"_service._proto.name\".\nfunc (SRVUpstreams) formattedAddr(service, proto, name string) string {\n\treturn fmt.Sprintf(\"_%s._%s.%s\", service, proto, name)\n}\n\ntype srvLookup struct {\n\tsrvUpstreams SRVUpstreams\n\tfreshness    time.Time\n\tupstreams    []Upstream\n}\n\nfunc (sl srvLookup) isFresh() bool {\n\treturn time.Since(sl.freshness) < time.Duration(sl.srvUpstreams.Refresh)\n}\n\ntype IPVersions struct {\n\tIPv4 *bool `json:\"ipv4,omitempty\"`\n\tIPv6 *bool `json:\"ipv6,omitempty\"`\n}\n\nfunc resolveIpVersion(versions *IPVersions) string {\n\tresolveIpv4 := versions == nil || (versions.IPv4 == nil && versions.IPv6 == nil) || (versions.IPv4 != nil && *versions.IPv4)\n\tresolveIpv6 := versions == nil || (versions.IPv6 == nil && versions.IPv4 == nil) || (versions.IPv6 != nil && *versions.IPv6)\n\tswitch {\n\tcase resolveIpv4 && !resolveIpv6:\n\t\treturn \"ip4\"\n\tcase !resolveIpv4 && resolveIpv6:\n\t\treturn \"ip6\"\n\tdefault:\n\t\treturn \"ip\"\n\t}\n}\n\n// AUpstreams provides upstreams from A/AAAA lookups.\n// Results are cached and refreshed at the configured\n// refresh interval.\ntype AUpstreams struct {\n\t// The domain name to look up.\n\tName string `json:\"name,omitempty\"`\n\n\t// The port to use with the upstreams. Default: 80\n\tPort string `json:\"port,omitempty\"`\n\n\t// The interval at which to refresh the A lookup.\n\t// Results are cached between lookups. 
Default: 1m\n\tRefresh caddy.Duration `json:\"refresh,omitempty\"`\n\n\t// Configures the DNS resolver used to resolve the\n\t// domain name to A records.\n\tResolver *UpstreamResolver `json:\"resolver,omitempty\"`\n\n\t// If Resolver is configured, how long to wait before\n\t// timing out trying to connect to the DNS server.\n\tDialTimeout caddy.Duration `json:\"dial_timeout,omitempty\"`\n\n\t// If Resolver is configured, how long to wait before\n\t// spawning an RFC 6555 Fast Fallback connection.\n\t// A negative value disables this.\n\tFallbackDelay caddy.Duration `json:\"dial_fallback_delay,omitempty\"`\n\n\t// The IP versions to resolve for. By default, both\n\t// \"ipv4\" and \"ipv6\" will be enabled, which\n\t// correspond to A and AAAA records respectively.\n\tVersions *IPVersions `json:\"versions,omitempty\"`\n\n\tresolver *net.Resolver\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (AUpstreams) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.upstreams.a\",\n\t\tNew: func() caddy.Module { return new(AUpstreams) },\n\t}\n}\n\nfunc (au *AUpstreams) Provision(ctx caddy.Context) error {\n\tau.logger = ctx.Logger()\n\tif au.Refresh == 0 {\n\t\tau.Refresh = caddy.Duration(time.Minute)\n\t}\n\tif au.Port == \"\" {\n\t\tau.Port = \"80\"\n\t}\n\n\tif au.Resolver != nil {\n\t\terr := au.Resolver.ParseAddresses()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\td := &net.Dialer{\n\t\t\tTimeout:       time.Duration(au.DialTimeout),\n\t\t\tFallbackDelay: time.Duration(au.FallbackDelay),\n\t\t}\n\t\tau.resolver = &net.Resolver{\n\t\t\tPreferGo: true,\n\t\t\tDial: func(ctx context.Context, _, _ string) (net.Conn, error) {\n\t\t\t\t//nolint:gosec\n\t\t\t\taddr := au.Resolver.netAddrs[weakrand.IntN(len(au.Resolver.netAddrs))]\n\t\t\t\treturn d.DialContext(ctx, addr.Network, addr.JoinHostPort(0))\n\t\t\t},\n\t\t}\n\t}\n\tif au.resolver == nil {\n\t\tau.resolver = 
net.DefaultResolver\n\t}\n\n\treturn nil\n}\n\nfunc (au AUpstreams) GetUpstreams(r *http.Request) ([]*Upstream, error) {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// Map ipVersion early, so we can use it as part of the cache-key.\n\t// This should be fairly inexpensive and comes with the upside of\n\t// allowing the same dynamic upstream (name + port combination)\n\t// to be used multiple times with different ip versions.\n\t//\n\t// It also forces a cache-miss if a previously cached dynamic\n\t// upstream changes its ip version, e.g. after a config reload,\n\t// while keeping the cache-invalidation as simple as it currently is.\n\tipVersion := resolveIpVersion(au.Versions)\n\n\tauStr := repl.ReplaceAll(au.String()+ipVersion, \"\")\n\n\t// first, use a cheap read-lock to return a cached result quickly\n\taAaaaMu.RLock()\n\tcached := aAaaa[auStr]\n\taAaaaMu.RUnlock()\n\tif cached.isFresh() {\n\t\treturn allNew(cached.upstreams), nil\n\t}\n\n\t// otherwise, obtain a write-lock to update the cached value\n\taAaaaMu.Lock()\n\tdefer aAaaaMu.Unlock()\n\n\t// check to see if it's still stale, since we're now in a different\n\t// lock from when we first checked freshness; another goroutine might\n\t// have refreshed it in the meantime before we re-obtained our lock\n\tcached = aAaaa[auStr]\n\tif cached.isFresh() {\n\t\treturn allNew(cached.upstreams), nil\n\t}\n\n\tname := repl.ReplaceAll(au.Name, \"\")\n\tport := repl.ReplaceAll(au.Port, \"\")\n\n\tif c := au.logger.Check(zapcore.DebugLevel, \"refreshing A upstreams\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"version\", ipVersion),\n\t\t\tzap.String(\"name\", name),\n\t\t\tzap.String(\"port\", port),\n\t\t)\n\t}\n\n\tips, err := au.resolver.LookupIP(r.Context(), ipVersion, name)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tupstreams := make([]Upstream, len(ips))\n\tfor i, ip := range ips {\n\t\tif c := au.logger.Check(zapcore.DebugLevel, \"discovered A record\"); c != nil 
{\n\t\t\tc.Write(zap.String(\"ip\", ip.String()))\n\t\t}\n\t\tupstreams[i] = Upstream{\n\t\t\tDial: net.JoinHostPort(ip.String(), port),\n\t\t}\n\t}\n\n\t// before adding a new one to the cache (as opposed to replacing stale one), make room if cache is full\n\tif cached.freshness.IsZero() && len(aAaaa) >= 100 {\n\t\tfor randomKey := range aAaaa {\n\t\t\tdelete(aAaaa, randomKey)\n\t\t\tbreak\n\t\t}\n\t}\n\n\taAaaa[auStr] = aLookup{\n\t\taUpstreams: au,\n\t\tfreshness:  time.Now(),\n\t\tupstreams:  upstreams,\n\t}\n\n\treturn allNew(upstreams), nil\n}\n\nfunc (au AUpstreams) String() string { return net.JoinHostPort(au.Name, au.Port) }\n\ntype aLookup struct {\n\taUpstreams AUpstreams\n\tfreshness  time.Time\n\tupstreams  []Upstream\n}\n\nfunc (al aLookup) isFresh() bool {\n\treturn time.Since(al.freshness) < time.Duration(al.aUpstreams.Refresh)\n}\n\n// MultiUpstreams is a single dynamic upstream source that\n// aggregates the results of multiple dynamic upstream sources.\n// All configured sources will be queried in order, with their\n// results appended to the end of the list. 
Errors returned\n// from individual sources will be logged and the next source\n// will continue to be invoked.\n//\n// This module makes it easy to implement redundant cluster\n// failovers, especially in conjunction with the `first` load\n// balancing policy: if the first source returns an error or\n// no upstreams, the second source's upstreams will be used\n// naturally.\ntype MultiUpstreams struct {\n\t// The list of upstream source modules to get upstreams from.\n\t// They will be queried in order, with their results appended\n\t// in the order they are returned.\n\tSourcesRaw []json.RawMessage `json:\"sources,omitempty\" caddy:\"namespace=http.reverse_proxy.upstreams inline_key=source\"`\n\tsources    []UpstreamSource\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MultiUpstreams) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.reverse_proxy.upstreams.multi\",\n\t\tNew: func() caddy.Module { return new(MultiUpstreams) },\n\t}\n}\n\nfunc (mu *MultiUpstreams) Provision(ctx caddy.Context) error {\n\tmu.logger = ctx.Logger()\n\n\tif mu.SourcesRaw != nil {\n\t\tmod, err := ctx.LoadModule(mu, \"SourcesRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading upstream source modules: %v\", err)\n\t\t}\n\t\tfor _, src := range mod.([]any) {\n\t\t\tmu.sources = append(mu.sources, src.(UpstreamSource))\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (mu MultiUpstreams) GetUpstreams(r *http.Request) ([]*Upstream, error) {\n\tvar upstreams []*Upstream\n\n\tfor i, src := range mu.sources {\n\t\tselect {\n\t\tcase <-r.Context().Done():\n\t\t\treturn upstreams, context.Canceled\n\t\tdefault:\n\t\t}\n\n\t\tup, err := src.GetUpstreams(r)\n\t\tif err != nil {\n\t\t\tif c := mu.logger.Check(zapcore.ErrorLevel, \"upstream source returned error\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.Int(\"source_idx\", i),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t} else if len(up) == 0 {\n\t\t\tif c := 
mu.logger.Check(zapcore.WarnLevel, \"upstream source returned 0 upstreams\"); c != nil {\n\t\t\t\tc.Write(zap.Int(\"source_idx\", i))\n\t\t\t}\n\t\t} else {\n\t\t\tupstreams = append(upstreams, up...)\n\t\t}\n\t}\n\n\treturn upstreams, nil\n}\n\n// UpstreamResolver holds the set of addresses of DNS resolvers of\n// upstream addresses\ntype UpstreamResolver struct {\n\t// The addresses of DNS resolvers to use when looking up the addresses of proxy upstreams.\n\t// It accepts [network addresses](/docs/conventions#network-addresses)\n\t// with port range of only 1. If the host is an IP address, it will be dialed directly to resolve the upstream server.\n\t// If the host is not an IP address, the addresses are resolved using the [name resolution convention](https://golang.org/pkg/net/#hdr-Name_Resolution) of the Go standard library.\n\t// If the array contains more than 1 resolver address, one is chosen at random.\n\tAddresses []string `json:\"addresses,omitempty\"`\n\tnetAddrs  []caddy.NetworkAddress\n}\n\n// ParseAddresses parses all the configured network addresses\n// and ensures they're ready to be used.\nfunc (u *UpstreamResolver) ParseAddresses() error {\n\tfor _, v := range u.Addresses {\n\t\taddr, err := caddy.ParseNetworkAddressWithDefaults(v, \"udp\", 53)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif addr.PortRangeSize() != 1 {\n\t\t\treturn fmt.Errorf(\"resolver address must have exactly one address; cannot call %v\", addr)\n\t\t}\n\t\tu.netAddrs = append(u.netAddrs, addr)\n\t}\n\treturn nil\n}\n\nfunc allNew(upstreams []Upstream) []*Upstream {\n\tresults := make([]*Upstream, len(upstreams))\n\tfor i := range upstreams {\n\t\tresults[i] = &Upstream{Dial: upstreams[i].Dial}\n\t}\n\treturn results\n}\n\nvar (\n\tsrvs   = make(map[string]srvLookup)\n\tsrvsMu sync.RWMutex\n\n\taAaaa   = make(map[string]aLookup)\n\taAaaaMu sync.RWMutex\n)\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner = (*SRVUpstreams)(nil)\n\t_ UpstreamSource    = 
(*SRVUpstreams)(nil)\n\t_ caddy.Provisioner = (*AUpstreams)(nil)\n\t_ UpstreamSource    = (*AUpstreams)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/reverseproxy/upstreams_test.go",
    "content": "package reverseproxy\n\nimport \"testing\"\n\nfunc TestResolveIpVersion(t *testing.T) {\n\tfalseBool := false\n\ttrueBool := true\n\ttests := []struct {\n\t\tVersions          *IPVersions\n\t\texpectedIpVersion string\n\t}{\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv4: &trueBool},\n\t\t\texpectedIpVersion: \"ip4\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv4: &falseBool},\n\t\t\texpectedIpVersion: \"ip\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv4: &trueBool, IPv6: &falseBool},\n\t\t\texpectedIpVersion: \"ip4\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv6: &trueBool},\n\t\t\texpectedIpVersion: \"ip6\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv6: &falseBool},\n\t\t\texpectedIpVersion: \"ip\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv6: &trueBool, IPv4: &falseBool},\n\t\t\texpectedIpVersion: \"ip6\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{},\n\t\t\texpectedIpVersion: \"ip\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv4: &trueBool, IPv6: &trueBool},\n\t\t\texpectedIpVersion: \"ip\",\n\t\t},\n\t\t{\n\t\t\tVersions:          &IPVersions{IPv4: &falseBool, IPv6: &falseBool},\n\t\t\texpectedIpVersion: \"ip\",\n\t\t},\n\t}\n\tfor _, test := range tests {\n\t\tipVersion := resolveIpVersion(test.Versions)\n\t\tif ipVersion != test.expectedIpVersion {\n\t\t\tt.Errorf(\"resolveIpVersion(): Expected %s got %s\", test.expectedIpVersion, ipVersion)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/rewrite/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage rewrite\n\nimport (\n\t\"encoding/json\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterDirective(\"rewrite\", parseCaddyfileRewrite)\n\thttpcaddyfile.RegisterHandlerDirective(\"method\", parseCaddyfileMethod)\n\thttpcaddyfile.RegisterHandlerDirective(\"uri\", parseCaddyfileURI)\n\thttpcaddyfile.RegisterDirective(\"handle_path\", parseCaddyfileHandlePath)\n}\n\n// parseCaddyfileRewrite sets up a basic rewrite handler from Caddyfile tokens. 
Syntax:\n//\n//\trewrite [<matcher>] <to>\n//\n// Only URI components which are given in <to> will be set in the resulting URI.\n// See the docs for the rewrite handler for more information.\nfunc parseCaddyfileRewrite(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\th.Next() // consume directive name\n\n\t// count the tokens to determine what to do\n\targsCount := h.CountRemainingArgs()\n\tif argsCount == 0 {\n\t\treturn nil, h.Errf(\"too few arguments; must have at least a rewrite URI\")\n\t}\n\tif argsCount > 2 {\n\t\treturn nil, h.Errf(\"too many arguments; should only be a matcher and a URI\")\n\t}\n\n\t// with only one arg, assume it's a rewrite URI with no matcher token\n\tif argsCount == 1 {\n\t\tif !h.NextArg() {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t\treturn h.NewRoute(nil, Rewrite{URI: h.Val()}), nil\n\t}\n\n\t// parse the matcher token into a matcher set\n\tuserMatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\th.Next() // consume directive name again, matcher parsing does a reset\n\th.Next() // advance to the rewrite URI\n\n\treturn h.NewRoute(userMatcherSet, Rewrite{URI: h.Val()}), nil\n}\n\n// parseCaddyfileMethod sets up a basic method rewrite handler from Caddyfile tokens. Syntax:\n//\n//\tmethod [<matcher>] <method>\nfunc parseCaddyfileMethod(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\tif h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\treturn Rewrite{Method: h.Val()}, nil\n}\n\n// parseCaddyfileURI sets up a handler for manipulating (but not \"rewriting\") the\n// URI from Caddyfile tokens. Syntax:\n//\n//\turi [<matcher>] strip_prefix|strip_suffix|replace|path_regexp <target> [<replacement> [<limit>]]\n//\n// If strip_prefix or strip_suffix are used, then <target> will be stripped\n// only if it is the beginning or the end, respectively, of the URI path. 
If\n// replace is used, then <target> will be replaced with <replacement> across\n// the whole URI, up to <limit> times (or unlimited if unspecified). If\n// path_regexp is used, then regular expression replacements will be performed\n// on the path portion of the URI (and a limit cannot be set).\nfunc parseCaddyfileURI(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\n\targs := h.RemainingArgs()\n\tif len(args) < 1 {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\tvar rewr Rewrite\n\n\tswitch args[0] {\n\tcase \"strip_prefix\":\n\t\tif len(args) != 2 {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t\trewr.StripPathPrefix = args[1]\n\n\tcase \"strip_suffix\":\n\t\tif len(args) != 2 {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t\trewr.StripPathSuffix = args[1]\n\n\tcase \"replace\":\n\t\tvar find, replace, lim string\n\t\tswitch len(args) {\n\t\tcase 4:\n\t\t\tlim = args[3]\n\t\t\tfallthrough\n\t\tcase 3:\n\t\t\tfind = args[1]\n\t\t\treplace = args[2]\n\t\tdefault:\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\n\t\tvar limInt int\n\t\tif lim != \"\" {\n\t\t\tvar err error\n\t\t\tlimInt, err = strconv.Atoi(lim)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, h.Errf(\"limit must be an integer; invalid: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\trewr.URISubstring = append(rewr.URISubstring, substrReplacer{\n\t\t\tFind:    find,\n\t\t\tReplace: replace,\n\t\t\tLimit:   limInt,\n\t\t})\n\n\tcase \"path_regexp\":\n\t\tif len(args) != 3 {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t\tfind, replace := args[1], args[2]\n\t\trewr.PathRegexp = append(rewr.PathRegexp, &regexReplacer{\n\t\t\tFind:    find,\n\t\t\tReplace: replace,\n\t\t})\n\n\tcase \"query\":\n\t\tif len(args) > 4 {\n\t\t\treturn nil, h.ArgErr()\n\t\t}\n\t\trewr.Query = &queryOps{}\n\t\tvar hasArgs bool\n\t\tif len(args) > 1 {\n\t\t\thasArgs = true\n\t\t\terr := applyQueryOps(h, rewr.Query, args[1:])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\n\t\tfor h.NextBlock(0) {\n\t\t\tif 
hasArgs {\n\t\t\t\treturn nil, h.Err(\"Cannot specify uri query rewrites in both argument and block\")\n\t\t\t}\n\t\t\t// nolint:prealloc\n\t\t\tqueryArgs := []string{h.Val()}\n\t\t\tqueryArgs = append(queryArgs, h.RemainingArgs()...)\n\t\t\terr := applyQueryOps(h, rewr.Query, queryArgs)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\n\tdefault:\n\t\treturn nil, h.Errf(\"unrecognized URI manipulation '%s'\", args[0])\n\t}\n\treturn rewr, nil\n}\n\nfunc applyQueryOps(h httpcaddyfile.Helper, qo *queryOps, args []string) error {\n\tkey := args[0]\n\tswitch {\n\tcase strings.HasPrefix(key, \"-\"):\n\t\tif len(args) != 1 {\n\t\t\treturn h.ArgErr()\n\t\t}\n\t\tqo.Delete = append(qo.Delete, strings.TrimLeft(key, \"-\"))\n\n\tcase strings.HasPrefix(key, \"+\"):\n\t\tif len(args) != 2 {\n\t\t\treturn h.ArgErr()\n\t\t}\n\t\tparam := strings.TrimLeft(key, \"+\")\n\t\tqo.Add = append(qo.Add, queryOpsArguments{Key: param, Val: args[1]})\n\n\tcase strings.Contains(key, \">\"):\n\t\tif len(args) != 1 {\n\t\t\treturn h.ArgErr()\n\t\t}\n\t\trenameValKey := strings.Split(key, \">\")\n\t\tqo.Rename = append(qo.Rename, queryOpsArguments{Key: renameValKey[0], Val: renameValKey[1]})\n\n\tcase len(args) == 3:\n\t\tqo.Replace = append(qo.Replace, &queryOpsReplacement{Key: key, SearchRegexp: args[1], Replace: args[2]})\n\n\tdefault:\n\t\tif len(args) != 2 {\n\t\t\treturn h.ArgErr()\n\t\t}\n\t\tqo.Set = append(qo.Set, queryOpsArguments{Key: key, Val: args[1]})\n\t}\n\treturn nil\n}\n\n// parseCaddyfileHandlePath parses the handle_path directive. 
Syntax:\n//\n//\thandle_path [<matcher>] {\n//\t    <directives...>\n//\t}\n//\n// Only path matchers (with a `/` prefix) are supported as this is a shortcut\n// for the handle directive with a strip_prefix rewrite.\nfunc parseCaddyfileHandlePath(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\th.Next() // consume directive name\n\n\t// there must be a path matcher\n\tif !h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\t// read the prefix to strip\n\tpath := h.Val()\n\tif !strings.HasPrefix(path, \"/\") {\n\t\treturn nil, h.Errf(\"path matcher must begin with '/', got %s\", path)\n\t}\n\n\t// we only want to strip what comes before the '/' if\n\t// the user specified it (e.g. /api/* should only strip /api)\n\tvar stripPath string\n\tif strings.HasSuffix(path, \"/*\") {\n\t\tstripPath = path[:len(path)-2]\n\t} else if strings.HasSuffix(path, \"*\") {\n\t\tstripPath = path[:len(path)-1]\n\t} else {\n\t\tstripPath = path\n\t}\n\n\t// the ParseSegmentAsSubroute function expects the cursor\n\t// to be at the token just before the block opening,\n\t// so we need to rewind because we already read past it\n\th.Reset()\n\th.Next()\n\n\t// parse the block contents as a subroute handler\n\thandler, err := httpcaddyfile.ParseSegmentAsSubroute(h)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tsubroute, ok := handler.(*caddyhttp.Subroute)\n\tif !ok {\n\t\treturn nil, h.Errf(\"segment was not parsed as a subroute\")\n\t}\n\n\t// make a matcher on the path and everything below it\n\tpathMatcher := caddy.ModuleMap{\n\t\t\"path\": h.JSON(caddyhttp.MatchPath{path}),\n\t}\n\n\t// build a route with a rewrite handler to strip the path prefix\n\troute := caddyhttp.Route{\n\t\tHandlersRaw: []json.RawMessage{\n\t\t\tcaddyconfig.JSONModuleObject(Rewrite{\n\t\t\t\tStripPathPrefix: stripPath,\n\t\t\t}, \"handler\", \"rewrite\", nil),\n\t\t},\n\t}\n\n\t// prepend the route to the subroute\n\tsubroute.Routes = append([]caddyhttp.Route{route}, subroute.Routes...)\n\n\t// 
build and return a route from the subroute\n\treturn h.NewRoute(pathMatcher, subroute), nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/rewrite/rewrite.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage rewrite\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Rewrite{})\n}\n\n// Rewrite is a middleware which can rewrite/mutate HTTP requests.\n//\n// The Method and URI properties are \"setters\" (the request URI\n// will be overwritten with the given values). Other properties are\n// \"modifiers\" (they modify existing values in a differentiable\n// way). It is atypical to combine the use of setters and\n// modifiers in a single rewrite.\n//\n// To ensure consistent behavior, prefix and suffix stripping is\n// performed in the URL-decoded (unescaped, normalized) space by\n// default except for the specific bytes where an escape sequence\n// is used in the prefix or suffix pattern.\n//\n// For all modifiers, paths are cleaned before being modified so that\n// multiple, consecutive slashes are collapsed into a single slash,\n// and dot elements are resolved and removed. 
In the special case\n// of a prefix, suffix, or substring containing \"//\" (repeated slashes),\n// slashes will not be merged while cleaning the path so that\n// the rewrite can be interpreted literally.\ntype Rewrite struct {\n\t// Changes the request's HTTP verb.\n\tMethod string `json:\"method,omitempty\"`\n\n\t// Changes the request's URI, which consists of path and query string.\n\t// Only components of the URI that are specified will be changed.\n\t// For example, a value of \"/foo.html\" or \"foo.html\" will only change\n\t// the path and will preserve any existing query string. Similarly, a\n\t// value of \"?a=b\" will only change the query string and will not affect\n\t// the path. Both can also be changed: \"/foo?a=b\" - this sets both the\n\t// path and query string at the same time.\n\t//\n\t// You can also use placeholders. For example, to preserve the existing\n\t// query string, you might use: \"?{http.request.uri.query}&a=b\". Any\n\t// key-value pairs you add to the query string will not overwrite\n\t// existing values (individual pairs are append-only).\n\t//\n\t// To clear the query string, explicitly set an empty one: \"?\"\n\tURI string `json:\"uri,omitempty\"`\n\n\t// Strips the given prefix from the beginning of the URI path.\n\t// The prefix should be written in normalized (unescaped) form,\n\t// but if an escaping (`%xx`) is used, the path will be required\n\t// to have that same escape at that position in order to match.\n\tStripPathPrefix string `json:\"strip_path_prefix,omitempty\"`\n\n\t// Strips the given suffix from the end of the URI path.\n\t// The suffix should be written in normalized (unescaped) form,\n\t// but if an escaping (`%xx`) is used, the path will be required\n\t// to have that same escape at that position in order to match.\n\tStripPathSuffix string `json:\"strip_path_suffix,omitempty\"`\n\n\t// Performs substring replacements on the URI.\n\tURISubstring []substrReplacer `json:\"uri_substring,omitempty\"`\n\n\t// 
Performs regular expression replacements on the URI path.\n\tPathRegexp []*regexReplacer `json:\"path_regexp,omitempty\"`\n\n\t// Mutates the query string of the URI.\n\tQuery *queryOps `json:\"query,omitempty\"`\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Rewrite) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.rewrite\",\n\t\tNew: func() caddy.Module { return new(Rewrite) },\n\t}\n}\n\n// Provision sets up rewr.\nfunc (rewr *Rewrite) Provision(ctx caddy.Context) error {\n\trewr.logger = ctx.Logger()\n\n\tfor i, rep := range rewr.PathRegexp {\n\t\tif rep.Find == \"\" {\n\t\t\treturn fmt.Errorf(\"path_regexp find cannot be empty\")\n\t\t}\n\t\tre, err := regexp.Compile(rep.Find)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"compiling regular expression %d: %v\", i, err)\n\t\t}\n\t\trep.re = re\n\t}\n\tif rewr.Query != nil {\n\t\tfor _, replacementOp := range rewr.Query.Replace {\n\t\t\terr := replacementOp.Provision(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"compiling regular expression %s in query rewrite replace operation: %v\", replacementOp.SearchRegexp, err)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (rewr Rewrite) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tconst message = \"rewrote request\"\n\n\tc := rewr.logger.Check(zap.DebugLevel, message)\n\tif c == nil {\n\t\trewr.Rewrite(r, repl)\n\t\treturn next.ServeHTTP(w, r)\n\t}\n\n\tchanged := rewr.Rewrite(r, repl)\n\n\tif changed {\n\t\tc.Write(\n\t\t\tzap.Object(\"request\", caddyhttp.LoggableHTTPRequest{Request: r}),\n\t\t\tzap.String(\"method\", r.Method),\n\t\t\tzap.String(\"uri\", r.RequestURI),\n\t\t)\n\t}\n\n\treturn next.ServeHTTP(w, r)\n}\n\n// Rewrite performs the rewrites on r using repl, which should\n// have been obtained from r, but is passed in for efficiency.\n// It returns true if 
any changes were made to r.\nfunc (rewr Rewrite) Rewrite(r *http.Request, repl *caddy.Replacer) bool {\n\toldMethod := r.Method\n\toldURI := r.RequestURI\n\n\t// method\n\tif rewr.Method != \"\" {\n\t\tr.Method = strings.ToUpper(repl.ReplaceAll(rewr.Method, \"\"))\n\t}\n\n\t// uri (path, query string and... fragment, because why not)\n\tif uri := rewr.URI; uri != \"\" {\n\t\t// find the bounds of each part of the URI that exist\n\t\tpathStart, qsStart, fragStart := -1, -1, -1\n\t\tpathEnd, qsEnd := -1, -1\n\tloop:\n\t\tfor i, ch := range uri {\n\t\t\tswitch {\n\t\t\tcase ch == '?' && qsStart < 0:\n\t\t\t\tpathEnd, qsStart = i, i+1\n\t\t\tcase ch == '#' && fragStart < 0: // everything after fragment is fragment (very clear in RFC 3986 section 4.2)\n\t\t\t\tif qsStart < 0 {\n\t\t\t\t\tpathEnd = i\n\t\t\t\t} else {\n\t\t\t\t\tqsEnd = i\n\t\t\t\t}\n\t\t\t\tfragStart = i + 1\n\t\t\t\tbreak loop\n\t\t\tcase pathStart < 0 && qsStart < 0:\n\t\t\t\tpathStart = i\n\t\t\t}\n\t\t}\n\t\tif pathStart >= 0 && pathEnd < 0 {\n\t\t\tpathEnd = len(uri)\n\t\t}\n\t\tif qsStart >= 0 && qsEnd < 0 {\n\t\t\tqsEnd = len(uri)\n\t\t}\n\n\t\t// isolate the three main components of the URI\n\t\tvar path, query, frag string\n\t\tif pathStart > -1 {\n\t\t\tpath = uri[pathStart:pathEnd]\n\t\t}\n\t\tif qsStart > -1 {\n\t\t\tquery = uri[qsStart:qsEnd]\n\t\t}\n\t\tif fragStart > -1 {\n\t\t\tfrag = uri[fragStart:]\n\t\t}\n\n\t\t// build components which are specified, and store them\n\t\t// in a temporary variable so that they all read the\n\t\t// same version of the URI\n\t\tvar newPath, newQuery, newFrag string\n\n\t\tif path != \"\" {\n\t\t\t// replace the `path` placeholder to escaped path\n\t\t\tpathPlaceholder := \"{http.request.uri.path}\"\n\t\t\tif strings.Contains(path, pathPlaceholder) {\n\t\t\t\tpath = strings.ReplaceAll(path, pathPlaceholder, r.URL.EscapedPath())\n\t\t\t}\n\n\t\t\tnewPath = repl.ReplaceAll(path, \"\")\n\t\t}\n\n\t\t// before continuing, we need to check if a query 
string\n\t\t// snuck into the path component during replacements\n\t\tif before, after, found := strings.Cut(newPath, \"?\"); found {\n\t\t\t// recompute; new path contains a query string\n\t\t\tvar injectedQuery string\n\t\t\tnewPath, injectedQuery = before, after\n\t\t\t// don't overwrite explicitly-configured query string\n\t\t\tif query == \"\" {\n\t\t\t\tquery = injectedQuery\n\t\t\t}\n\t\t}\n\n\t\tif query != \"\" {\n\t\t\tnewQuery = buildQueryString(query, repl)\n\t\t}\n\t\tif frag != \"\" {\n\t\t\tnewFrag = repl.ReplaceAll(frag, \"\")\n\t\t}\n\n\t\t// update the URI with the new components\n\t\t// only after building them\n\t\tif pathStart >= 0 {\n\t\t\tif path, err := url.PathUnescape(newPath); err != nil {\n\t\t\t\tr.URL.Path = newPath\n\t\t\t} else {\n\t\t\t\tr.URL.Path = path\n\t\t\t}\n\t\t\tr.URL.RawPath = \"\" // force recomputing when EscapedPath() is called\n\t\t}\n\t\tif qsStart >= 0 {\n\t\t\tr.URL.RawQuery = newQuery\n\t\t}\n\t\tif fragStart >= 0 {\n\t\t\tr.URL.Fragment = newFrag\n\t\t}\n\t}\n\n\t// strip path prefix or suffix\n\tif rewr.StripPathPrefix != \"\" {\n\t\tprefix := repl.ReplaceAll(rewr.StripPathPrefix, \"\")\n\t\tif !strings.HasPrefix(prefix, \"/\") {\n\t\t\tprefix = \"/\" + prefix\n\t\t}\n\t\tmergeSlashes := !strings.Contains(prefix, \"//\")\n\t\tchangePath(r, func(escapedPath string) string {\n\t\t\tescapedPath = caddyhttp.CleanPath(escapedPath, mergeSlashes)\n\t\t\treturn trimPathPrefix(escapedPath, prefix)\n\t\t})\n\t}\n\tif rewr.StripPathSuffix != \"\" {\n\t\tsuffix := repl.ReplaceAll(rewr.StripPathSuffix, \"\")\n\t\tmergeSlashes := !strings.Contains(suffix, \"//\")\n\t\tchangePath(r, func(escapedPath string) string {\n\t\t\tescapedPath = caddyhttp.CleanPath(escapedPath, mergeSlashes)\n\t\t\treturn reverse(trimPathPrefix(reverse(escapedPath), reverse(suffix)))\n\t\t})\n\t}\n\n\t// substring replacements in URI\n\tfor _, rep := range rewr.URISubstring {\n\t\trep.do(r, repl)\n\t}\n\n\t// regular expression replacements on the 
path\n\tfor _, rep := range rewr.PathRegexp {\n\t\trep.do(r, repl)\n\t}\n\n\t// apply query operations\n\tif rewr.Query != nil {\n\t\trewr.Query.do(r, repl)\n\t}\n\n\t// update the encoded copy of the URI\n\tr.RequestURI = r.URL.RequestURI()\n\n\t// return true if anything changed\n\treturn r.Method != oldMethod || r.RequestURI != oldURI\n}\n\n// buildQueryString takes an input query string and\n// performs replacements on each component, returning\n// the resulting query string. This function appends\n// duplicate keys rather than replaces.\nfunc buildQueryString(qs string, repl *caddy.Replacer) string {\n\tvar sb strings.Builder\n\n\t// first component must be key, which is the same\n\t// as if we just wrote a value in previous iteration\n\twroteVal := true\n\n\tfor len(qs) > 0 {\n\t\t// determine the end of this component, which will be at\n\t\t// the next equal sign or ampersand, whichever comes first\n\t\tnextEq, nextAmp := strings.Index(qs, \"=\"), strings.Index(qs, \"&\")\n\t\tampIsNext := nextAmp >= 0 && (nextAmp < nextEq || nextEq < 0)\n\t\tend := len(qs) // assume no delimiter remains...\n\t\tif ampIsNext {\n\t\t\tend = nextAmp // ...unless ampersand is first...\n\t\t} else if nextEq >= 0 && (nextEq < nextAmp || nextAmp < 0) {\n\t\t\tend = nextEq // ...or unless equal is first.\n\t\t}\n\n\t\t// consume the component and write the result\n\t\tcomp := qs[:end]\n\t\tcomp, _ = repl.ReplaceFunc(comp, func(name string, val any) (any, error) {\n\t\t\tif name == \"http.request.uri.query\" && wroteVal {\n\t\t\t\treturn val, nil // already escaped\n\t\t\t}\n\t\t\tvar valStr string\n\t\t\tswitch v := val.(type) {\n\t\t\tcase string:\n\t\t\t\tvalStr = v\n\t\t\tcase fmt.Stringer:\n\t\t\t\tvalStr = v.String()\n\t\t\tcase int:\n\t\t\t\tvalStr = strconv.Itoa(v)\n\t\t\tdefault:\n\t\t\t\tvalStr = fmt.Sprintf(\"%+v\", v)\n\t\t\t}\n\t\t\treturn url.QueryEscape(valStr), nil\n\t\t})\n\t\tif end < len(qs) {\n\t\t\tend++ // consume delimiter\n\t\t}\n\t\tqs = qs[end:]\n\n\t\t// 
if previous iteration wrote a value,\n\t\t// that means we are writing a key\n\t\tif wroteVal {\n\t\t\tif sb.Len() > 0 && len(comp) > 0 {\n\t\t\t\tsb.WriteRune('&')\n\t\t\t}\n\t\t} else {\n\t\t\tsb.WriteRune('=')\n\t\t}\n\t\tsb.WriteString(comp)\n\n\t\t// remember for the next iteration that we just wrote a value,\n\t\t// which means the next iteration MUST write a key\n\t\twroteVal = ampIsNext\n\t}\n\n\treturn sb.String()\n}\n\n// trimPathPrefix is like strings.TrimPrefix, but customized for advanced URI\n// path prefix matching. The string prefix will be trimmed from the beginning\n// of escapedPath if escapedPath starts with prefix. Rather than a naive 1:1\n// comparison of each byte to determine if escapedPath starts with prefix,\n// both strings are iterated in lock-step, and if prefix has a '%' encoding\n// at a particular position, escapedPath must also have the same encoding\n// representation for that character. In other words, if the prefix string\n// uses the escaped form for a character, escapedPath must literally use the\n// same escape at that position. 
Otherwise, all character comparisons are\n// performed in normalized/unescaped space.\nfunc trimPathPrefix(escapedPath, prefix string) string {\n\tvar iPath, iPrefix int\n\tfor iPath < len(escapedPath) && iPrefix < len(prefix) {\n\t\tprefixCh := prefix[iPrefix]\n\t\tch := string(escapedPath[iPath])\n\n\t\tif ch == \"%\" && prefixCh != '%' && len(escapedPath) >= iPath+3 {\n\t\t\tvar err error\n\t\t\tch, err = url.PathUnescape(escapedPath[iPath : iPath+3])\n\t\t\tif err != nil {\n\t\t\t\t// should be impossible unless EscapedPath() is returning invalid values!\n\t\t\t\treturn escapedPath\n\t\t\t}\n\t\t\tiPath += 2\n\t\t}\n\n\t\t// prefix comparisons are case-insensitive for consistency with\n\t\t// the path matcher, which is case-insensitive for good reasons\n\t\tif !strings.EqualFold(ch, string(prefixCh)) {\n\t\t\treturn escapedPath\n\t\t}\n\n\t\tiPath++\n\t\tiPrefix++\n\t}\n\n\t// if we iterated through the entire prefix, we found it, so trim it\n\tif iPrefix >= len(prefix) {\n\t\treturn escapedPath[iPath:]\n\t}\n\n\t// otherwise we did not find the prefix\n\treturn escapedPath\n}\n\nfunc reverse(s string) string {\n\tr := []rune(s)\n\tfor i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {\n\t\tr[i], r[j] = r[j], r[i]\n\t}\n\treturn string(r)\n}\n\n// substrReplacer describes a simple and fast substring replacement.\ntype substrReplacer struct {\n\t// A substring to find. Supports placeholders.\n\tFind string `json:\"find,omitempty\"`\n\n\t// The substring to replace with. 
Supports placeholders.\n\tReplace string `json:\"replace,omitempty\"`\n\n\t// Maximum number of replacements per string.\n\t// Set to <= 0 for no limit (default).\n\tLimit int `json:\"limit,omitempty\"`\n}\n\n// do performs the substring replacement on r.\nfunc (rep substrReplacer) do(r *http.Request, repl *caddy.Replacer) {\n\tif rep.Find == \"\" {\n\t\treturn\n\t}\n\n\tlim := rep.Limit\n\tif lim == 0 {\n\t\tlim = -1\n\t}\n\n\tfind := repl.ReplaceAll(rep.Find, \"\")\n\treplace := repl.ReplaceAll(rep.Replace, \"\")\n\n\tmergeSlashes := !strings.Contains(rep.Find, \"//\")\n\n\tchangePath(r, func(pathOrRawPath string) string {\n\t\treturn strings.Replace(caddyhttp.CleanPath(pathOrRawPath, mergeSlashes), find, replace, lim)\n\t})\n\n\tr.URL.RawQuery = strings.Replace(r.URL.RawQuery, find, replace, lim)\n}\n\n// regexReplacer describes a replacement using a regular expression.\ntype regexReplacer struct {\n\t// The regular expression to find.\n\tFind string `json:\"find,omitempty\"`\n\n\t// The substring to replace with. 
Supports placeholders and\n\t// regular expression capture groups.\n\tReplace string `json:\"replace,omitempty\"`\n\n\tre *regexp.Regexp\n}\n\nfunc (rep regexReplacer) do(r *http.Request, repl *caddy.Replacer) {\n\tif rep.Find == \"\" || rep.re == nil {\n\t\treturn\n\t}\n\treplace := repl.ReplaceAll(rep.Replace, \"\")\n\tchangePath(r, func(pathOrRawPath string) string {\n\t\treturn rep.re.ReplaceAllString(pathOrRawPath, replace)\n\t})\n}\n\nfunc changePath(req *http.Request, newVal func(pathOrRawPath string) string) {\n\treq.URL.RawPath = newVal(req.URL.EscapedPath())\n\tif p, err := url.PathUnescape(req.URL.RawPath); err == nil && p != \"\" {\n\t\treq.URL.Path = p\n\t} else {\n\t\treq.URL.Path = newVal(req.URL.Path)\n\t}\n\t// RawPath is only set if it's different from the normalized Path (std lib)\n\tif req.URL.RawPath == req.URL.Path {\n\t\treq.URL.RawPath = \"\"\n\t}\n}\n\n// queryOps describes the operations to perform on query keys: add, set, rename and delete.\ntype queryOps struct {\n\t// Renames a query key from Key to Val, without affecting the value.\n\tRename []queryOpsArguments `json:\"rename,omitempty\"`\n\n\t// Sets query parameters; overwrites a query key with the given value.\n\tSet []queryOpsArguments `json:\"set,omitempty\"`\n\n\t// Adds query parameters; does not overwrite an existing query field,\n\t// and only appends an additional value for that key if any already exist.\n\tAdd []queryOpsArguments `json:\"add,omitempty\"`\n\n\t// Replaces query parameters.\n\tReplace []*queryOpsReplacement `json:\"replace,omitempty\"`\n\n\t// Deletes a given query key by name.\n\tDelete []string `json:\"delete,omitempty\"`\n}\n\n// Provision compiles the query replace operation regex.\nfunc (replacement *queryOpsReplacement) Provision(_ caddy.Context) error {\n\tif replacement.SearchRegexp != \"\" {\n\t\tre, err := regexp.Compile(replacement.SearchRegexp)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"replacement for query field '%s': %v\", replacement.Key, 
err)\n\t\t}\n\t\treplacement.re = re\n\t}\n\treturn nil\n}\n\nfunc (q *queryOps) do(r *http.Request, repl *caddy.Replacer) {\n\tquery := r.URL.Query()\n\tfor _, renameParam := range q.Rename {\n\t\tkey := repl.ReplaceAll(renameParam.Key, \"\")\n\t\tval := repl.ReplaceAll(renameParam.Val, \"\")\n\t\tif key == \"\" || val == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tquery[val] = query[key]\n\t\tdelete(query, key)\n\t}\n\n\tfor _, setParam := range q.Set {\n\t\tkey := repl.ReplaceAll(setParam.Key, \"\")\n\t\tif key == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tval := repl.ReplaceAll(setParam.Val, \"\")\n\t\tquery[key] = []string{val}\n\t}\n\n\tfor _, addParam := range q.Add {\n\t\tkey := repl.ReplaceAll(addParam.Key, \"\")\n\t\tif key == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tval := repl.ReplaceAll(addParam.Val, \"\")\n\t\tquery[key] = append(query[key], val)\n\t}\n\n\tfor _, replaceParam := range q.Replace {\n\t\tkey := repl.ReplaceAll(replaceParam.Key, \"\")\n\t\tsearch := repl.ReplaceKnown(replaceParam.Search, \"\")\n\t\treplace := repl.ReplaceKnown(replaceParam.Replace, \"\")\n\n\t\t// replace all query keys...\n\t\tif key == \"*\" {\n\t\t\tfor fieldName, vals := range query {\n\t\t\t\tfor i := range vals {\n\t\t\t\t\tif replaceParam.re != nil {\n\t\t\t\t\t\tquery[fieldName][i] = replaceParam.re.ReplaceAllString(query[fieldName][i], replace)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tquery[fieldName][i] = strings.ReplaceAll(query[fieldName][i], search, replace)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// ...or replace only the values of the given query key\n\t\tfor i := range query[key] {\n\t\t\tif replaceParam.re != nil {\n\t\t\t\tquery[key][i] = replaceParam.re.ReplaceAllString(query[key][i], replace)\n\t\t\t} else {\n\t\t\t\tquery[key][i] = strings.ReplaceAll(query[key][i], search, replace)\n\t\t\t}\n\t\t}\n\t}\n\n\tfor _, deleteParam := range q.Delete {\n\t\tparam := repl.ReplaceAll(deleteParam, \"\")\n\t\tif param == \"\" 
{\n\t\t\tcontinue\n\t\t}\n\t\tdelete(query, param)\n\t}\n\n\tr.URL.RawQuery = query.Encode()\n}\n\ntype queryOpsArguments struct {\n\t// A key in the query string. Note that query string keys may appear multiple times.\n\tKey string `json:\"key,omitempty\"`\n\n\t// The value for the given operation; for add and set, this is\n\t// simply the value of the query, and for rename this is the\n\t// query key to rename to.\n\tVal string `json:\"val,omitempty\"`\n}\n\ntype queryOpsReplacement struct {\n\t// The key to replace in the query string.\n\tKey string `json:\"key,omitempty\"`\n\n\t// The substring to search for.\n\tSearch string `json:\"search,omitempty\"`\n\n\t// The regular expression to search with.\n\tSearchRegexp string `json:\"search_regexp,omitempty\"`\n\n\t// The string with which to replace matches.\n\tReplace string `json:\"replace,omitempty\"`\n\n\tre *regexp.Regexp\n}\n\n// Interface guard\nvar _ caddyhttp.MiddlewareHandler = (*Rewrite)(nil)\n"
  },
  {
    "path": "modules/caddyhttp/rewrite/rewrite_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage rewrite\n\nimport (\n\t\"net/http\"\n\t\"regexp\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestRewrite(t *testing.T) {\n\trepl := caddy.NewReplacer()\n\n\tfor i, tc := range []struct {\n\t\tinput, expect *http.Request\n\t\trule          Rewrite\n\t}{\n\t\t{\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{Method: \"GET\", URI: \"/\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{Method: \"POST\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"POST\", \"/\"),\n\t\t},\n\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"foo\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"foo\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar%3Fbaz?c=d\"),\n\t\t\texpect: newRequest(t, \"GET\", 
\"/bar%3Fbaz?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"{http.request.uri.path}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar%3Fbaz\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/bar%3Fbaz\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo{http.request.uri.path}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/index.php?p={http.request.uri.path}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/index.php?p=%2Ffoo%2Fbar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"?a=b&{http.request.uri.query}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/?a=b\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/?c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/?c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"?c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/?c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: 
\"{http.request.uri.path}?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"{http.request.uri.path}?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/index.php?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/index.php?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"?a=b&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?a=b&c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/index.php?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/index.php?a=b&c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/index.php?c=d&{http.request.uri.query}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/index.php?c=d&a=b\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/index.php?{http.request.uri.query}&p={http.request.uri.path}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/index.php?a=b&p=%2Ffoo%2Fbar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"{http.request.uri.path}?\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar?a=b&c=d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"?qs={http.request.uri.query}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo?a=b&c=d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?qs=a%3Db%26c%3Dd\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo?{http.request.uri.query}#frag\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo?a=b#frag\"),\n\t\t},\n\t\t{\n\t\t\trule:   
Rewrite{URI: \"/foo{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar?a=b\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo{http.request.uri}?c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/foo{http.request.uri}?{http.request.uri.query}&c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar?a=b&c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/bar?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/bar?a=b\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"{http.request.uri.path}bar?c=d\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/i{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/%C2%B7%E2%88%B5.png\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/i/%C2%B7%E2%88%B5.png\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/i{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/·∵.png?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/i/%C2%B7%E2%88%B5.png?a=b\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/i{http.request.uri}\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/%C2%B7%E2%88%B5.png?a=b\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/i/%C2%B7%E2%88%B5.png?a=b\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/bar#?\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo#fragFirst?c=d\"), // not a valid query string (is part of fragment)\n\t\t\texpect: newRequest(t, \"GET\", \"/bar#?\"),             // I think this is right? 
but who knows; std lib drops fragment when parsing\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/bar\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo#fragFirst?c=d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/bar#fragFirst?c=d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URI: \"/api/admin/panel\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/api/admin%2Fpanel\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/api/admin/panel\"),\n\t\t},\n\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/prefix/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/prefix/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/prefix\"),\n\t\t\texpect: newRequest(t, \"GET\", \"\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/prefix/foo%2Fbar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo%2Fbar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/prefix/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/prefix/bar\"),\n\t\t},\n\t\t{\n\t\t\trule: Rewrite{StripPathPrefix: \"//prefix\"},\n\t\t\t// scheme and host needed for URL parser to succeed in setting up test\n\t\t\tinput:  newRequest(t, \"GET\", \"http://host//prefix/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"http://host/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: 
\"//prefix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/prefix/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/prefix/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/a%2Fb/c\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/a%2Fb/c/d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/a%2Fb/c\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/a%2fb/c/d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/a/b/c\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/a%2Fb/c/d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"/a%2Fb/c\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/a/b/c/d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/a/b/c/d\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathPrefix: \"//a%2Fb/c\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/a/b/c/d\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/a/b/c/d\"),\n\t\t},\n\n\t\t{\n\t\t\trule:   Rewrite{StripPathSuffix: \"/suffix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathSuffix: \"suffix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar/suffix\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar/\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathSuffix: \"suffix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo%2Fbar/suffix\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo%2Fbar/\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathSuffix: \"%2fsuffix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo%2Fbar%2fsuffix\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo%2Fbar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{StripPathSuffix: \"/suffix\"},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/suffix/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/suffix/bar\"),\n\t\t},\n\n\t\t{\n\t\t\trule:   Rewrite{URISubstring: 
[]substrReplacer{{Find: \"findme\", Replace: \"replaced\"}}},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URISubstring: []substrReplacer{{Find: \"findme\", Replace: \"replaced\"}}},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/findme/bar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/replaced/bar\"),\n\t\t},\n\t\t{\n\t\t\trule:   Rewrite{URISubstring: []substrReplacer{{Find: \"findme\", Replace: \"replaced\"}}},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo/findme%2Fbar\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/replaced%2Fbar\"),\n\t\t},\n\n\t\t{\n\t\t\trule:   Rewrite{PathRegexp: []*regexReplacer{{Find: \"/{2,}\", Replace: \"/\"}}},\n\t\t\tinput:  newRequest(t, \"GET\", \"/foo//bar///baz?a=b//c\"),\n\t\t\texpect: newRequest(t, \"GET\", \"/foo/bar/baz?a=b//c\"),\n\t\t},\n\t} {\n\t\t// copy the original input just enough so that we can\n\t\t// compare it after the rewrite to see if it changed\n\t\turlCopy := *tc.input.URL\n\t\toriginalInput := &http.Request{\n\t\t\tMethod:     tc.input.Method,\n\t\t\tRequestURI: tc.input.RequestURI,\n\t\t\tURL:        &urlCopy,\n\t\t}\n\n\t\t// populate the replacer just enough for our tests\n\t\trepl.Set(\"http.request.uri\", tc.input.RequestURI)\n\t\trepl.Set(\"http.request.uri.path\", tc.input.URL.Path)\n\t\trepl.Set(\"http.request.uri.query\", tc.input.URL.RawQuery)\n\n\t\t// we can't directly call Provision() without a valid caddy.Context\n\t\t// (TODO: fix that) so here we ad-hoc compile the regex\n\t\tfor _, rep := range tc.rule.PathRegexp {\n\t\t\tre, err := regexp.Compile(rep.Find)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\trep.re = re\n\t\t}\n\n\t\tchanged := tc.rule.Rewrite(tc.input, repl)\n\n\t\tif expected, actual := !reqEqual(originalInput, tc.input), changed; expected != actual {\n\t\t\tt.Errorf(\"Test %d: Expected changed=%t but was %t\", i, expected, actual)\n\t\t}\n\t\tif expected, actual := 
tc.expect.Method, tc.input.Method; expected != actual {\n\t\t\tt.Errorf(\"Test %d: Expected Method='%s' but got '%s'\", i, expected, actual)\n\t\t}\n\t\tif expected, actual := tc.expect.RequestURI, tc.input.RequestURI; expected != actual {\n\t\t\tt.Errorf(\"Test %d: Expected RequestURI='%s' but got '%s'\", i, expected, actual)\n\t\t}\n\t\tif expected, actual := tc.expect.URL.String(), tc.input.URL.String(); expected != actual {\n\t\t\tt.Errorf(\"Test %d: Expected URL='%s' but got '%s'\", i, expected, actual)\n\t\t}\n\t\tif expected, actual := tc.expect.URL.RequestURI(), tc.input.URL.RequestURI(); expected != actual {\n\t\t\tt.Errorf(\"Test %d: Expected URL.RequestURI()='%s' but got '%s'\", i, expected, actual)\n\t\t}\n\t\tif expected, actual := tc.expect.URL.Fragment, tc.input.URL.Fragment; expected != actual {\n\t\t\tt.Errorf(\"Test %d: Expected URL.Fragment='%s' but got '%s'\", i, expected, actual)\n\t\t}\n\t}\n}\n\nfunc newRequest(t *testing.T, method, uri string) *http.Request {\n\treq, err := http.NewRequest(method, uri, nil)\n\tif err != nil {\n\t\tt.Fatalf(\"error creating request: %v\", err)\n\t}\n\treq.RequestURI = req.URL.RequestURI() // simulate incoming request\n\treturn req\n}\n\n// reqEqual reports whether r1 and r2 are equal enough for our purposes.\nfunc reqEqual(r1, r2 *http.Request) bool {\n\tif r1.Method != r2.Method {\n\t\treturn false\n\t}\n\tif r1.RequestURI != r2.RequestURI {\n\t\treturn false\n\t}\n\tif (r1.URL == nil && r2.URL != nil) || (r1.URL != nil && r2.URL == nil) {\n\t\treturn false\n\t}\n\tif r1.URL == nil && r2.URL == nil {\n\t\treturn true\n\t}\n\treturn r1.URL.Scheme == r2.URL.Scheme &&\n\t\tr1.URL.Host == r2.URL.Host &&\n\t\tr1.URL.Path == r2.URL.Path &&\n\t\tr1.URL.RawPath == r2.URL.RawPath &&\n\t\tr1.URL.RawQuery == r2.URL.RawQuery &&\n\t\tr1.URL.Fragment == r2.URL.Fragment\n}\n"
  },
  {
    "path": "modules/caddyhttp/routes.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// Route consists of a set of rules for matching HTTP requests,\n// a list of handlers to execute, and optional flow control\n// parameters which customize the handling of HTTP requests\n// in a highly flexible and performant manner.\ntype Route struct {\n\t// Group is an optional name for a group to which this\n\t// route belongs. Grouping a route makes it mutually\n\t// exclusive with others in its group; if a route belongs\n\t// to a group, only the first matching route in that group\n\t// will be executed.\n\tGroup string `json:\"group,omitempty\"`\n\n\t// The matcher sets which will be used to qualify this\n\t// route for a request (essentially the \"if\" statement\n\t// of this route). Each matcher set is OR'ed, but matchers\n\t// within a set are AND'ed together.\n\tMatcherSetsRaw RawMatcherSets `json:\"match,omitempty\" caddy:\"namespace=http.matchers\"`\n\n\t// The list of handlers for this route. Upon matching a request, they are chained\n\t// together in a middleware fashion: requests flow from the first handler to the last\n\t// (top of the list to the bottom), with the possibility that any handler could stop\n\t// the chain and/or return an error. 
Responses flow back through the chain (bottom of\n\t// the list to the top) as they are written out to the client.\n\t//\n\t// Not all handlers call the next handler in the chain. For example, the reverse_proxy\n\t// handler always sends a request upstream or returns an error. Thus, configuring\n\t// handlers after reverse_proxy in the same route is illogical, since they would never\n\t// be executed. You will want to put handlers which originate the response at the very\n\t// end of your route(s). The documentation for a module should state whether it invokes\n\t// the next handler, but sometimes it is common sense.\n\t//\n\t// Some handlers manipulate the response. Remember that requests flow down the list, and\n\t// responses flow up the list.\n\t//\n\t// For example, if you wanted to use both `templates` and `encode` handlers, you would\n\t// need to put `templates` after `encode` in your route, because responses flow up.\n\t// Thus, `templates` will be able to parse and execute the plain-text response as a\n\t// template, and then return it up to the `encode` handler which will then compress it\n\t// into a binary format.\n\t//\n\t// If `templates` came before `encode`, then `encode` would write a compressed,\n\t// binary-encoded response to `templates` which would not be able to parse the response\n\t// properly.\n\t//\n\t// The correct order, then, is this:\n\t//\n\t//     [\n\t//         {\"handler\": \"encode\"},\n\t//         {\"handler\": \"templates\"},\n\t//         {\"handler\": \"file_server\"}\n\t//     ]\n\t//\n\t// The request flows ⬇️ DOWN (`encode` -> `templates` -> `file_server`).\n\t//\n\t// 1. First, `encode` will choose how to `encode` the response and wrap the response.\n\t// 2. Then, `templates` will wrap the response with a buffer.\n\t// 3. Finally, `file_server` will originate the content from a file.\n\t//\n\t// The response flows ⬆️ UP (`file_server` -> `templates` -> `encode`):\n\t//\n\t// 1. 
First, `file_server` will write the file to the response.\n\t// 2. That write will be buffered and then executed by `templates`.\n\t// 3. Lastly, the write from `templates` will flow into `encode` which will compress the stream.\n\t//\n\t// If you think of routes in this way, it will be easy and even fun to solve the puzzle of writing correct routes.\n\tHandlersRaw []json.RawMessage `json:\"handle,omitempty\" caddy:\"namespace=http.handlers inline_key=handler\"`\n\n\t// If true, no more routes will be executed after this one.\n\tTerminal bool `json:\"terminal,omitempty\"`\n\n\t// decoded values\n\tMatcherSets MatcherSets         `json:\"-\"`\n\tHandlers    []MiddlewareHandler `json:\"-\"`\n\n\tmiddleware  []Middleware\n\tmetrics     *Metrics\n\tmetricsCtx  caddy.Context\n\thandlerName string\n}\n\n// Empty returns true if the route has all zero/default values.\nfunc (r Route) Empty() bool {\n\treturn len(r.MatcherSetsRaw) == 0 &&\n\t\tlen(r.MatcherSets) == 0 &&\n\t\tlen(r.HandlersRaw) == 0 &&\n\t\tlen(r.Handlers) == 0 &&\n\t\t!r.Terminal &&\n\t\tr.Group == \"\"\n}\n\nfunc (r Route) String() string {\n\tvar handlersRaw strings.Builder\n\thandlersRaw.WriteByte('[')\n\tfor _, hr := range r.HandlersRaw {\n\t\thandlersRaw.WriteByte(' ')\n\t\thandlersRaw.WriteString(string(hr))\n\t}\n\thandlersRaw.WriteByte(']')\n\n\treturn fmt.Sprintf(`{Group:\"%s\" MatcherSetsRaw:%s HandlersRaw:%s Terminal:%t}`,\n\t\tr.Group, r.MatcherSetsRaw, handlersRaw.String(), r.Terminal)\n}\n\n// Provision sets up both the matchers and handlers in the route.\nfunc (r *Route) Provision(ctx caddy.Context, metrics *Metrics) error {\n\terr := r.ProvisionMatchers(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn r.ProvisionHandlers(ctx, metrics)\n}\n\n// ProvisionMatchers sets up all the matchers by loading the\n// matcher modules. 
Only call this method directly if you need\n// to set up matchers and handlers separately without having\n// to provision a second time; otherwise use Provision instead.\nfunc (r *Route) ProvisionMatchers(ctx caddy.Context) error {\n\t// matchers\n\tmatchersIface, err := ctx.LoadModule(r, \"MatcherSetsRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading matcher modules: %v\", err)\n\t}\n\terr = r.MatcherSets.FromInterface(matchersIface)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// ProvisionHandlers sets up all the handlers by loading the\n// handler modules. Only call this method directly if you need\n// to set up matchers and handlers separately without having\n// to provision a second time; otherwise use Provision instead.\nfunc (r *Route) ProvisionHandlers(ctx caddy.Context, metrics *Metrics) error {\n\thandlersIface, err := ctx.LoadModule(r, \"HandlersRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading handler modules: %v\", err)\n\t}\n\tfor _, handler := range handlersIface.([]any) {\n\t\tr.Handlers = append(r.Handlers, handler.(MiddlewareHandler))\n\t}\n\n\t// Store metrics info for route-level instrumentation (applied once\n\t// per route in wrapRoute, instead of per-handler which was redundant).\n\tr.metrics = metrics\n\tr.metricsCtx = ctx\n\tif len(r.Handlers) > 0 {\n\t\tr.handlerName = caddy.GetModuleName(r.Handlers[0])\n\t}\n\n\t// Make ProvisionHandlers idempotent by clearing the middleware field\n\tr.middleware = []Middleware{}\n\n\t// pre-compile the middleware handler chain\n\tfor _, midhandler := range r.Handlers {\n\t\tr.middleware = append(r.middleware, wrapMiddleware(ctx, midhandler))\n\t}\n\treturn nil\n}\n\n// Compile prepares a middleware chain from the route list.\n// This should only be done once during the request, just\n// before the middleware chain is executed.\nfunc (r Route) Compile(next Handler) Handler {\n\treturn wrapRoute(r)(next)\n}\n\n// RouteList is a list of server routes that can\n// create a 
middleware chain.\ntype RouteList []Route\n\n// Provision sets up both the matchers and handlers in the routes.\nfunc (routes RouteList) Provision(ctx caddy.Context) error {\n\terr := routes.ProvisionMatchers(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn routes.ProvisionHandlers(ctx, nil)\n}\n\n// ProvisionMatchers sets up all the matchers by loading the\n// matcher modules. Only call this method directly if you need\n// to set up matchers and handlers separately without having\n// to provision a second time; otherwise use Provision instead.\nfunc (routes RouteList) ProvisionMatchers(ctx caddy.Context) error {\n\tfor i := range routes {\n\t\terr := routes[i].ProvisionMatchers(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"route %d: %v\", i, err)\n\t\t}\n\t}\n\treturn nil\n}\n\n// ProvisionHandlers sets up all the handlers by loading the\n// handler modules. Only call this method directly if you need\n// to set up matchers and handlers separately without having\n// to provision a second time; otherwise use Provision instead.\nfunc (routes RouteList) ProvisionHandlers(ctx caddy.Context, metrics *Metrics) error {\n\tfor i := range routes {\n\t\terr := routes[i].ProvisionHandlers(ctx, metrics)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"route %d: %v\", i, err)\n\t\t}\n\t}\n\treturn nil\n}\n\n// Compile prepares a middleware chain from the route list.\n// This should only be done either once during provisioning\n// for top-level routes, or on each request just before the\n// middleware chain is executed for subroutes.\nfunc (routes RouteList) Compile(next Handler) Handler {\n\tmid := make([]Middleware, 0, len(routes))\n\tfor _, route := range routes {\n\t\tmid = append(mid, wrapRoute(route))\n\t}\n\tstack := next\n\tfor i := len(mid) - 1; i >= 0; i-- {\n\t\tstack = mid[i](stack)\n\t}\n\treturn stack\n}\n\n// wrapRoute wraps route with a middleware and handler so that it can\n// be chained in and defer evaluation of its matchers to request-time.\n// 
Like wrapMiddleware, it is vital that this wrapping takes place in\n// its own stack frame so as to not overwrite the reference to the\n// intended route by looping and changing the reference each time.\nfunc wrapRoute(route Route) Middleware {\n\treturn func(next Handler) Handler {\n\t\treturn HandlerFunc(func(rw http.ResponseWriter, req *http.Request) error {\n\t\t\t// TODO: Update this comment, it seems we've moved the copy into the handler?\n\t\t\t// copy the next handler (it's an interface, so it's just\n\t\t\t// a very lightweight copy of a pointer); this is important\n\t\t\t// because this is a closure to the func below, which\n\t\t\t// re-assigns the value as it compiles the middleware stack;\n\t\t\t// if we don't make this copy, we'd affect the underlying\n\t\t\t// pointer for all future requests (yikes); we could\n\t\t\t// alternatively solve this by moving the func below out of\n\t\t\t// this closure and into a standalone package-level func,\n\t\t\t// but I just thought this made more sense\n\t\t\tnextCopy := next\n\n\t\t\t// route must match at least one of the matcher sets\n\t\t\tmatches, err := route.MatcherSets.AnyMatchWithError(req)\n\t\t\tif err != nil {\n\t\t\t\t// allow matchers the opportunity to short circuit\n\t\t\t\t// the request and trigger the error handling chain\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif !matches {\n\t\t\t\t// call the next handler, and skip this one,\n\t\t\t\t// since the matcher didn't match\n\t\t\t\treturn nextCopy.ServeHTTP(rw, req)\n\t\t\t}\n\n\t\t\t// if route is part of a group, ensure only the\n\t\t\t// first matching route in the group is applied\n\t\t\tif route.Group != \"\" {\n\t\t\t\tgroups := req.Context().Value(routeGroupCtxKey).(map[string]struct{})\n\n\t\t\t\tif _, ok := groups[route.Group]; ok {\n\t\t\t\t\t// this group has already been\n\t\t\t\t\t// satisfied by a matching route\n\t\t\t\t\treturn nextCopy.ServeHTTP(rw, req)\n\t\t\t\t}\n\n\t\t\t\t// this matching route satisfies the 
group\n\t\t\t\tgroups[route.Group] = struct{}{}\n\t\t\t}\n\n\t\t\t// make terminal routes terminate\n\t\t\tif route.Terminal {\n\t\t\t\tif _, ok := req.Context().Value(ErrorCtxKey).(error); ok {\n\t\t\t\t\tnextCopy = errorEmptyHandler\n\t\t\t\t} else {\n\t\t\t\t\tnextCopy = emptyHandler\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// compile this route's handler stack\n\t\t\tfor i := len(route.middleware) - 1; i >= 0; i-- {\n\t\t\t\tnextCopy = route.middleware[i](nextCopy)\n\t\t\t}\n\n\t\t\t// Apply metrics instrumentation once for the entire route,\n\t\t\t// rather than wrapping each individual handler. This avoids\n\t\t\t// redundant metrics collection that caused significant CPU\n\t\t\t// overhead (see issue #4644).\n\t\t\tif route.metrics != nil {\n\t\t\t\tnextCopy = newMetricsInstrumentedRoute(\n\t\t\t\t\troute.metricsCtx, route.handlerName, nextCopy, route.metrics,\n\t\t\t\t)\n\t\t\t}\n\n\t\t\treturn nextCopy.ServeHTTP(rw, req)\n\t\t})\n\t}\n}\n\n// wrapMiddleware wraps mh such that it can be correctly\n// appended to a list of middleware in preparation for\n// compiling into a handler chain.\nfunc wrapMiddleware(ctx caddy.Context, mh MiddlewareHandler) Middleware {\n\treturn func(next Handler) Handler {\n\t\treturn HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\t\t// EXPERIMENTAL: Trace each module that gets invoked\n\t\t\tif server, ok := r.Context().Value(ServerCtxKey).(*Server); ok && server != nil {\n\t\t\t\tserver.logTrace(mh)\n\t\t\t}\n\t\t\treturn mh.ServeHTTP(w, r, next)\n\t\t})\n\t}\n}\n\n// MatcherSet is a set of matchers which\n// must all match in order for the request\n// to be matched successfully.\ntype MatcherSet []any\n\n// Match returns true if the request matches all\n// matchers in mset or if there are no matchers.\nfunc (mset MatcherSet) Match(r *http.Request) bool {\n\tfor _, m := range mset {\n\t\tif me, ok := m.(RequestMatcherWithError); ok {\n\t\t\tmatch, _ := me.MatchWithError(r)\n\t\t\tif !match {\n\t\t\t\treturn 
false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif me, ok := m.(RequestMatcher); ok {\n\t\t\tif !me.Match(r) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\treturn false\n\t}\n\treturn true\n}\n\n// MatchWithError returns true if r matches m.\nfunc (mset MatcherSet) MatchWithError(r *http.Request) (bool, error) {\n\tfor _, m := range mset {\n\t\tif me, ok := m.(RequestMatcherWithError); ok {\n\t\t\tmatch, err := me.MatchWithError(r)\n\t\t\tif err != nil || !match {\n\t\t\t\treturn match, err\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif me, ok := m.(RequestMatcher); ok {\n\t\t\tif !me.Match(r) {\n\t\t\t\t// for backwards compatibility\n\t\t\t\terr, ok := GetVar(r.Context(), MatcherErrorVarKey).(error)\n\t\t\t\tif ok {\n\t\t\t\t\t// clear out the error from context since we've consumed it\n\t\t\t\t\tSetVar(r.Context(), MatcherErrorVarKey, nil)\n\t\t\t\t\treturn false, err\n\t\t\t\t}\n\t\t\t\treturn false, nil\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\treturn false, fmt.Errorf(\"matcher is not a RequestMatcher or RequestMatcherWithError: %#v\", m)\n\t}\n\treturn true, nil\n}\n\n// RawMatcherSets is a group of matcher sets\n// in their raw, JSON form.\ntype RawMatcherSets []caddy.ModuleMap\n\n// MatcherSets is a group of matcher sets capable\n// of checking whether a request matches any of\n// the sets.\ntype MatcherSets []MatcherSet\n\n// AnyMatch returns true if req matches any of the\n// matcher sets in ms or if there are no matchers,\n// in which case the request always matches.\n//\n// Deprecated: Use AnyMatchWithError instead.\nfunc (ms MatcherSets) AnyMatch(req *http.Request) bool {\n\tfor _, m := range ms {\n\t\tmatch, err := m.MatchWithError(req)\n\t\tif err != nil {\n\t\t\tSetVar(req.Context(), MatcherErrorVarKey, err)\n\t\t\treturn false\n\t\t}\n\t\tif match {\n\t\t\treturn match\n\t\t}\n\t}\n\treturn len(ms) == 0\n}\n\n// AnyMatchWithError returns true if req matches any of the\n// matcher sets in ms or if there are no matchers, in which\n// case the request 
always matches. If any matcher returns\n// an error, we cut short and return the error.\nfunc (ms MatcherSets) AnyMatchWithError(req *http.Request) (bool, error) {\n\tfor _, m := range ms {\n\t\tmatch, err := m.MatchWithError(req)\n\t\tif err != nil || match {\n\t\t\treturn match, err\n\t\t}\n\t}\n\treturn len(ms) == 0, nil\n}\n\n// FromInterface fills ms from an 'any' value obtained from LoadModule.\nfunc (ms *MatcherSets) FromInterface(matcherSets any) error {\n\tfor _, matcherSetIfaces := range matcherSets.([]map[string]any) {\n\t\tvar matcherSet MatcherSet\n\t\tfor _, matcher := range matcherSetIfaces {\n\t\t\tif m, ok := matcher.(RequestMatcherWithError); ok {\n\t\t\t\tmatcherSet = append(matcherSet, m)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif m, ok := matcher.(RequestMatcher); ok {\n\t\t\t\tmatcherSet = append(matcherSet, m)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"decoded module is not a RequestMatcher or RequestMatcherWithError: %#v\", matcher)\n\t\t}\n\t\t*ms = append(*ms, matcherSet)\n\t}\n\treturn nil\n}\n\n// TODO: Is this used?\nfunc (ms MatcherSets) String() string {\n\tvar result strings.Builder\n\tresult.WriteByte('[')\n\tfor _, matcherSet := range ms {\n\t\tfor _, matcher := range matcherSet {\n\t\t\tfmt.Fprintf(&result, \" %#v\", matcher)\n\t\t}\n\t}\n\tresult.WriteByte(']')\n\treturn result.String()\n}\n\nvar routeGroupCtxKey = caddy.CtxKey(\"route_group\")\n"
  },
  {
    "path": "modules/caddyhttp/server.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/netip\"\n\t\"net/url\"\n\t\"runtime\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/quic-go/quic-go\"\n\t\"github.com/quic-go/quic-go/http3\"\n\th3qlog \"github.com/quic-go/quic-go/http3/qlog\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyevents\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\n// Server describes an HTTP server.\ntype Server struct {\n\t// Socket addresses to which to bind listeners. Accepts\n\t// [network addresses](/docs/conventions#network-addresses)\n\t// that may include port ranges. Listener addresses must\n\t// be unique; they cannot be repeated across all defined\n\t// servers.\n\tListen []string `json:\"listen,omitempty\"`\n\n\t// A list of listener wrapper modules, which can modify the behavior\n\t// of the base listener. 
They are applied in the given order.\n\tListenerWrappersRaw []json.RawMessage `json:\"listener_wrappers,omitempty\" caddy:\"namespace=caddy.listeners inline_key=wrapper\"`\n\n\t// A list of packet conn wrapper modules, which can modify the behavior\n\t// of the base packet conn. They are applied in the given order.\n\tPacketConnWrappersRaw []json.RawMessage `json:\"packet_conn_wrappers,omitempty\" caddy:\"namespace=caddy.packetconns inline_key=wrapper\"`\n\n\t// How long to allow a read from a client's upload. Setting this\n\t// to a short, non-zero value can mitigate slowloris attacks, but\n\t// may also affect legitimately slow clients.\n\tReadTimeout caddy.Duration `json:\"read_timeout,omitempty\"`\n\n\t// ReadHeaderTimeout is like ReadTimeout but for request headers.\n\t// Default is 1 minute.\n\tReadHeaderTimeout caddy.Duration `json:\"read_header_timeout,omitempty\"`\n\n\t// WriteTimeout is how long to allow a write to a client. Note\n\t// that setting this to a small value when serving large files\n\t// may negatively affect legitimately slow clients.\n\tWriteTimeout caddy.Duration `json:\"write_timeout,omitempty\"`\n\n\t// IdleTimeout is the maximum time to wait for the next request\n\t// when keep-alives are enabled. 
If zero, a default timeout of\n\t// 5m is applied to help avoid resource exhaustion.\n\tIdleTimeout caddy.Duration `json:\"idle_timeout,omitempty\"`\n\n\t// KeepAliveInterval is the interval at which TCP keepalive packets\n\t// are sent to keep the connection alive at the TCP layer when no other\n\t// data is being transmitted.\n\t// If zero, the default is 15s.\n\t// If negative, keepalive packets are not sent and other keepalive parameters\n\t// are ignored.\n\tKeepAliveInterval caddy.Duration `json:\"keepalive_interval,omitempty\"`\n\n\t// KeepAliveIdle is the time that the connection must be idle before\n\t// the first TCP keep-alive probe is sent when no other data is being\n\t// transmitted.\n\t// If zero, the default is 15s.\n\t// If negative, underlying socket value is unchanged.\n\tKeepAliveIdle caddy.Duration `json:\"keepalive_idle,omitempty\"`\n\n\t// KeepAliveCount is the maximum number of TCP keep-alive probes that\n\t// should be sent before dropping a connection.\n\t// If zero, the default is 9.\n\t// If negative, underlying socket value is unchanged.\n\tKeepAliveCount int `json:\"keepalive_count,omitempty\"`\n\n\t// MaxHeaderBytes is the maximum size to parse from a client's\n\t// HTTP request headers.\n\tMaxHeaderBytes int `json:\"max_header_bytes,omitempty\"`\n\n\t// Enable full-duplex communication for HTTP/1 requests.\n\t// Only has an effect if Caddy was built with Go 1.21 or later.\n\t//\n\t// For HTTP/1 requests, the Go HTTP server by default consumes any\n\t// unread portion of the request body before beginning to write the\n\t// response, preventing handlers from concurrently reading from the\n\t// request and writing the response. 
Enabling this option disables\n\t// this behavior and permits handlers to continue to read from the\n\t// request while concurrently writing the response.\n\t//\n\t// For HTTP/2 requests, the Go HTTP server always permits concurrent\n\t// reads and responses, so this option has no effect.\n\t//\n\t// Test thoroughly with your HTTP clients, as some older clients may\n\t// not support full-duplex HTTP/1 which can cause them to deadlock.\n\t// See https://github.com/golang/go/issues/57786 for more info.\n\t//\n\t// TODO: This is an EXPERIMENTAL feature. Subject to change or removal.\n\tEnableFullDuplex bool `json:\"enable_full_duplex,omitempty\"`\n\n\t// Routes describes how this server will handle requests.\n\t// Routes are executed sequentially. First a route's matchers\n\t// are evaluated, then its grouping. If it matches and has\n\t// not been mutually-excluded by its grouping, then its\n\t// handlers are executed sequentially. The sequence of invoked\n\t// handlers comprises a compiled middleware chain that flows\n\t// from each matching route and its handlers to the next.\n\t//\n\t// By default, all unrouted requests receive a 200 OK response\n\t// to indicate the server is working.\n\tRoutes RouteList `json:\"routes,omitempty\"`\n\n\t// Errors is how this server will handle errors returned from any\n\t// of the handlers in the primary routes. If the primary handler\n\t// chain returns an error, the error along with its recommended\n\t// status code are bubbled back up to the HTTP server which\n\t// executes a separate error route, specified using this property.\n\t// The error routes work exactly like the normal routes.\n\tErrors *HTTPErrorConfig `json:\"errors,omitempty\"`\n\n\t// NamedRoutes describes a mapping of reusable routes that can be\n\t// invoked by their name. 
This can be used to optimize memory usage\n\t// when the same route is needed for many subroutes, by having\n\t// the handlers and matchers be provisioned only once, but used from\n\t// many places. These routes are not executed unless they are invoked\n\t// from another route.\n\t//\n\t// EXPERIMENTAL: Subject to change or removal.\n\tNamedRoutes map[string]*Route `json:\"named_routes,omitempty\"`\n\n\t// How to handle TLS connections. At least one policy is\n\t// required to enable HTTPS on this server if automatic\n\t// HTTPS is disabled or does not apply.\n\tTLSConnPolicies caddytls.ConnectionPolicies `json:\"tls_connection_policies,omitempty\"`\n\n\t// AutoHTTPS configures or disables automatic HTTPS within this server.\n\t// HTTPS is enabled automatically and by default when qualifying names\n\t// are present in a Host matcher and/or when the server is listening\n\t// only on the HTTPS port.\n\tAutoHTTPS *AutoHTTPSConfig `json:\"automatic_https,omitempty\"`\n\n\t// If true, will require that a request's Host header match\n\t// the value of the ServerName sent by the client's TLS\n\t// ClientHello; often a necessary safeguard when using TLS\n\t// client authentication.\n\tStrictSNIHost *bool `json:\"strict_sni_host,omitempty\"`\n\n\t// A module which provides a source of IP ranges, from which\n\t// requests should be trusted. By default, no proxies are\n\t// trusted.\n\t//\n\t// On its own, this configuration will not do anything,\n\t// but it can be used as a default set of ranges for\n\t// handlers or matchers in routes to pick up, instead\n\t// of needing to configure each of them. See the\n\t// `reverse_proxy` handler for example, which uses this\n\t// to trust sensitive incoming `X-Forwarded-*` headers.\n\tTrustedProxiesRaw json.RawMessage `json:\"trusted_proxies,omitempty\" caddy:\"namespace=http.ip_sources inline_key=source\"`\n\n\t// The headers from which the client IP address could be\n\t// read. 
These will be considered in order, with the\n\t// first good value being used as the client IP.\n\t// By default, only `X-Forwarded-For` is considered.\n\t//\n\t// This depends on `trusted_proxies` being configured and\n\t// the request being validated as coming from a trusted\n\t// proxy, otherwise the client IP will be set to the direct\n\t// remote IP address.\n\tClientIPHeaders []string `json:\"client_ip_headers,omitempty\"`\n\n\t// If greater than zero, enables strict ClientIPHeaders\n\t// (default X-Forwarded-For) parsing. If enabled, the\n\t// ClientIPHeaders will be parsed from right to left, and\n\t// the first value that is both valid and doesn't match the\n\t// trusted proxy list will be used as client IP. If zero,\n\t// the ClientIPHeaders will be parsed from left to right,\n\t// and the first value that is a valid IP address will be\n\t// used as client IP.\n\t//\n\t// This depends on `trusted_proxies` being configured.\n\t// This option is disabled by default.\n\tTrustedProxiesStrict int `json:\"trusted_proxies_strict,omitempty\"`\n\n\t// If true, enables trusting socket connections\n\t// (e.g. Unix domain sockets) as coming from a trusted\n\t// proxy.\n\t//\n\t// This option is disabled by default.\n\tTrustedProxiesUnix bool `json:\"trusted_proxies_unix,omitempty\"`\n\n\t// Enables access logging and configures how access logs are handled\n\t// in this server. To minimally enable access logs, simply set this\n\t// to a non-null, empty struct.\n\tLogs *ServerLogConfig `json:\"logs,omitempty\"`\n\n\t// Protocols specifies which HTTP protocols to enable.\n\t// Supported values are:\n\t//\n\t// - `h1` (HTTP/1.1)\n\t// - `h2` (HTTP/2)\n\t// - `h2c` (cleartext HTTP/2)\n\t// - `h3` (HTTP/3)\n\t//\n\t// If enabling `h2` or `h2c`, `h1` must also be enabled;\n\t// this is due to current limitations in the Go standard\n\t// library.\n\t//\n\t// HTTP/2 operates only over TLS (HTTPS). 
HTTP/3 opens\n\t// a UDP socket to serve QUIC connections.\n\t//\n\t// H2C operates over plain TCP if the client supports it;\n\t// however, because this is not implemented by the Go\n\t// standard library, other server options are not compatible\n\t// and will not be applied to H2C requests. Do not enable this\n\t// only to achieve maximum client compatibility. In practice,\n\t// very few clients implement H2C, and even fewer require it.\n\t// Enabling H2C can be useful for serving/proxying gRPC\n\t// if encryption is not possible or desired.\n\t//\n\t// We recommend for most users to simply let Caddy use the\n\t// default settings.\n\t//\n\t// Default: `[h1 h2 h3]`\n\tProtocols []string `json:\"protocols,omitempty\"`\n\n\t// ListenProtocols overrides Protocols for each parallel address in Listen.\n\t// A nil value or element indicates that Protocols will be used instead.\n\tListenProtocols [][]string `json:\"listen_protocols,omitempty\"`\n\n\t// If set, overrides whether QUIC listeners allow 0-RTT (early data).\n\t// If nil, the default behavior is used (currently allowed).\n\t//\n\t// One reason to disable 0-RTT is if a remote IP matcher is used,\n\t// which introduces a dependency on the remote address being verified\n\t// if routing happens before the TLS handshake completes. 
An HTTP 425\n\t// response is written in that case, but some clients misbehave and\n\t// don't perform a retry, so disabling 0-RTT can smooth it out.\n\tAllow0RTT *bool `json:\"allow_0rtt,omitempty\"`\n\n\t// If set, metrics observations will be enabled.\n\t// This setting is EXPERIMENTAL and subject to change.\n\t// DEPRECATED: Use the app-level `metrics` field.\n\tMetrics *Metrics `json:\"metrics,omitempty\"`\n\n\tname string\n\n\tprimaryHandlerChain Handler\n\terrorHandlerChain   Handler\n\tlistenerWrappers    []caddy.ListenerWrapper\n\tpacketConnWrappers  []caddy.PacketConnWrapper\n\tlisteners           []net.Listener\n\tquicListeners       []http3.QUICListener // http3 now leaves the quic.Listener management to us\n\n\ttlsApp       *caddytls.TLS\n\tevents       *caddyevents.App\n\tlogger       *zap.Logger\n\taccessLogger *zap.Logger\n\terrorLogger  *zap.Logger\n\ttraceLogger  *zap.Logger\n\tctx          caddy.Context\n\n\tserver    *http.Server\n\th3server  *http3.Server\n\taddresses []caddy.NetworkAddress\n\n\ttrustedProxies IPRangeSource\n\n\tshutdownAt   time.Time\n\tshutdownAtMu *sync.RWMutex\n\n\t// registered callback functions\n\tconnStateFuncs   []func(net.Conn, http.ConnState)\n\tconnContextFuncs []func(ctx context.Context, c net.Conn) context.Context\n\tonShutdownFuncs  []func()\n\tonStopFuncs      []func(context.Context) error // TODO: Experimental (Nov. 
2023)\n}\n\nvar (\n\tServerHeader = \"Caddy\"\n\tserverHeader = []string{ServerHeader}\n)\n\n// ServeHTTP is the entry point for all HTTP requests.\nfunc (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {\n\tstart := time.Now()\n\n\t// If there are listener wrappers that process tls connections but don't return a *tls.Conn, this field will be nil.\n\tif r.TLS == nil {\n\t\tif tlsConnStateFunc, ok := r.Context().Value(tlsConnectionStateFuncCtxKey).(func() *tls.ConnectionState); ok {\n\t\t\tr.TLS = tlsConnStateFunc()\n\t\t}\n\t}\n\n\t// enable full-duplex for HTTP/1, ensuring the entire\n\t// request body gets consumed before writing the response\n\tif s.EnableFullDuplex && r.ProtoMajor == 1 {\n\t\tif err := http.NewResponseController(w).EnableFullDuplex(); err != nil { //nolint:bodyclose\n\t\t\tif c := s.logger.Check(zapcore.WarnLevel, \"failed to enable full duplex\"); c != nil {\n\t\t\t\tc.Write(zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\n\t// set the Server header\n\th := w.Header()\n\th[\"Server\"] = serverHeader\n\n\t// advertise HTTP/3, if enabled\n\tif s.h3server != nil && r.ProtoMajor < 3 {\n\t\tif err := s.h3server.SetQUICHeaders(h); err != nil {\n\t\t\tif c := s.logger.Check(zapcore.ErrorLevel, \"setting HTTP/3 Alt-Svc header\"); c != nil {\n\t\t\t\tc.Write(zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\n\t// prepare internals of the request for the handler pipeline\n\trepl := caddy.NewReplacer()\n\tr = PrepareRequest(r, repl, w, s)\n\n\t// clone the request for logging purposes before it enters any handler chain;\n\t// this is necessary to capture the original request in case it gets modified\n\t// during handling (cloning the request and using .WithLazy is considerably\n\t// faster than using .With, which will JSON-encode the request immediately)\n\tshouldLogCredentials := s.Logs != nil && s.Logs.ShouldLogCredentials\n\tloggableReq := zap.Object(\"request\", LoggableHTTPRequest{\n\t\tRequest:              r.Clone(r.Context()),\n\t\tShouldLogCredentials: 
shouldLogCredentials,\n\t})\n\terrLog := s.errorLogger.WithLazy(loggableReq)\n\n\tvar duration time.Duration\n\n\tif s.shouldLogRequest(r) {\n\t\twrec := NewResponseRecorder(w, nil, nil)\n\t\tw = wrec\n\n\t\t// wrap the request body in a LengthReader\n\t\t// so we can track the number of bytes read from it\n\t\tvar bodyReader *lengthReader\n\t\tif r.Body != nil {\n\t\t\tbodyReader = &lengthReader{Source: r.Body}\n\t\t\tr.Body = bodyReader\n\n\t\t\t// should always be true, private interface can only be referenced in the same package\n\t\t\tif setReadSizer, ok := wrec.(interface{ setReadSize(*int) }); ok {\n\t\t\t\tsetReadSizer.setReadSize(&bodyReader.Length)\n\t\t\t}\n\t\t}\n\n\t\t// capture the original version of the request\n\t\taccLog := s.accessLogger.WithLazy(loggableReq)\n\n\t\tdefer s.logRequest(accLog, r, wrec, &duration, repl, bodyReader, shouldLogCredentials)\n\t}\n\n\t// guarantee ACME HTTP challenges; handle them separately from any user-defined handlers\n\tif s.tlsApp.HandleHTTPChallenge(w, r) {\n\t\tduration = time.Since(start)\n\t\treturn\n\t}\n\n\terr := s.serveHTTP(w, r)\n\tduration = time.Since(start)\n\n\tif err == nil {\n\t\treturn\n\t}\n\n\t// restore original request before invoking error handler chain (issue #3717)\n\t// NOTE: this does not restore original headers if modified (for efficiency)\n\torigReq, ok := r.Context().Value(OriginalRequestCtxKey).(http.Request)\n\tif ok {\n\t\tr.Method = origReq.Method\n\t\tr.RemoteAddr = origReq.RemoteAddr\n\t\tr.RequestURI = origReq.RequestURI\n\t\tcloneURL(origReq.URL, r.URL)\n\t}\n\n\t// prepare the error log\n\terrLog = errLog.With(zap.Duration(\"duration\", duration))\n\terrLoggers := []*zap.Logger{errLog}\n\tif s.Logs != nil {\n\t\terrLoggers = s.Logs.wrapLogger(errLog, r)\n\t}\n\n\t// get the values that will be used to log the error\n\terrStatus, errMsg, errFields := errLogValues(err)\n\n\t// add HTTP error information to request context\n\tr = s.Errors.WithError(r, err)\n\n\tvar fields 
[]zapcore.Field\n\tif s.Errors != nil && len(s.Errors.Routes) > 0 {\n\t\t// execute user-defined error handling route\n\t\tif err2 := s.errorHandlerChain.ServeHTTP(w, r); err2 == nil {\n\t\t\t// user's error route handled the error response successfully, so now just log the error\n\t\t\tfor _, logger := range errLoggers {\n\t\t\t\tif c := logger.Check(zapcore.DebugLevel, errMsg); c != nil {\n\t\t\t\t\tif fields == nil {\n\t\t\t\t\t\tfields = errFields()\n\t\t\t\t\t}\n\t\t\t\t\tc.Write(fields...)\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t// well... this is awkward\n\t\t\tfor _, logger := range errLoggers {\n\t\t\t\tif c := logger.Check(zapcore.ErrorLevel, \"error handling handler error\"); c != nil {\n\t\t\t\t\tif fields == nil {\n\t\t\t\t\t\tfields = errFields()\n\t\t\t\t\t\tfields = append([]zapcore.Field{\n\t\t\t\t\t\t\tzap.String(\"error\", err2.Error()),\n\t\t\t\t\t\t\tzap.Namespace(\"first_error\"),\n\t\t\t\t\t\t\tzap.String(\"msg\", errMsg),\n\t\t\t\t\t\t}, fields...)\n\t\t\t\t\t}\n\t\t\t\t\tc.Write(fields...)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif handlerErr, ok := err.(HandlerError); ok {\n\t\t\t\tw.WriteHeader(handlerErr.StatusCode)\n\t\t\t} else {\n\t\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlogLevel := zapcore.DebugLevel\n\t\tif errStatus >= 500 {\n\t\t\tlogLevel = zapcore.ErrorLevel\n\t\t}\n\n\t\tfor _, logger := range errLoggers {\n\t\t\tif c := logger.Check(logLevel, errMsg); c != nil {\n\t\t\t\tif fields == nil {\n\t\t\t\t\tfields = errFields()\n\t\t\t\t}\n\t\t\t\tc.Write(fields...)\n\t\t\t}\n\t\t}\n\t\tw.WriteHeader(errStatus)\n\t}\n}\n\nfunc (s *Server) serveHTTP(w http.ResponseWriter, r *http.Request) error {\n\t// reject very long methods; probably a mistake or an attack\n\tif len(r.Method) > 32 {\n\t\tif s.shouldLogRequest(r) {\n\t\t\tif c := s.accessLogger.Check(zapcore.DebugLevel, \"rejecting request with long method\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.String(\"method_trunc\", 
r.Method[:32]),\n\t\t\t\t\tzap.String(\"remote_addr\", r.RemoteAddr),\n\t\t\t\t)\n\t\t\t}\n\t\t}\n\t\treturn HandlerError{StatusCode: http.StatusMethodNotAllowed}\n\t}\n\n\t// RFC 9112 section 3.2: \"A server MUST respond with a 400 (Bad Request) status\n\t// code to any HTTP/1.1 request message that lacks a Host header field and to any\n\t// request message that contains more than one Host header field line or a Host\n\t// header field with an invalid field value.\"\n\tif r.ProtoMajor == 1 && r.ProtoMinor == 1 && r.Host == \"\" {\n\t\treturn HandlerError{\n\t\t\tErr:        errors.New(\"rfc9112 forbids empty Host\"),\n\t\t\tStatusCode: http.StatusBadRequest,\n\t\t}\n\t}\n\n\t// execute the primary handler chain\n\treturn s.primaryHandlerChain.ServeHTTP(w, r)\n}\n\n// wrapPrimaryRoute wraps stack (a compiled middleware handler chain)\n// in s.enforcementHandler which performs crucial security checks, etc.\nfunc (s *Server) wrapPrimaryRoute(stack Handler) Handler {\n\treturn HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {\n\t\treturn s.enforcementHandler(w, r, stack)\n\t})\n}\n\n// enforcementHandler is an implicit middleware which performs\n// standard checks before executing the HTTP middleware chain.\nfunc (s *Server) enforcementHandler(w http.ResponseWriter, r *http.Request, next Handler) error {\n\t// enforce strict host matching, which ensures that the SNI\n\t// value (if any), matches the Host header; essential for\n\t// servers that rely on TLS ClientAuth sharing a listener\n\t// with servers that do not; if not enforced, client could\n\t// bypass by sending benign SNI then restricted Host header\n\tif s.StrictSNIHost != nil && *s.StrictSNIHost && r.TLS != nil {\n\t\thostname, _, err := net.SplitHostPort(r.Host)\n\t\tif err != nil {\n\t\t\thostname = r.Host // OK; probably lacked port\n\t\t}\n\t\tif !strings.EqualFold(r.TLS.ServerName, hostname) {\n\t\t\terr := fmt.Errorf(\"strict host matching: TLS ServerName (%s) and HTTP Host (%s) values 
differ\",\n\t\t\t\tr.TLS.ServerName, hostname)\n\t\t\tr.Close = true\n\t\t\treturn Error(http.StatusMisdirectedRequest, err)\n\t\t}\n\t}\n\treturn next.ServeHTTP(w, r)\n}\n\n// listenersUseAnyPortOtherThan returns true if there are any\n// listeners in s that use a port which is not otherPort.\nfunc (s *Server) listenersUseAnyPortOtherThan(otherPort int) bool {\n\tfor _, lnAddr := range s.Listen {\n\t\tladdrs, err := caddy.ParseNetworkAddress(lnAddr)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tif uint(otherPort) > laddrs.EndPort || uint(otherPort) < laddrs.StartPort {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// hasListenerAddress returns true if s has a listener\n// at the given address fullAddr. Currently, fullAddr\n// must represent exactly one socket address (port\n// ranges are not supported)\nfunc (s *Server) hasListenerAddress(fullAddr string) bool {\n\tladdrs, err := caddy.ParseNetworkAddress(fullAddr)\n\tif err != nil {\n\t\treturn false\n\t}\n\tif laddrs.PortRangeSize() != 1 {\n\t\treturn false // TODO: support port ranges\n\t}\n\n\tfor _, lnAddr := range s.Listen {\n\t\tthisAddrs, err := caddy.ParseNetworkAddress(lnAddr)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tif thisAddrs.Network != laddrs.Network {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Apparently, Linux requires all bound ports to be distinct\n\t\t// *regardless of host interface* even if the addresses are\n\t\t// in fact different; binding \"192.168.0.1:9000\" and then\n\t\t// \":9000\" will fail for \":9000\" because \"address is already\n\t\t// in use\" even though it's not, and the same bindings work\n\t\t// fine on macOS. I also found on Linux that listening on\n\t\t// \"[::]:9000\" would fail with a similar error, except with\n\t\t// the address \"0.0.0.0:9000\", as if deliberately ignoring\n\t\t// that I specified the IPv6 interface explicitly. 
This seems\n\t\t// to be a major bug in the Linux network stack and I don't\n\t\t// know why it hasn't been fixed yet, so for now we have to\n\t\t// special-case ourselves around Linux like a doting parent.\n\t\t// The second issue seems very similar to a discussion here:\n\t\t// https://github.com/nodejs/node/issues/9390\n\t\t//\n\t\t// However, binding to *different specific* interfaces\n\t\t// (e.g. 127.0.0.2:80 and 127.0.0.3:80) IS allowed on Linux.\n\t\t// The conflict only happens when mixing specific IPs with\n\t\t// wildcards (0.0.0.0 or ::).\n\n\t\t// Hosts match exactly (e.g. 127.0.0.2 == 127.0.0.2) -> Conflict.\n\t\thostMatch := thisAddrs.Host == laddrs.Host\n\n\t\t// On Linux, specific IP vs Wildcard fails to bind.\n\t\t// So if we are on Linux AND either host is empty (wildcard), we treat\n\t\t// it as a match (conflict). But if both are specific and different\n\t\t// (127.0.0.2 vs 127.0.0.3), this remains false (no conflict).\n\t\tlinuxWildcardConflict := runtime.GOOS == \"linux\" && (thisAddrs.Host == \"\" || laddrs.Host == \"\")\n\n\t\tif (hostMatch || linuxWildcardConflict) &&\n\t\t\t(laddrs.StartPort <= thisAddrs.EndPort) &&\n\t\t\t(laddrs.StartPort >= thisAddrs.StartPort) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc (s *Server) hasTLSClientAuth() bool {\n\treturn slices.ContainsFunc(s.TLSConnPolicies, func(cp *caddytls.ConnectionPolicy) bool {\n\t\treturn cp.ClientAuthentication != nil && cp.ClientAuthentication.Active()\n\t})\n}\n\n// findLastRouteWithHostMatcher returns the index of the last route\n// in the server which has a host matcher. 
Used during Automatic HTTPS\n// to determine where to insert the HTTP->HTTPS redirect route, such\n// that it is after any other host matcher but before any \"catch-all\"\n// route without a host matcher.\nfunc (s *Server) findLastRouteWithHostMatcher() int {\n\tfoundHostMatcher := false\n\tlastIndex := len(s.Routes)\n\n\tfor i, route := range s.Routes {\n\t\t// since we want to break out of an inner loop, use a closure\n\t\t// to allow us to use 'return' when we found a host matcher\n\t\tfound := (func() bool {\n\t\t\tfor _, sets := range route.MatcherSets {\n\t\t\t\tfor _, matcher := range sets {\n\t\t\t\t\tswitch matcher.(type) {\n\t\t\t\t\tcase *MatchHost:\n\t\t\t\t\t\tfoundHostMatcher = true\n\t\t\t\t\t\treturn true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn false\n\t\t})()\n\n\t\t// if we found the host matcher, change the lastIndex to\n\t\t// just after the current route\n\t\tif found {\n\t\t\tlastIndex = i + 1\n\t\t}\n\t}\n\n\t// If we didn't actually find a host matcher, return 0\n\t// because that means every defined route was a \"catch-all\".\n\t// See https://caddy.community/t/how-to-set-priority-in-caddyfile/13002/8\n\tif !foundHostMatcher {\n\t\treturn 0\n\t}\n\n\treturn lastIndex\n}\n\n// serveHTTP3 creates a QUIC listener, configures an HTTP/3 server if\n// not already done, and then uses that server to serve HTTP/3 over\n// the listener, with Server s as the handler.\nfunc (s *Server) serveHTTP3(addr caddy.NetworkAddress, tlsCfg *tls.Config) error {\n\th3net, err := getHTTP3Network(addr.Network)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"starting HTTP/3 QUIC listener: %v\", err)\n\t}\n\taddr.Network = h3net\n\th3ln, err := addr.ListenQUIC(s.ctx, 0, net.ListenConfig{}, tlsCfg, s.packetConnWrappers, s.Allow0RTT)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"starting HTTP/3 QUIC listener: %v\", err)\n\t}\n\n\t// create HTTP/3 server if not done already\n\tif s.h3server == nil {\n\t\ts.h3server = &http3.Server{\n\t\t\tHandler:        
s,\n\t\t\tTLSConfig:      tlsCfg,\n\t\t\tMaxHeaderBytes: s.MaxHeaderBytes,\n\t\t\tQUICConfig: &quic.Config{\n\t\t\t\tVersions: []quic.Version{quic.Version1, quic.Version2},\n\t\t\t\tTracer:   h3qlog.DefaultConnectionTracer,\n\t\t\t},\n\t\t\tIdleTimeout: time.Duration(s.IdleTimeout),\n\t\t}\n\t}\n\n\ts.quicListeners = append(s.quicListeners, h3ln)\n\n\t//nolint:errcheck\n\tgo s.h3server.ServeListener(h3ln)\n\n\treturn nil\n}\n\n// configureServer applies/binds the registered callback functions to the server.\nfunc (s *Server) configureServer(server *http.Server) {\n\tfor _, f := range s.connStateFuncs {\n\t\tif server.ConnState != nil {\n\t\t\tbaseConnStateFunc := server.ConnState\n\t\t\tserver.ConnState = func(conn net.Conn, state http.ConnState) {\n\t\t\t\tbaseConnStateFunc(conn, state)\n\t\t\t\tf(conn, state)\n\t\t\t}\n\t\t} else {\n\t\t\tserver.ConnState = f\n\t\t}\n\t}\n\n\tfor _, f := range s.connContextFuncs {\n\t\tif server.ConnContext != nil {\n\t\t\tbaseConnContextFunc := server.ConnContext\n\t\t\tserver.ConnContext = func(ctx context.Context, c net.Conn) context.Context {\n\t\t\t\treturn f(baseConnContextFunc(ctx, c), c)\n\t\t\t}\n\t\t} else {\n\t\t\tserver.ConnContext = f\n\t\t}\n\t}\n\n\tfor _, f := range s.onShutdownFuncs {\n\t\tserver.RegisterOnShutdown(f)\n\t}\n}\n\n// RegisterConnState registers f to be invoked on s.ConnState.\nfunc (s *Server) RegisterConnState(f func(net.Conn, http.ConnState)) {\n\ts.connStateFuncs = append(s.connStateFuncs, f)\n}\n\n// RegisterConnContext registers f to be invoked as part of s.ConnContext.\nfunc (s *Server) RegisterConnContext(f func(ctx context.Context, c net.Conn) context.Context) {\n\ts.connContextFuncs = append(s.connContextFuncs, f)\n}\n\n// RegisterOnShutdown registers f to be invoked when the server begins to shut down.\nfunc (s *Server) RegisterOnShutdown(f func()) {\n\ts.onShutdownFuncs = append(s.onShutdownFuncs, f)\n}\n\n// RegisterOnStop registers f to be invoked after the server has shut down 
completely.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (s *Server) RegisterOnStop(f func(context.Context) error) {\n\ts.onStopFuncs = append(s.onStopFuncs, f)\n}\n\n// HTTPErrorConfig determines how to handle errors\n// from the HTTP handlers.\ntype HTTPErrorConfig struct {\n\t// The routes to evaluate after the primary handler\n\t// chain returns an error. In an error route, extra\n\t// placeholders are available:\n\t//\n\t// Placeholder | Description\n\t// ------------|---------------\n\t// `{http.error.status_code}` | The recommended HTTP status code\n\t// `{http.error.status_text}` | The status text associated with the recommended status code\n\t// `{http.error.message}`     | The error message\n\t// `{http.error.trace}`       | The origin of the error\n\t// `{http.error.id}`          | An identifier for this occurrence of the error\n\tRoutes RouteList `json:\"routes,omitempty\"`\n}\n\n// WithError makes a shallow copy of r to add the error to its\n// context, and sets placeholders on the request's replacer\n// related to err. It returns the modified request which has\n// the error information in its context and replacer. 
It\n// overwrites any existing error values that are stored.\nfunc (*HTTPErrorConfig) WithError(r *http.Request, err error) *http.Request {\n\t// add the raw error value to the request context\n\t// so it can be accessed by error handlers\n\tc := context.WithValue(r.Context(), ErrorCtxKey, err)\n\tr = r.WithContext(c)\n\n\t// add error values to the replacer\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\trepl.Set(\"http.error\", err)\n\tif handlerErr, ok := err.(HandlerError); ok {\n\t\trepl.Set(\"http.error.status_code\", handlerErr.StatusCode)\n\t\trepl.Set(\"http.error.status_text\", http.StatusText(handlerErr.StatusCode))\n\t\trepl.Set(\"http.error.id\", handlerErr.ID)\n\t\trepl.Set(\"http.error.trace\", handlerErr.Trace)\n\t\tif handlerErr.Err != nil {\n\t\t\trepl.Set(\"http.error.message\", handlerErr.Err.Error())\n\t\t} else {\n\t\t\trepl.Set(\"http.error.message\", http.StatusText(handlerErr.StatusCode))\n\t\t}\n\t}\n\n\treturn r\n}\n\n// shouldLogRequest returns true if this request should be logged.\nfunc (s *Server) shouldLogRequest(r *http.Request) bool {\n\tif s.accessLogger == nil || s.Logs == nil {\n\t\t// logging is disabled\n\t\treturn false\n\t}\n\n\t// strip off the port if any, logger names are host only\n\thostWithoutPort, _, err := net.SplitHostPort(r.Host)\n\tif err != nil {\n\t\thostWithoutPort = r.Host\n\t}\n\n\tfor loggerName := range s.Logs.LoggerNames {\n\t\tif certmagic.MatchWildcard(hostWithoutPort, loggerName) {\n\t\t\t// this host is mapped to a particular logger name\n\t\t\treturn true\n\t\t}\n\t}\n\tfor _, dh := range s.Logs.SkipHosts {\n\t\t// logging for this particular host is disabled\n\t\tif certmagic.MatchWildcard(hostWithoutPort, dh) {\n\t\t\treturn false\n\t\t}\n\t}\n\t// if configured, this host is not mapped and thus must not be logged\n\treturn !s.Logs.SkipUnmappedHosts\n}\n\n// logTrace will log that this middleware handler is being invoked.\n// It emits at DEBUG level.\nfunc (s *Server) 
logTrace(mh MiddlewareHandler) {\n\tif s.Logs == nil || !s.Logs.Trace {\n\t\treturn\n\t}\n\tif c := s.traceLogger.Check(zapcore.DebugLevel, caddy.GetModuleName(mh)); c != nil {\n\t\tc.Write(zap.Any(\"module\", mh))\n\t}\n}\n\n// logRequest logs the request to access logs, unless skipped.\nfunc (s *Server) logRequest(\n\taccLog *zap.Logger, r *http.Request, wrec ResponseRecorder, duration *time.Duration,\n\trepl *caddy.Replacer, bodyReader *lengthReader, shouldLogCredentials bool,\n) {\n\tctx := r.Context()\n\n\t// this request may be flagged as omitted from the logs\n\tif skip, ok := GetVar(ctx, LogSkipVar).(bool); ok && skip {\n\t\treturn\n\t}\n\n\tstatus := wrec.Status()\n\tsize := wrec.Size()\n\n\trepl.Set(\"http.response.status\", status) // will be 0 if no response is written by us (Go will write 200 to client)\n\trepl.Set(\"http.response.size\", size)\n\trepl.Set(\"http.response.duration\", duration)\n\trepl.Set(\"http.response.duration_ms\", duration.Seconds()*1e3) // multiply seconds to preserve decimal (see #4666)\n\n\tloggers := []*zap.Logger{accLog}\n\tif s.Logs != nil {\n\t\tloggers = s.Logs.wrapLogger(accLog, r)\n\t}\n\n\tmessage := \"handled request\"\n\tif nop, ok := GetVar(ctx, \"unhandled\").(bool); ok && nop {\n\t\tmessage = \"NOP\"\n\t}\n\n\tlogLevel := zapcore.InfoLevel\n\tif status >= 500 {\n\t\tlogLevel = zapcore.ErrorLevel\n\t}\n\n\tvar fields []zapcore.Field\n\tfor _, logger := range loggers {\n\t\tc := logger.Check(logLevel, message)\n\t\tif c == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tif fields == nil {\n\t\t\tuserID, _ := repl.GetString(\"http.auth.user.id\")\n\n\t\t\treqBodyLength := 0\n\t\t\tif bodyReader != nil {\n\t\t\t\treqBodyLength = bodyReader.Length\n\t\t\t}\n\n\t\t\textra := ctx.Value(ExtraLogFieldsCtxKey).(*ExtraLogFields)\n\n\t\t\tfieldCount := 6\n\t\t\tfields = make([]zapcore.Field, 0, fieldCount+len(extra.fields))\n\t\t\tfields = append(fields,\n\t\t\t\tzap.Int(\"bytes_read\", reqBodyLength),\n\t\t\t\tzap.String(\"user_id\", 
userID),\n\t\t\t\tzap.Duration(\"duration\", *duration),\n\t\t\t\tzap.Int(\"size\", size),\n\t\t\t\tzap.Int(\"status\", status),\n\t\t\t\tzap.Object(\"resp_headers\", LoggableHTTPHeader{\n\t\t\t\t\tHeader:               wrec.Header(),\n\t\t\t\t\tShouldLogCredentials: shouldLogCredentials,\n\t\t\t\t}),\n\t\t\t)\n\t\t\tfields = append(fields, extra.fields...)\n\t\t}\n\n\t\tc.Write(fields...)\n\t}\n}\n\n// protocol returns true if the protocol proto is configured/enabled.\nfunc (s *Server) protocol(proto string) bool {\n\tif s.ListenProtocols == nil {\n\t\tif slices.Contains(s.Protocols, proto) {\n\t\t\treturn true\n\t\t}\n\t} else {\n\t\tfor _, lnProtocols := range s.ListenProtocols {\n\t\t\tfor _, lnProtocol := range lnProtocols {\n\t\t\t\tif lnProtocol == \"\" && slices.Contains(s.Protocols, proto) || lnProtocol == proto {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\n// Listeners returns the server's listeners. These are active listeners,\n// so calling Accept() or Close() on them will probably break things.\n// They are made available here for read-only purposes (e.g. Addr())\n// and for type-asserting for purposes where you know what you're doing.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (s *Server) Listeners() []net.Listener { return s.listeners }\n\n// Name returns the server's name.\nfunc (s *Server) Name() string { return s.name }\n\n// PrepareRequest fills the request r for use in a Caddy HTTP handler chain. 
w and s can\n// be nil, but the handlers will lose response placeholders and access to the server.\nfunc PrepareRequest(r *http.Request, repl *caddy.Replacer, w http.ResponseWriter, s *Server) *http.Request {\n\t// set up the context for the request\n\tctx := context.WithValue(r.Context(), caddy.ReplacerCtxKey, repl)\n\tctx = context.WithValue(ctx, ServerCtxKey, s)\n\n\ttrusted, clientIP := determineTrustedProxy(r, s)\n\tctx = context.WithValue(ctx, VarsCtxKey, map[string]any{\n\t\tTrustedProxyVarKey: trusted,\n\t\tClientIPVarKey:     clientIP,\n\t})\n\n\tctx = context.WithValue(ctx, routeGroupCtxKey, make(map[string]struct{}))\n\n\tvar url2 url.URL // avoid letting this escape to the heap\n\tctx = context.WithValue(ctx, OriginalRequestCtxKey, originalRequest(r, &url2))\n\n\tctx = context.WithValue(ctx, ExtraLogFieldsCtxKey, new(ExtraLogFields))\n\tr = r.WithContext(ctx)\n\n\t// once the pointer to the request won't change\n\t// anymore, finish setting up the replacer\n\taddHTTPVarsToReplacer(repl, r, w)\n\n\treturn r\n}\n\n// originalRequest returns a partial, shallow copy of\n// req, including: req.Method, deep copy of req.URL\n// (into the urlCopy parameter, which should be on the\n// stack), req.RequestURI, and req.RemoteAddr. Notably,\n// headers are not copied. This function is designed to\n// be very fast and efficient, and useful primarily for\n// read-only/logging purposes.\nfunc originalRequest(req *http.Request, urlCopy *url.URL) http.Request {\n\tcloneURL(req.URL, urlCopy)\n\treturn http.Request{\n\t\tMethod:     req.Method,\n\t\tRemoteAddr: req.RemoteAddr,\n\t\tRequestURI: req.RequestURI,\n\t\tURL:        urlCopy,\n\t}\n}\n\n// determineTrustedProxy parses the remote IP address of\n// the request, and determines (if the server configured it)\n// if the client is a trusted proxy. 
If trusted, also returns\n// the real client IP if possible.\nfunc determineTrustedProxy(r *http.Request, s *Server) (bool, string) {\n\t// If there's no server, then we can't check anything\n\tif s == nil {\n\t\treturn false, \"\"\n\t}\n\n\tif s.TrustedProxiesUnix && r.RemoteAddr == \"@\" {\n\t\tif s.TrustedProxiesStrict > 0 {\n\t\t\tipRanges := []netip.Prefix{}\n\t\t\tif s.trustedProxies != nil {\n\t\t\t\tipRanges = s.trustedProxies.GetIPRanges(r)\n\t\t\t}\n\t\t\treturn true, strictUntrustedClientIp(r, s.ClientIPHeaders, ipRanges, \"@\")\n\t\t} else {\n\t\t\treturn true, trustedRealClientIP(r, s.ClientIPHeaders, \"@\")\n\t\t}\n\t}\n\t// Parse the remote IP, ignore the error as non-fatal,\n\t// but the remote IP is required to continue, so we\n\t// just return early. This should probably never happen\n\t// though, unless some other module manipulated the request's\n\t// remote address and used an invalid value.\n\tclientIP, _, err := net.SplitHostPort(r.RemoteAddr)\n\tif err != nil {\n\t\treturn false, \"\"\n\t}\n\n\t// Client IP may contain a zone if IPv6, so we need\n\t// to pull that out before parsing the IP\n\tclientIP, _, _ = strings.Cut(clientIP, \"%\")\n\tipAddr, err := netip.ParseAddr(clientIP)\n\tif err != nil {\n\t\treturn false, \"\"\n\t}\n\n\t// Check if the client is a trusted proxy\n\tif s.trustedProxies == nil {\n\t\treturn false, ipAddr.String()\n\t}\n\n\tif isTrustedClientIP(ipAddr, s.trustedProxies.GetIPRanges(r)) {\n\t\tif s.TrustedProxiesStrict > 0 {\n\t\t\treturn true, strictUntrustedClientIp(r, s.ClientIPHeaders, s.trustedProxies.GetIPRanges(r), ipAddr.String())\n\t\t}\n\t\treturn true, trustedRealClientIP(r, s.ClientIPHeaders, ipAddr.String())\n\t}\n\n\treturn false, ipAddr.String()\n}\n\n// isTrustedClientIP returns true if the given IP address is\n// in the list of trusted IP ranges.\nfunc isTrustedClientIP(ipAddr netip.Addr, trusted []netip.Prefix) bool {\n\treturn slices.ContainsFunc(trusted, func(prefix netip.Prefix) bool {\n\t\treturn 
prefix.Contains(ipAddr)\n\t})\n}\n\n// trustedRealClientIP finds the client IP from the request assuming it is\n// from a trusted client. If there are no client IP headers, then the\n// direct remote address is returned. If there are client IP headers,\n// then the first value from those headers is used.\nfunc trustedRealClientIP(r *http.Request, headers []string, clientIP string) string {\n\t// Read all the values of the configured client IP headers, in order\n\t// nolint:prealloc\n\tvar values []string\n\tfor _, field := range headers {\n\t\tvalues = append(values, r.Header.Values(field)...)\n\t}\n\n\t// If we don't have any values, then give up\n\tif len(values) == 0 {\n\t\treturn clientIP\n\t}\n\n\t// Since there can be many header values, we need to\n\t// join them together before splitting to get the full list\n\tallValues := strings.SplitSeq(strings.Join(values, \",\"), \",\")\n\n\t// Get first valid left-most IP address\n\tfor part := range allValues {\n\t\t// Some proxies may retain the port number, so split if possible\n\t\thost, _, err := net.SplitHostPort(part)\n\t\tif err != nil {\n\t\t\thost = part\n\t\t}\n\n\t\t// Remove any zone identifier from the IP address\n\t\thost, _, _ = strings.Cut(strings.TrimSpace(host), \"%\")\n\n\t\t// Parse the IP address\n\t\tipAddr, err := netip.ParseAddr(host)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\treturn ipAddr.String()\n\t}\n\n\t// We didn't find a valid IP\n\treturn clientIP\n}\n\n// strictUntrustedClientIp iterates through the list of client IP headers,\n// parses them from right-to-left, and returns the first valid IP address\n// that is untrusted. 
If no valid IP address is found, then the direct\n// remote address is returned.\nfunc strictUntrustedClientIp(r *http.Request, headers []string, trusted []netip.Prefix, clientIP string) string {\n\tfor _, headerName := range headers {\n\t\tparts := strings.Split(strings.Join(r.Header.Values(headerName), \",\"), \",\")\n\n\t\tfor i := len(parts) - 1; i >= 0; i-- {\n\t\t\t// Some proxies may retain the port number, so split if possible\n\t\t\thost, _, err := net.SplitHostPort(parts[i])\n\t\t\tif err != nil {\n\t\t\t\thost = parts[i]\n\t\t\t}\n\n\t\t\t// Remove any zone identifier from the IP address\n\t\t\thost, _, _ = strings.Cut(strings.TrimSpace(host), \"%\")\n\n\t\t\t// Parse the IP address\n\t\t\tipAddr, err := netip.ParseAddr(host)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif !isTrustedClientIP(ipAddr, trusted) {\n\t\t\t\treturn ipAddr.String()\n\t\t\t}\n\t\t}\n\t}\n\n\treturn clientIP\n}\n\n// cloneURL makes a copy of r.URL and returns a\n// new value that doesn't reference the original.\nfunc cloneURL(from, to *url.URL) {\n\t*to = *from\n\tif from.User != nil {\n\t\tuserInfo := new(url.Userinfo)\n\t\t*userInfo = *from.User\n\t\tto.User = userInfo\n\t}\n}\n\n// lengthReader is an io.ReadCloser that keeps track of the\n// number of bytes read from the request body.\ntype lengthReader struct {\n\tSource io.ReadCloser\n\tLength int\n}\n\nfunc (r *lengthReader) Read(b []byte) (int, error) {\n\tn, err := r.Source.Read(b)\n\tr.Length += n\n\treturn n, err\n}\n\nfunc (r *lengthReader) Close() error {\n\treturn r.Source.Close()\n}\n\n// Context keys for HTTP request context values.\nconst (\n\t// For referencing the server instance\n\tServerCtxKey caddy.CtxKey = \"server\"\n\n\t// For the request's variable table\n\tVarsCtxKey caddy.CtxKey = \"vars\"\n\n\t// For a partial copy of the unmodified request that\n\t// originally came into the server's entry handler\n\tOriginalRequestCtxKey caddy.CtxKey = \"original_request\"\n\n\t// DEPRECATED: not used 
anymore.\n\t// To refer to the underlying connection, implement a middleware plugin\n\t// that calls RegisterConnContext during provisioning.\n\tConnCtxKey caddy.CtxKey = \"conn\"\n\n\t// used to get the tls connection state in the context, if available\n\ttlsConnectionStateFuncCtxKey caddy.CtxKey = \"tls_connection_state_func\"\n\n\t// For tracking whether the client is a trusted proxy\n\tTrustedProxyVarKey string = \"trusted_proxy\"\n\n\t// For tracking the real client IP (affected by trusted_proxy)\n\tClientIPVarKey string = \"client_ip\"\n)\n\nvar networkTypesHTTP3 = map[string]string{\n\t\"unixgram\": \"unixgram\",\n\t\"udp\":      \"udp\",\n\t\"udp4\":     \"udp4\",\n\t\"udp6\":     \"udp6\",\n\t\"tcp\":      \"udp\",\n\t\"tcp4\":     \"udp4\",\n\t\"tcp6\":     \"udp6\",\n\t\"fdgram\":   \"fdgram\",\n}\n\n// RegisterNetworkHTTP3 registers a mapping from non-HTTP/3 network to HTTP/3\n// network. This should be called during init() and will panic if the network\n// type is standard, reserved, or already registered.\n//\n// EXPERIMENTAL: Subject to change.\nfunc RegisterNetworkHTTP3(originalNetwork, h3Network string) {\n\tif _, ok := networkTypesHTTP3[strings.ToLower(originalNetwork)]; ok {\n\t\tpanic(\"network type \" + originalNetwork + \" is already registered\")\n\t}\n\tnetworkTypesHTTP3[originalNetwork] = h3Network\n}\n\nfunc getHTTP3Network(originalNetwork string) (string, error) {\n\th3Network, ok := networkTypesHTTP3[strings.ToLower(originalNetwork)]\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"network '%s' cannot handle HTTP/3 connections\", originalNetwork)\n\t}\n\treturn h3Network, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/server_test.go",
    "content": "package caddyhttp\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"net/netip\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n)\n\ntype writeFunc func(p []byte) (int, error)\n\ntype nopSyncer writeFunc\n\nfunc (n nopSyncer) Write(p []byte) (int, error) {\n\treturn n(p)\n}\n\nfunc (n nopSyncer) Sync() error {\n\treturn nil\n}\n\n// testLogger returns a logger and a buffer to which the logger writes. The\n// buffer can be read for asserting log output.\nfunc testLogger(wf writeFunc) *zap.Logger {\n\tws := nopSyncer(wf)\n\tencoderCfg := zapcore.EncoderConfig{\n\t\tMessageKey:     \"msg\",\n\t\tLevelKey:       \"level\",\n\t\tNameKey:        \"logger\",\n\t\tEncodeLevel:    zapcore.LowercaseLevelEncoder,\n\t\tEncodeTime:     zapcore.ISO8601TimeEncoder,\n\t\tEncodeDuration: zapcore.StringDurationEncoder,\n\t}\n\tcore := zapcore.NewCore(zapcore.NewJSONEncoder(encoderCfg), ws, zap.DebugLevel)\n\n\treturn zap.New(core)\n}\n\nfunc TestServer_LogRequest(t *testing.T) {\n\ts := &Server{}\n\n\tctx := context.Background()\n\tctx = context.WithValue(ctx, ExtraLogFieldsCtxKey, new(ExtraLogFields))\n\treq := httptest.NewRequest(http.MethodGet, \"/\", nil).WithContext(ctx)\n\trec := httptest.NewRecorder()\n\twrec := NewResponseRecorder(rec, nil, nil)\n\n\tduration := 50 * time.Millisecond\n\trepl := NewTestReplacer(req)\n\tbodyReader := &lengthReader{Source: req.Body}\n\tshouldLogCredentials := false\n\n\tbuf := bytes.Buffer{}\n\taccLog := testLogger(buf.Write)\n\ts.logRequest(accLog, req, wrec, &duration, repl, bodyReader, shouldLogCredentials)\n\n\tassert.JSONEq(t, `{\n\t\t\"msg\":\"handled request\", \"level\":\"info\", \"bytes_read\":0,\n\t\t\"duration\":\"50ms\", \"resp_headers\": {}, \"size\":0,\n\t\t\"status\":0, \"user_id\":\"\"\n\t}`, buf.String())\n}\n\nfunc TestServer_LogRequest_WithTrace(t *testing.T) {\n\ts := 
&Server{}\n\n\textra := new(ExtraLogFields)\n\tctx := context.WithValue(context.Background(), ExtraLogFieldsCtxKey, extra)\n\textra.Add(zap.String(\"traceID\", \"1234567890abcdef\"))\n\textra.Add(zap.String(\"spanID\", \"12345678\"))\n\n\treq := httptest.NewRequest(http.MethodGet, \"/\", nil).WithContext(ctx)\n\trec := httptest.NewRecorder()\n\twrec := NewResponseRecorder(rec, nil, nil)\n\n\tduration := 50 * time.Millisecond\n\trepl := NewTestReplacer(req)\n\tbodyReader := &lengthReader{Source: req.Body}\n\tshouldLogCredentials := false\n\n\tbuf := bytes.Buffer{}\n\taccLog := testLogger(buf.Write)\n\ts.logRequest(accLog, req, wrec, &duration, repl, bodyReader, shouldLogCredentials)\n\n\tassert.JSONEq(t, `{\n\t\t\"msg\":\"handled request\", \"level\":\"info\", \"bytes_read\":0,\n\t\t\"duration\":\"50ms\", \"resp_headers\": {}, \"size\":0,\n\t\t\"status\":0, \"user_id\":\"\",\n\t\t\"traceID\":\"1234567890abcdef\",\n\t\t\"spanID\":\"12345678\"\n\t}`, buf.String())\n}\n\nfunc BenchmarkServer_LogRequest(b *testing.B) {\n\ts := &Server{}\n\n\textra := new(ExtraLogFields)\n\tctx := context.WithValue(context.Background(), ExtraLogFieldsCtxKey, extra)\n\n\treq := httptest.NewRequest(http.MethodGet, \"/\", nil).WithContext(ctx)\n\trec := httptest.NewRecorder()\n\twrec := NewResponseRecorder(rec, nil, nil)\n\n\tduration := 50 * time.Millisecond\n\trepl := NewTestReplacer(req)\n\tbodyReader := &lengthReader{Source: req.Body}\n\n\tbuf := io.Discard\n\taccLog := testLogger(buf.Write)\n\n\tfor b.Loop() {\n\t\ts.logRequest(accLog, req, wrec, &duration, repl, bodyReader, false)\n\t}\n}\n\nfunc BenchmarkServer_LogRequest_NopLogger(b *testing.B) {\n\ts := &Server{}\n\n\textra := new(ExtraLogFields)\n\tctx := context.WithValue(context.Background(), ExtraLogFieldsCtxKey, extra)\n\n\treq := httptest.NewRequest(http.MethodGet, \"/\", nil).WithContext(ctx)\n\trec := httptest.NewRecorder()\n\twrec := NewResponseRecorder(rec, nil, nil)\n\n\tduration := 50 * time.Millisecond\n\trepl := 
NewTestReplacer(req)\n\tbodyReader := &lengthReader{Source: req.Body}\n\n\taccLog := zap.NewNop()\n\n\tfor b.Loop() {\n\t\ts.logRequest(accLog, req, wrec, &duration, repl, bodyReader, false)\n\t}\n}\n\nfunc BenchmarkServer_LogRequest_WithTrace(b *testing.B) {\n\ts := &Server{}\n\n\textra := new(ExtraLogFields)\n\tctx := context.WithValue(context.Background(), ExtraLogFieldsCtxKey, extra)\n\textra.Add(zap.String(\"traceID\", \"1234567890abcdef\"))\n\textra.Add(zap.String(\"spanID\", \"12345678\"))\n\n\treq := httptest.NewRequest(http.MethodGet, \"/\", nil).WithContext(ctx)\n\trec := httptest.NewRecorder()\n\twrec := NewResponseRecorder(rec, nil, nil)\n\n\tduration := 50 * time.Millisecond\n\trepl := NewTestReplacer(req)\n\tbodyReader := &lengthReader{Source: req.Body}\n\n\tbuf := io.Discard\n\taccLog := testLogger(buf.Write)\n\n\tfor b.Loop() {\n\t\ts.logRequest(accLog, req, wrec, &duration, repl, bodyReader, false)\n\t}\n}\n\nfunc TestServer_TrustedRealClientIP_NoTrustedHeaders(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\tip := trustedRealClientIP(req, []string{}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"192.0.2.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_OneTrustedHeaderEmpty(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\tip := trustedRealClientIP(req, []string{\"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"192.0.2.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_OneTrustedHeaderInvalid(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"not, an, ip\")\n\tip := trustedRealClientIP(req, []string{\"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"192.0.2.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_OneTrustedHeaderValid(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = 
\"192.0.2.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"10.0.0.1\")\n\tip := trustedRealClientIP(req, []string{\"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"10.0.0.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_OneTrustedHeaderValidArray(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"1.1.1.1, 2.2.2.2, 3.3.3.3\")\n\tip := trustedRealClientIP(req, []string{\"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"1.1.1.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_IncludesPort(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"1.1.1.1:1234\")\n\tip := trustedRealClientIP(req, []string{\"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"1.1.1.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_SkipsInvalidIps(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"not an ip, bad bad, 10.0.0.1\")\n\tip := trustedRealClientIP(req, []string{\"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip, \"10.0.0.1\")\n}\n\nfunc TestServer_TrustedRealClientIP_MultipleTrustedHeaderValidArray(t *testing.T) {\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\treq.Header.Set(\"Real-Client-IP\", \"1.1.1.1, 2.2.2.2, 3.3.3.3\")\n\treq.Header.Set(\"X-Forwarded-For\", \"3.3.3.3, 4.4.4.4\")\n\tip1 := trustedRealClientIP(req, []string{\"X-Forwarded-For\", \"Real-Client-IP\"}, \"192.0.2.1\")\n\tip2 := trustedRealClientIP(req, []string{\"Real-Client-IP\", \"X-Forwarded-For\"}, \"192.0.2.1\")\n\tip3 := trustedRealClientIP(req, []string{\"Missing-Header-IP\", \"Real-Client-IP\", \"X-Forwarded-For\"}, \"192.0.2.1\")\n\n\tassert.Equal(t, ip1, \"3.3.3.3\")\n\tassert.Equal(t, ip2, \"1.1.1.1\")\n\tassert.Equal(t, ip3, 
\"1.1.1.1\")\n}\n\nfunc TestServer_DetermineTrustedProxy_NoConfig(t *testing.T) {\n\tserver := &Server{}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"192.0.2.1:12345\"\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.False(t, trusted)\n\tassert.Equal(t, clientIP, \"192.0.2.1\")\n}\n\nfunc TestServer_DetermineTrustedProxy_NoConfigIpv6(t *testing.T) {\n\tserver := &Server{}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"[::1]:12345\"\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.False(t, trusted)\n\tassert.Equal(t, clientIP, \"::1\")\n}\n\nfunc TestServer_DetermineTrustedProxy_NoConfigIpv6Zones(t *testing.T) {\n\tserver := &Server{}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"[::1%eth2]:12345\"\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.False(t, trusted)\n\tassert.Equal(t, clientIP, \"::1\")\n}\n\nfunc TestServer_DetermineTrustedProxy_TrustedLoopback(t *testing.T) {\n\tloopbackPrefix, _ := netip.ParsePrefix(\"127.0.0.1/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{loopbackPrefix},\n\t\t},\n\t\tClientIPHeaders: []string{\"X-Forwarded-For\"},\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"127.0.0.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"31.40.0.10\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"31.40.0.10\")\n}\n\nfunc TestServer_DetermineTrustedProxy_UnixSocket(t *testing.T) {\n\tserver := &Server{\n\t\tClientIPHeaders:    []string{\"X-Forwarded-For\"},\n\t\tTrustedProxiesUnix: true,\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"@\"\n\treq.Header.Set(\"X-Forwarded-For\", \"2.2.2.2, 3.3.3.3\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, 
\"2.2.2.2\", clientIP)\n}\n\nfunc TestServer_DetermineTrustedProxy_UnixSocketStrict(t *testing.T) {\n\tserver := &Server{\n\t\tClientIPHeaders:      []string{\"X-Forwarded-For\"},\n\t\tTrustedProxiesUnix:   true,\n\t\tTrustedProxiesStrict: 1,\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"@\"\n\treq.Header.Set(\"X-Forwarded-For\", \"2.2.2.2, 3.3.3.3\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, \"3.3.3.3\", clientIP)\n}\n\nfunc TestServer_DetermineTrustedProxy_UntrustedPrefix(t *testing.T) {\n\tloopbackPrefix, _ := netip.ParsePrefix(\"127.0.0.1/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{loopbackPrefix},\n\t\t},\n\t\tClientIPHeaders: []string{\"X-Forwarded-For\"},\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"31.40.0.10\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.False(t, trusted)\n\tassert.Equal(t, clientIP, \"10.0.0.1\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MultipleTrustedPrefixes(t *testing.T) {\n\tloopbackPrefix, _ := netip.ParsePrefix(\"127.0.0.1/8\")\n\tlocalPrivatePrefix, _ := netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{loopbackPrefix, localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders: []string{\"X-Forwarded-For\"},\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"31.40.0.10\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"31.40.0.10\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MultipleTrustedClientHeaders(t *testing.T) {\n\tloopbackPrefix, _ := netip.ParsePrefix(\"127.0.0.1/8\")\n\tlocalPrivatePrefix, _ := 
netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{loopbackPrefix, localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders: []string{\"CF-Connecting-IP\", \"X-Forwarded-For\"},\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"CF-Connecting-IP\", \"1.1.1.1, 2.2.2.2\")\n\treq.Header.Set(\"X-Forwarded-For\", \"3.3.3.3, 4.4.4.4\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"1.1.1.1\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MatchLeftMostValidIp(t *testing.T) {\n\tlocalPrivatePrefix, _ := netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders: []string{\"X-Forwarded-For\"},\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"30.30.30.30, 45.54.45.54, 10.0.0.1\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"30.30.30.30\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MatchRightMostUntrusted(t *testing.T) {\n\tlocalPrivatePrefix, _ := netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders:      []string{\"X-Forwarded-For\"},\n\t\tTrustedProxiesStrict: 1,\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"X-Forwarded-For\", \"30.30.30.30, 45.54.45.54, 10.0.0.1\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"45.54.45.54\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MatchRightMostUntrustedSkippingEmpty(t *testing.T) 
{\n\tlocalPrivatePrefix, _ := netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders:      []string{\"Missing-Header\", \"CF-Connecting-IP\", \"X-Forwarded-For\"},\n\t\tTrustedProxiesStrict: 1,\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"CF-Connecting-IP\", \"not a real IP\")\n\treq.Header.Set(\"X-Forwarded-For\", \"30.30.30.30, bad, 45.54.45.54, not real\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"45.54.45.54\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MatchRightMostUntrustedSkippingTrusted(t *testing.T) {\n\tlocalPrivatePrefix, _ := netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders:      []string{\"CF-Connecting-IP\", \"X-Forwarded-For\"},\n\t\tTrustedProxiesStrict: 1,\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"CF-Connecting-IP\", \"10.0.0.1, 10.0.0.2, 10.0.0.3\")\n\treq.Header.Set(\"X-Forwarded-For\", \"30.30.30.30, 45.54.45.54, 10.0.0.4\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"45.54.45.54\")\n}\n\nfunc TestServer_DetermineTrustedProxy_MatchRightMostUntrustedFirst(t *testing.T) {\n\tlocalPrivatePrefix, _ := netip.ParsePrefix(\"10.0.0.0/8\")\n\n\tserver := &Server{\n\t\ttrustedProxies: &StaticIPRange{\n\t\t\tranges: []netip.Prefix{localPrivatePrefix},\n\t\t},\n\t\tClientIPHeaders:      []string{\"CF-Connecting-IP\", \"X-Forwarded-For\"},\n\t\tTrustedProxiesStrict: 1,\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"/\", nil)\n\treq.RemoteAddr = \"10.0.0.1:12345\"\n\treq.Header.Set(\"CF-Connecting-IP\", \"10.0.0.1, 
90.100.110.120, 10.0.0.2, 10.0.0.3\")\n\treq.Header.Set(\"X-Forwarded-For\", \"30.30.30.30, 45.54.45.54, 10.0.0.4\")\n\n\ttrusted, clientIP := determineTrustedProxy(req, server)\n\n\tassert.True(t, trusted)\n\tassert.Equal(t, clientIP, \"90.100.110.120\")\n}\n"
  },
  {
    "path": "modules/caddyhttp/standard/imports.go",
    "content": "package standard\n\nimport (\n\t// standard Caddy HTTP app modules\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/caddyauth\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode/brotli\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode/gzip\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/encode/zstd\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/fileserver\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/headers\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/intercept\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/logging\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/map\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/proxyprotocol\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/push\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/requestbody\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy/fastcgi\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy/forwardauth\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/rewrite\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/templates\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/tracing\"\n)\n"
  },
  {
    "path": "modules/caddyhttp/staticerror.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"strconv\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(StaticError{})\n}\n\n// StaticError implements a simple handler that returns an error.\n// This handler returns an error value, but does not write a response.\n// This is useful when you want the server to act as if an error\n// occurred; for example, to invoke your custom error handling logic.\n//\n// Since this handler does not write a response, the error information\n// is for use by the server to know how to handle the error.\ntype StaticError struct {\n\t// The error message. Optional. Default is no error message.\n\tError string `json:\"error,omitempty\"`\n\n\t// The recommended HTTP status code. Can be either an integer or a\n\t// string if placeholders are needed. Optional. Default is 500.\n\tStatusCode WeakString `json:\"status_code,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (StaticError) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.error\",\n\t\tNew: func() caddy.Module { return new(StaticError) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\terror [<matcher>] <status>|<message> [<status>] {\n//\t    message <text>\n//\t}\n//\n// If there is just one argument (other than the matcher), it is considered\n// to be a status code if it's a valid positive integer of 3 digits.\nfunc (e *StaticError) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\targs := d.RemainingArgs()\n\tswitch len(args) {\n\tcase 1:\n\t\tif len(args[0]) == 3 {\n\t\t\tif num, err := strconv.Atoi(args[0]); err == nil && num > 0 {\n\t\t\t\te.StatusCode = WeakString(args[0])\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\te.Error = args[0]\n\tcase 2:\n\t\te.Error = args[0]\n\t\te.StatusCode = WeakString(args[1])\n\tdefault:\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"message\":\n\t\t\tif e.Error != \"\" {\n\t\t\t\treturn d.Err(\"message already specified\")\n\t\t\t}\n\t\t\tif !d.AllArgs(&e.Error) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (e StaticError) ServeHTTP(w http.ResponseWriter, r *http.Request, _ Handler) error {\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tstatusCode := http.StatusInternalServerError\n\tif codeStr := e.StatusCode.String(); codeStr != \"\" {\n\t\tintVal, err := strconv.Atoi(repl.ReplaceAll(codeStr, \"\"))\n\t\tif err != nil {\n\t\t\treturn Error(http.StatusInternalServerError, err)\n\t\t}\n\t\tstatusCode = intVal\n\t}\n\treturn Error(statusCode, fmt.Errorf(\"%s\", repl.ReplaceKnown(e.Error, \"\")))\n}\n\n// Interface guard\nvar (\n\t_ MiddlewareHandler     = (*StaticError)(nil)\n\t_ caddyfile.Unmarshaler = (*StaticError)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/staticresp.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/textproto\"\n\t\"os\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"text/template\"\n\t\"time\"\n\n\t\"github.com/spf13/cobra\"\n\t\"go.uber.org/zap\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(StaticResponse{})\n\tcaddycmd.RegisterCommand(caddycmd.Command{\n\t\tName:  \"respond\",\n\t\tUsage: `[--status <code>] [--body <content>] [--listen <addr>] [--access-log] [--debug] [--header \"Field: value\"] <body|status>`,\n\t\tShort: \"Simple, hard-coded HTTP responses for development and testing\",\n\t\tLong: `\nSpins up a quick-and-clean HTTP server for development and testing purposes.\n\nWith no options specified, this command listens on a random available port\nand answers HTTP requests with an empty 200 response. 
The listen address can\nbe customized with the --listen flag and will always be printed to stdout.\nIf the listen address includes a port range, multiple servers will be started.\n\nIf a final, unnamed argument is given, it will be treated as a status code\n(same as the --status flag) if it is a 3-digit number. Otherwise, it is used\nas the response body (same as the --body flag). The --status and --body flags\nwill always override this argument (for example, to write a body that\nliterally says \"404\" but with a status code of 200, do '--status 200 404').\n\nA body may be given in 3 ways: a flag, a final (and unnamed) argument to\nthe command, or piped to stdin (if flag and argument are unset). Limited\ntemplate evaluation is supported on the body, with the following variables:\n\n\t{{.N}}        The server number (useful if using a port range)\n\t{{.Port}}     The listener port\n\t{{.Address}}  The listener address\n\n(See the docs for the text/template package in the Go standard library for\ninformation about using templates: https://pkg.go.dev/text/template)\n\nAccess/request logging and more verbose debug logging can also be enabled.\n\nResponse headers may be added using the --header flag for each header field.\n`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"listen\", \"l\", \":0\", \"The address to which to bind the listener\")\n\t\t\tcmd.Flags().IntP(\"status\", \"s\", http.StatusOK, \"The response status code\")\n\t\t\tcmd.Flags().StringP(\"body\", \"b\", \"\", \"The body of the HTTP response\")\n\t\t\tcmd.Flags().BoolP(\"access-log\", \"\", false, \"Enable the access log\")\n\t\t\tcmd.Flags().BoolP(\"debug\", \"v\", false, \"Enable more verbose debug-level logging\")\n\t\t\tcmd.Flags().StringArrayP(\"header\", \"H\", []string{}, \"Set a header on the response (format: \\\"Field: value\\\")\")\n\t\t\tcmd.RunE = caddycmd.WrapCommandFuncForCobra(cmdRespond)\n\t\t},\n\t})\n}\n\n// StaticResponse implements a simple responder for 
static responses.\ntype StaticResponse struct {\n\t// The HTTP status code to respond with. Can be an integer or,\n\t// if needing to use a placeholder, a string.\n\t//\n\t// If the status code is 103 (Early Hints), the response headers\n\t// will be written to the client immediately, the body will be\n\t// ignored, and the next handler will be invoked. This behavior\n\t// is EXPERIMENTAL while RFC 8297 is a draft, and may be changed\n\t// or removed.\n\tStatusCode WeakString `json:\"status_code,omitempty\"`\n\n\t// Header fields to set on the response; overwrites any existing\n\t// header fields of the same names after normalization.\n\tHeaders http.Header `json:\"headers,omitempty\"`\n\n\t// The response body. If non-empty, the Content-Type header may\n\t// be added automatically if it is not explicitly configured nor\n\t// already set on the response; the default value is\n\t// \"text/plain; charset=utf-8\" unless the body is a valid JSON object\n\t// or array, in which case the value will be \"application/json\".\n\t// Other than those common special cases the Content-Type header\n\t// should be set explicitly if it is desired because MIME sniffing\n\t// is disabled for safety.\n\tBody string `json:\"body,omitempty\"`\n\n\t// If true, the server will close the client's connection\n\t// after writing the response.\n\tClose bool `json:\"close,omitempty\"`\n\n\t// Immediately and forcefully closes the connection without\n\t// writing a response. Interrupts any other HTTP streams on\n\t// the same connection.\n\tAbort bool `json:\"abort,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (StaticResponse) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.static_response\",\n\t\tNew: func() caddy.Module { return new(StaticResponse) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\trespond [<matcher>] <status>|<body> [<status>] {\n//\t    body <text>\n//\t    close\n//\t}\n//\n// If there is just one argument (other than the matcher), it is considered\n// to be a status code if it's a valid positive integer of 3 digits.\nfunc (s *StaticResponse) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\targs := d.RemainingArgs()\n\tswitch len(args) {\n\tcase 1:\n\t\tif len(args[0]) == 3 {\n\t\t\tif num, err := strconv.Atoi(args[0]); err == nil && num > 0 {\n\t\t\t\ts.StatusCode = WeakString(args[0])\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\ts.Body = args[0]\n\tcase 2:\n\t\ts.Body = args[0]\n\t\ts.StatusCode = WeakString(args[1])\n\tdefault:\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"body\":\n\t\t\tif s.Body != \"\" {\n\t\t\t\treturn d.Err(\"body already specified\")\n\t\t\t}\n\t\t\tif !d.AllArgs(&s.Body) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\tcase \"close\":\n\t\t\tif s.Close {\n\t\t\t\treturn d.Err(\"close already specified\")\n\t\t\t}\n\t\t\ts.Close = true\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (s StaticResponse) ServeHTTP(w http.ResponseWriter, r *http.Request, next Handler) error {\n\t// close the connection immediately\n\tif s.Abort {\n\t\tpanic(http.ErrAbortHandler)\n\t}\n\n\t// close the connection after responding\n\tif s.Close {\n\t\tr.Close = true\n\t\tw.Header().Set(\"Connection\", \"close\")\n\t}\n\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// set all headers\n\tfor field, vals := range s.Headers {\n\t\tfield = textproto.CanonicalMIMEHeaderKey(repl.ReplaceAll(field, \"\"))\n\t\tnewVals := make([]string, len(vals))\n\t\tfor i := range vals {\n\t\t\tnewVals[i] = repl.ReplaceAll(vals[i], \"\")\n\t\t}\n\t\tw.Header()[field] = newVals\n\t}\n\n\t// implicitly set Content-Type header if we can do so safely\n\t// (this allows templates 
handler to eval templates successfully\n\t// or for clients to render JSON properly which is very common)\n\tbody := repl.ReplaceKnown(s.Body, \"\")\n\tif body != \"\" && w.Header().Get(\"Content-Type\") == \"\" {\n\t\tcontent := strings.TrimSpace(body)\n\t\tif len(content) > 2 &&\n\t\t\t(content[0] == '{' && content[len(content)-1] == '}' ||\n\t\t\t\t(content[0] == '[' && content[len(content)-1] == ']')) &&\n\t\t\tjson.Valid([]byte(content)) {\n\t\t\tw.Header().Set(\"Content-Type\", \"application/json\")\n\t\t} else {\n\t\t\tw.Header().Set(\"Content-Type\", \"text/plain; charset=utf-8\")\n\t\t}\n\t}\n\n\t// do not allow Go to sniff the content-type, for safety\n\tif w.Header().Get(\"Content-Type\") == \"\" {\n\t\tw.Header()[\"Content-Type\"] = nil\n\t}\n\n\t// get the status code; if this handler exists in an error route,\n\t// use the recommended status code as the default; otherwise 200\n\tstatusCode := http.StatusOK\n\tif reqErr, ok := r.Context().Value(ErrorCtxKey).(error); ok {\n\t\tif handlerErr, ok := reqErr.(HandlerError); ok {\n\t\t\tif handlerErr.StatusCode > 0 {\n\t\t\t\tstatusCode = handlerErr.StatusCode\n\t\t\t}\n\t\t}\n\t}\n\tif codeStr := s.StatusCode.String(); codeStr != \"\" {\n\t\tintVal, err := strconv.Atoi(repl.ReplaceAll(codeStr, \"\"))\n\t\tif err != nil {\n\t\t\treturn Error(http.StatusInternalServerError, err)\n\t\t}\n\t\tstatusCode = intVal\n\t}\n\n\t// write headers\n\tw.WriteHeader(statusCode)\n\n\t// write response body\n\tif statusCode != http.StatusEarlyHints && body != \"\" {\n\t\tfmt.Fprint(w, body) //nolint:gosec // no XSS unless you sabotage your own config\n\t}\n\n\t// continue handling after Early Hints as they are not the final response\n\tif statusCode == http.StatusEarlyHints {\n\t\treturn next.ServeHTTP(w, r)\n\t}\n\n\treturn nil\n}\n\nfunc buildHTTPServer(\n\ti int,\n\tport uint,\n\taddr string,\n\tstatusCode int,\n\thdr http.Header,\n\tbody string,\n\taccessLog bool,\n) (*Server, error) {\n\t// nolint:prealloc\n\tvar 
handlers []json.RawMessage\n\n\t// response body supports a basic template; evaluate it\n\ttplCtx := struct {\n\t\tN       int    // server number\n\t\tPort    uint   // only the port\n\t\tAddress string // listener address\n\t}{\n\t\tN:       i,\n\t\tPort:    port,\n\t\tAddress: addr,\n\t}\n\ttpl, err := template.New(\"body\").Parse(body)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tbuf := new(bytes.Buffer)\n\terr = tpl.Execute(buf, tplCtx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// create route with handler\n\thandler := StaticResponse{\n\t\tStatusCode: WeakString(fmt.Sprintf(\"%d\", statusCode)),\n\t\tHeaders:    hdr,\n\t\tBody:       buf.String(),\n\t}\n\thandlers = append(handlers, caddyconfig.JSONModuleObject(handler, \"handler\", \"static_response\", nil))\n\troute := Route{HandlersRaw: handlers}\n\n\tserver := &Server{\n\t\tListen:            []string{addr},\n\t\tReadHeaderTimeout: caddy.Duration(10 * time.Second),\n\t\tIdleTimeout:       caddy.Duration(30 * time.Second),\n\t\tMaxHeaderBytes:    1024 * 10,\n\t\tRoutes:            RouteList{route},\n\t\tAutoHTTPS:         &AutoHTTPSConfig{DisableRedir: true},\n\t}\n\tif accessLog {\n\t\tserver.Logs = new(ServerLogConfig)\n\t}\n\n\treturn server, nil\n}\n\nfunc cmdRespond(fl caddycmd.Flags) (int, error) {\n\tcaddy.TrapSignals()\n\n\t// get flag values\n\tlisten := fl.String(\"listen\")\n\tstatusCodeFl := fl.Int(\"status\")\n\tbodyFl := fl.String(\"body\")\n\taccessLog := fl.Bool(\"access-log\")\n\tdebug := fl.Bool(\"debug\")\n\targ := fl.Arg(0)\n\n\tif fl.NArg() > 1 {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"too many unflagged arguments\")\n\t}\n\n\t// prefer status and body from explicit flags\n\tstatusCode, body := statusCodeFl, bodyFl\n\n\t// figure out if status code was explicitly specified; this lets\n\t// us set a non-zero value as the default but is a little hacky\n\tstatusCodeFlagSpecified := slices.Contains(os.Args, \"--status\")\n\n\t// try to determine what kind of 
parameter the unnamed argument is\n\tif arg != \"\" {\n\t\t// specifying body and status flags makes the argument redundant/unused\n\t\tif bodyFl != \"\" && statusCodeFlagSpecified {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"unflagged argument \\\"%s\\\" is overridden by flags\", arg)\n\t\t}\n\n\t\t// if a valid 3-digit number, treat as status code; otherwise body\n\t\tif argInt, err := strconv.Atoi(arg); err == nil && !statusCodeFlagSpecified {\n\t\t\tif argInt >= 100 && argInt <= 999 {\n\t\t\t\tstatusCode = argInt\n\t\t\t}\n\t\t} else if body == \"\" {\n\t\t\tbody = arg\n\t\t}\n\t}\n\n\t// if we still need a body, see if stdin is being piped\n\tif body == \"\" {\n\t\tstdinInfo, err := os.Stdin.Stat()\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\t\tif stdinInfo.Mode()&os.ModeNamedPipe != 0 {\n\t\t\tbodyBytes, err := io.ReadAll(os.Stdin)\n\t\t\tif err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t\t}\n\t\t\tbody = string(bodyBytes)\n\t\t}\n\t}\n\n\t// build headers map\n\theaders, err := fl.GetStringArray(\"header\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"invalid header flag: %v\", err)\n\t}\n\thdr := make(http.Header)\n\tfor i, h := range headers {\n\t\tkey, val, found := strings.Cut(h, \":\")\n\t\tkey, val = strings.TrimSpace(key), strings.TrimSpace(val)\n\t\tif !found || key == \"\" || val == \"\" {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"header %d: invalid format \\\"%s\\\" (expecting \\\"Field: value\\\")\", i, h)\n\t\t}\n\t\thdr.Set(key, val)\n\t}\n\n\t// build each HTTP server\n\thttpApp := App{Servers: make(map[string]*Server)}\n\n\t// expand listen address, if more than one port\n\tlistenAddr, err := caddy.ParseNetworkAddress(listen)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\tif !listenAddr.IsUnixNetwork() && !listenAddr.IsFdNetwork() {\n\t\tlistenAddrs := make([]string, 0, listenAddr.PortRangeSize())\n\t\tfor 
offset := uint(0); offset < listenAddr.PortRangeSize(); offset++ {\n\t\t\tlistenAddrs = append(listenAddrs, listenAddr.JoinHostPort(offset))\n\t\t}\n\n\t\tfor i, addr := range listenAddrs {\n\t\t\tserver, err := buildHTTPServer(i, listenAddr.StartPort+uint(i), addr, statusCode, hdr, body, accessLog)\n\t\t\tif err != nil {\n\t\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t\t}\n\n\t\t\t// save server\n\t\t\thttpApp.Servers[fmt.Sprintf(\"static%d\", i)] = server\n\t\t}\n\t} else {\n\t\tserver, err := buildHTTPServer(0, 0, listen, statusCode, hdr, body, accessLog)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, err\n\t\t}\n\n\t\t// save server\n\t\thttpApp.Servers[fmt.Sprintf(\"static%d\", 0)] = server\n\t}\n\n\t// finish building the config\n\tvar false bool\n\tcfg := &caddy.Config{\n\t\tAdmin: &caddy.AdminConfig{\n\t\t\tDisabled: true,\n\t\t\tConfig: &caddy.ConfigSettings{\n\t\t\t\tPersist: &false,\n\t\t\t},\n\t\t},\n\t\tAppsRaw: caddy.ModuleMap{\n\t\t\t\"http\": caddyconfig.JSON(httpApp, nil),\n\t\t},\n\t}\n\tif debug {\n\t\tcfg.Logging = &caddy.Logging{\n\t\t\tLogs: map[string]*caddy.CustomLog{\n\t\t\t\t\"default\": {BaseLog: caddy.BaseLog{Level: zap.DebugLevel.CapitalString()}},\n\t\t\t},\n\t\t}\n\t}\n\n\t// run it!\n\terr = caddy.Run(cfg)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// to print listener addresses, get the active HTTP app\n\tloadedHTTPApp, err := caddy.ActiveContext().App(\"http\")\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// print each listener address\n\tfor _, srv := range loadedHTTPApp.(*App).Servers {\n\t\tfor _, ln := range srv.listeners {\n\t\t\tfmt.Printf(\"Server address: %s\\n\", ln.Addr())\n\t\t}\n\t}\n\n\tselect {}\n}\n\n// Interface guards\nvar (\n\t_ MiddlewareHandler     = (*StaticResponse)(nil)\n\t_ caddyfile.Unmarshaler = (*StaticResponse)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/staticresp_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestStaticResponseHandler(t *testing.T) {\n\tr := fakeRequest()\n\tw := httptest.NewRecorder()\n\n\ts := StaticResponse{\n\t\tStatusCode: WeakString(strconv.Itoa(http.StatusNotFound)),\n\t\tHeaders: http.Header{\n\t\t\t\"X-Test\": []string{\"Testing\"},\n\t\t},\n\t\tBody:  \"Text\",\n\t\tClose: true,\n\t}\n\n\terr := s.ServeHTTP(w, r, nil)\n\tif err != nil {\n\t\tt.Errorf(\"did not expect an error, but got: %v\", err)\n\t}\n\n\tresp := w.Result()\n\trespBody, _ := io.ReadAll(resp.Body)\n\n\tif resp.StatusCode != http.StatusNotFound {\n\t\tt.Errorf(\"expected status %d but got %d\", http.StatusNotFound, resp.StatusCode)\n\t}\n\tif resp.Header.Get(\"X-Test\") != \"Testing\" {\n\t\tt.Errorf(\"expected X-Test header to be 'Testing' but was '%s'\", resp.Header.Get(\"X-Test\"))\n\t}\n\tif string(respBody) != \"Text\" {\n\t\tt.Errorf(\"expected body to be 'Text' but was '%s'\", respBody)\n\t}\n}\n\nfunc fakeRequest() *http.Request {\n\tr, _ := http.NewRequest(\"GET\", \"/\", nil)\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(r.Context(), caddy.ReplacerCtxKey, repl)\n\tr = r.WithContext(ctx)\n\treturn r\n}\n"
  },
  {
    "path": "modules/caddyhttp/subroute.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Subroute{})\n}\n\n// Subroute implements a handler that compiles and executes routes.\n// This is useful for a batch of routes that all inherit the same\n// matchers, or for multiple routes that should be treated as a\n// single route.\n//\n// You can also use subroutes to handle errors from their handlers.\n// First the primary routes will be executed, and if they return an\n// error, the errors routes will be executed; in that case, an error\n// is only returned to the entry point at the server if there is an\n// additional error returned from the errors routes.\ntype Subroute struct {\n\t// The primary list of routes to compile and execute.\n\tRoutes RouteList `json:\"routes,omitempty\"`\n\n\t// If the primary routes return an error, error handling\n\t// can be promoted to this configuration instead.\n\tErrors *HTTPErrorConfig `json:\"errors,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Subroute) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.subroute\",\n\t\tNew: func() caddy.Module { return new(Subroute) },\n\t}\n}\n\n// Provision sets up subrouting.\nfunc (sr *Subroute) Provision(ctx caddy.Context) error {\n\tif 
sr.Routes != nil {\n\t\terr := sr.Routes.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"setting up subroutes: %v\", err)\n\t\t}\n\t\tif sr.Errors != nil {\n\t\t\terr := sr.Errors.Routes.Provision(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"setting up error subroutes: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (sr *Subroute) ServeHTTP(w http.ResponseWriter, r *http.Request, next Handler) error {\n\tsubroute := sr.Routes.Compile(next)\n\terr := subroute.ServeHTTP(w, r)\n\tif err != nil && sr.Errors != nil {\n\t\tr = sr.Errors.WithError(r, err)\n\t\terrRoute := sr.Errors.Routes.Compile(next)\n\t\treturn errRoute.ServeHTTP(w, r)\n\t}\n\treturn err\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner = (*Subroute)(nil)\n\t_ MiddlewareHandler = (*Subroute)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/templates/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage templates\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterHandlerDirective(\"templates\", parseCaddyfile)\n}\n\n// parseCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\ttemplates [<matcher>] {\n//\t    mime <types...>\n//\t    between <open_delim> <close_delim>\n//\t    root <path>\n//\t}\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\th.Next() // consume directive name\n\tt := new(Templates)\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"mime\":\n\t\t\tt.MIMETypes = h.RemainingArgs()\n\t\t\tif len(t.MIMETypes) == 0 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\tcase \"between\":\n\t\t\tt.Delimiters = h.RemainingArgs()\n\t\t\tif len(t.Delimiters) != 2 {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\tcase \"root\":\n\t\t\tif !h.Args(&t.FileRoot) {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\tcase \"extensions\":\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif t.ExtensionsRaw != nil {\n\t\t\t\treturn nil, h.Err(\"extensions already specified\")\n\t\t\t}\n\t\t\tfor nesting := h.Nesting(); h.NextBlock(nesting); {\n\t\t\t\textensionModuleName := h.Val()\n\t\t\t\tmodID := \"http.handlers.templates.functions.\" + extensionModuleName\n\t\t\t\tunm, err := caddyfile.UnmarshalModule(h.Dispenser, modID)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tcf, ok := unm.(CustomFunctions)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn nil, h.Errf(\"module %s (%T) does not provide template functions\", modID, unm)\n\t\t\t\t}\n\t\t\t\tif t.ExtensionsRaw == nil {\n\t\t\t\t\tt.ExtensionsRaw = make(caddy.ModuleMap)\n\t\t\t\t}\n\t\t\t\tt.ExtensionsRaw[extensionModuleName] = caddyconfig.JSON(cf, nil)\n\t\t\t}\n\t\t}\n\t}\n\treturn t, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/templates/frontmatter.go",
    "content": "package templates\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"strings\"\n\t\"unicode\"\n\n\t\"github.com/BurntSushi/toml\"\n\t\"gopkg.in/yaml.v3\"\n)\n\nfunc extractFrontMatter(input string) (map[string]any, string, error) {\n\t// get the bounds of the first non-empty line\n\tvar firstLineStart, firstLineEnd int\n\tlineEmpty := true\n\tfor i, b := range input {\n\t\tif b == '\\n' {\n\t\t\tfirstLineStart = firstLineEnd\n\t\t\tif firstLineStart > 0 {\n\t\t\t\tfirstLineStart++ // skip newline character\n\t\t\t}\n\t\t\tfirstLineEnd = i\n\t\t\tif !lineEmpty {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tlineEmpty = lineEmpty && unicode.IsSpace(b)\n\t}\n\tfirstLine := input[firstLineStart:firstLineEnd]\n\n\t// ensure any residual Windows carriage return byte is removed\n\tfirstLine = strings.TrimSpace(firstLine)\n\n\t// see what kind of front matter there is, if any\n\tvar closingFence []string\n\tvar fmParser func([]byte) (map[string]any, error)\n\tfor _, fmType := range supportedFrontMatterTypes {\n\t\tif firstLine == fmType.FenceOpen {\n\t\t\tclosingFence = fmType.FenceClose\n\t\t\tfmParser = fmType.ParseFunc\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif fmParser == nil {\n\t\t// no recognized front matter; whole document is body\n\t\treturn nil, input, nil\n\t}\n\n\t// find end of front matter\n\tvar fmEndFence string\n\tfmEndFenceStart := -1\n\tfor _, fence := range closingFence {\n\t\tindex := strings.Index(input[firstLineEnd:], \"\\n\"+fence)\n\t\tif index >= 0 {\n\t\t\tfmEndFenceStart = index\n\t\t\tfmEndFence = fence\n\t\t\tbreak\n\t\t}\n\t}\n\tif fmEndFenceStart < 0 {\n\t\treturn nil, \"\", fmt.Errorf(\"unterminated front matter\")\n\t}\n\tfmEndFenceStart += firstLineEnd + 1 // add 1 to account for newline\n\n\t// extract and parse front matter\n\tfrontMatter := input[firstLineEnd:fmEndFenceStart]\n\tfm, err := fmParser([]byte(frontMatter))\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\t// the rest is the body\n\tbody := 
input[fmEndFenceStart+len(fmEndFence):]\n\n\treturn fm, body, nil\n}\n\nfunc yamlFrontMatter(input []byte) (map[string]any, error) {\n\tm := make(map[string]any)\n\terr := yaml.Unmarshal(input, &m)\n\treturn m, err\n}\n\nfunc tomlFrontMatter(input []byte) (map[string]any, error) {\n\tm := make(map[string]any)\n\terr := toml.Unmarshal(input, &m)\n\treturn m, err\n}\n\nfunc jsonFrontMatter(input []byte) (map[string]any, error) {\n\tinput = append([]byte{'{'}, input...)\n\tinput = append(input, '}')\n\tm := make(map[string]any)\n\terr := json.Unmarshal(input, &m)\n\treturn m, err\n}\n\ntype parsedMarkdownDoc struct {\n\tMeta map[string]any `json:\"meta,omitempty\"`\n\tBody string         `json:\"body,omitempty\"`\n}\n\ntype frontMatterType struct {\n\tFenceOpen  string\n\tFenceClose []string\n\tParseFunc  func(input []byte) (map[string]any, error)\n}\n\nvar supportedFrontMatterTypes = []frontMatterType{\n\t{\n\t\tFenceOpen:  \"---\",\n\t\tFenceClose: []string{\"---\", \"...\"},\n\t\tParseFunc:  yamlFrontMatter,\n\t},\n\t{\n\t\tFenceOpen:  \"+++\",\n\t\tFenceClose: []string{\"+++\"},\n\t\tParseFunc:  tomlFrontMatter,\n\t},\n\t{\n\t\tFenceOpen:  \"{\",\n\t\tFenceClose: []string{\"}\"},\n\t\tParseFunc:  jsonFrontMatter,\n\t},\n}\n"
  },
  {
    "path": "modules/caddyhttp/templates/frontmatter_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage templates\n\nfunc FuzzExtractFrontMatter(data []byte) int {\n\t_, _, err := extractFrontMatter(string(data))\n\tif err != nil {\n\t\treturn 0\n\t}\n\treturn 1\n}\n"
  },
  {
    "path": "modules/caddyhttp/templates/templates.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage templates\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strconv\"\n\t\"strings\"\n\t\"text/template\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Templates{})\n}\n\n// Templates is a middleware which executes response bodies as Go templates.\n// The syntax is documented in the Go standard library's\n// [text/template package](https://golang.org/pkg/text/template/).\n//\n// ⚠️ Template functions/actions are still experimental, so they are subject to change.\n//\n// Custom template functions can be registered by creating a plugin module under the `http.handlers.templates.functions.*` namespace that implements the `CustomFunctions` interface.\n//\n// [All Sprig functions](https://masterminds.github.io/sprig/) are supported.\n//\n// In addition to the standard functions and the Sprig library, Caddy adds\n// extra functions and data that are available to a template:\n//\n// ##### `.Args`\n//\n// A slice of arguments passed to this page/context, for example\n// as the result of a [`include`](#include).\n//\n// ```\n// {{index .Args 0}} // first argument\n// ```\n//\n// ##### `.Cookie`\n//\n// Gets the value of a cookie by name.\n//\n// ```\n// {{.Cookie \"cookiename\"}}\n// 
```\n//\n// ##### `env`\n//\n// Gets an environment variable.\n//\n// ```\n// {{env \"VAR_NAME\"}}\n// ```\n//\n// ##### `placeholder`\n//\n// Gets a [placeholder variable](/docs/conventions#placeholders).\n// The braces (`{}`) have to be omitted.\n//\n// ```\n// {{placeholder \"http.request.uri.path\"}}\n// {{placeholder \"http.error.status_code\"}}\n// ```\n//\n// As a shortcut, `ph` is an alias for `placeholder`.\n//\n// ```\n// {{ph \"http.request.method\"}}\n// ```\n//\n// ##### `.Host`\n//\n// Returns the hostname portion (no port) of the Host header of the HTTP request.\n//\n// ```\n// {{.Host}}\n// ```\n//\n// ##### `httpInclude`\n//\n// Includes the contents of another file, and renders it in-place,\n// by making a virtual HTTP request (also known as a sub-request).\n// The URI path must exist on the same virtual server because the\n// request does not use sockets; instead, the request is crafted in\n// memory and the handler is invoked directly for increased efficiency.\n//\n// ```\n// {{httpInclude \"/foo/bar?q=val\"}}\n// ```\n//\n// ##### `import`\n//\n// Reads and returns the contents of another file, and parses it\n// as a template, adding any template definitions to the template\n// stack. If there are no definitions, the filepath will be the\n// definition name. Any `{{ define }}` blocks will be accessible by\n// `{{ template }}` or `{{ block }}`. Imports must happen before the\n// template or block action is called. Note that the contents are\n// NOT escaped, so you should only import trusted template files.\n//\n// **filename.html**\n// ```\n// {{ define \"main\" }}\n// content\n// {{ end }}\n// ```\n//\n// **index.html**\n// ```\n// {{ import \"/path/to/filename.html\" }}\n// {{ template \"main\" }}\n// ```\n//\n// ##### `include`\n//\n// Includes the contents of another file, rendering it in-place.\n// Optionally can pass key-value pairs as arguments to be accessed\n// by the included file. 
Use [`.Args N`](#args) to access the N-th\n// argument, 0-indexed. Note that the contents are NOT escaped, so\n// you should only include trusted template files.\n//\n// ```\n// {{include \"path/to/file.html\"}}  // no arguments\n// {{include \"path/to/file.html\" \"arg0\" 1 \"value 2\"}}  // with arguments\n// ```\n//\n// ##### `readFile`\n//\n// Reads and returns the contents of another file, as-is.\n// Note that the contents are NOT escaped, so you should\n// only read trusted files.\n//\n// ```\n// {{readFile \"path/to/file.html\"}}\n// ```\n//\n// ##### `listFiles`\n//\n// Returns a list of the files in the given directory, which is relative\n// to the template context's file root.\n//\n// ```\n// {{listFiles \"/mydir\"}}\n// ```\n//\n// ##### `markdown`\n//\n// Renders the given Markdown text as HTML and returns it. This uses the\n// [Goldmark](https://github.com/yuin/goldmark) library,\n// which is CommonMark compliant. It also has these extensions\n// enabled: GitHub Flavored Markdown, Footnote, and syntax\n// highlighting provided by [Chroma](https://github.com/alecthomas/chroma).\n//\n// ```\n// {{markdown \"My _markdown_ text\"}}\n// ```\n//\n// ##### `.RemoteIP`\n//\n// Returns the connection's IP address.\n//\n// ```\n// {{.RemoteIP}}\n// ```\n//\n// ##### `.ClientIP`\n//\n// Returns the real client's IP address, if `trusted_proxies` was configured,\n// otherwise returns the connection's IP address.\n//\n// ```\n// {{.ClientIP}}\n// ```\n//\n// ##### `.Req`\n//\n// Accesses the current HTTP request, which has various fields, including:\n//\n//   - `.Method` - the method\n//   - `.URL` - the URL, which in turn has component fields (Scheme, Host, Path, etc.)\n//   - `.Header` - the header fields\n//   - `.Host` - the Host or :authority header of the request\n//\n// ```\n// {{.Req.Header.Get \"User-Agent\"}}\n// ```\n//\n// ##### `.OriginalReq`\n//\n// Like [`.Req`](#req), except it accesses the original HTTP\n// request before rewrites or other internal 
modifications.\n//\n// ##### `.RespHeader.Add`\n//\n// Adds a header field to the HTTP response.\n//\n// ```\n// {{.RespHeader.Add \"Field-Name\" \"val\"}}\n// ```\n//\n// ##### `.RespHeader.Del`\n//\n// Deletes a header field on the HTTP response.\n//\n// ```\n// {{.RespHeader.Del \"Field-Name\"}}\n// ```\n//\n// ##### `.RespHeader.Set`\n//\n// Sets a header field on the HTTP response, replacing any existing value.\n//\n// ```\n// {{.RespHeader.Set \"Field-Name\" \"val\"}}\n// ```\n//\n// ##### `httpError`\n//\n// Returns an error with the given status code to the HTTP handler chain.\n//\n// ```\n// {{if not (fileExists $includedFile)}}{{httpError 404}}{{end}}\n// ```\n//\n// ##### `splitFrontMatter`\n//\n// Splits front matter out from the body. Front matter is metadata that\n// appears at the very beginning of a file or string. Front matter can\n// be in YAML, TOML, or JSON formats:\n//\n// **TOML** front matter starts and ends with `+++`:\n//\n// ```toml\n// +++\n// template = \"blog\"\n// title = \"Blog Homepage\"\n// sitename = \"A Caddy site\"\n// +++\n// ```\n//\n// **YAML** is surrounded by `---`:\n//\n// ```yaml\n// ---\n// template: blog\n// title: Blog Homepage\n// sitename: A Caddy site\n// ---\n// ```\n//\n// **JSON** is simply `{` and `}`:\n//\n// ```json\n// {\n// \"template\": \"blog\",\n// \"title\": \"Blog Homepage\",\n// \"sitename\": \"A Caddy site\"\n// }\n// ```\n//\n// The resulting front matter will be made available like so:\n//\n// - `.Meta` to access the metadata fields, for example: `{{$parsed.Meta.title}}`\n// - `.Body` to access the body after the front matter, for example: `{{markdown $parsed.Body}}`\n//\n// ##### `stripHTML`\n//\n// Removes HTML from a string.\n//\n// ```\n// {{stripHTML \"Shows <b>only</b> text content\"}}\n// ```\n//\n// ##### `humanize`\n//\n// Transforms size and time inputs to a human readable format.\n// This uses the [go-humanize](https://github.com/dustin/go-humanize) library.\n//\n// The first argument must 
be a format type, and the last argument\n// is the input, or the input can be piped in. The supported format\n// types are:\n// - **size** which turns an integer amount of bytes into a string like `2.3 MB`\n// - **time** which turns a time string into a relative time string like `2 weeks ago`\n//\n// For the `time` format, the layout for parsing the input can be configured\n// by appending a colon `:` followed by the desired time layout. You can\n// find the documentation on time layouts [in Go's docs](https://pkg.go.dev/time#pkg-constants).\n// The default time layout is `RFC1123Z`, i.e. `Mon, 02 Jan 2006 15:04:05 -0700`.\n//\n// ```\n// {{humanize \"size\" \"2048000\"}}\n// {{placeholder \"http.response.header.Content-Length\" | humanize \"size\"}}\n// {{humanize \"time\" \"Fri, 05 May 2022 15:04:05 +0200\"}}\n// {{humanize \"time:2006-Jan-02\" \"2022-May-05\"}}\n// ```\n//\n// ##### `pathEscape`\n//\n// Passes a string through `url.PathEscape`, replacing characters that have\n// special meaning in URL path parameters (`?`, `&`, `%`).\n//\n// Useful e.g. to include filenames containing these characters in URL path\n// parameters, or use them as an `img` element's `src` attribute.\n//\n// ```\n// {{pathEscape \"50%_valid_filename?.jpg\"}}\n// ```\n//\n// ##### `maybe`\n//\n// Invokes a custom template function only if it is registered (plugged-in)\n// in the `http.handlers.templates.functions.*` namespace.\n//\n// The first argument is the function name, and any subsequent arguments\n// are forwarded to that function. If the named function is not available,\n// the invocation is ignored and a log message is emitted.\n//\n// This is useful for templates that optionally use components which may\n// not be present in every build or environment.\n//\n// NOTE: This function is EXPERIMENTAL and subject to change or removal.\n//\n// ```\n// {{ maybe \"myOptionalFunc\" \"arg1\" 2 }}\n// ```\ntype Templates struct {\n\t// The root path from which to load files. 
Required if template functions\n\t// accessing the file system are used (such as include). Default is\n\t// `{http.vars.root}` if set, or current working directory otherwise.\n\tFileRoot string `json:\"file_root,omitempty\"`\n\n\t// The MIME types for which to render templates. It is important to use\n\t// this if the route matchers do not exclude images or other binary files.\n\t// Default is text/plain, text/markdown, and text/html.\n\tMIMETypes []string `json:\"mime_types,omitempty\"`\n\n\t// The template action delimiters. If set, must be precisely two elements:\n\t// the opening and closing delimiters. Default: `[\"{{\", \"}}\"]`\n\tDelimiters []string `json:\"delimiters,omitempty\"`\n\n\t// Extensions adds functions to the template's func map. These often\n\t// act as components on web pages, for example.\n\tExtensionsRaw caddy.ModuleMap `json:\"match,omitempty\" caddy:\"namespace=http.handlers.templates.functions\"`\n\n\tcustomFuncs []template.FuncMap\n\tlogger      *zap.Logger\n}\n\n// CustomFunctions is the interface for registering custom template functions.\ntype CustomFunctions interface {\n\t// CustomTemplateFunctions should return the mapping from custom function names to implementations.\n\tCustomTemplateFunctions() template.FuncMap\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Templates) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.templates\",\n\t\tNew: func() caddy.Module { return new(Templates) },\n\t}\n}\n\n// Provision provisions t.\nfunc (t *Templates) Provision(ctx caddy.Context) error {\n\tt.logger = ctx.Logger()\n\tmods, err := ctx.LoadModule(t, \"ExtensionsRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading template extensions: %v\", err)\n\t}\n\tfor _, modIface := range mods.(map[string]any) {\n\t\tt.customFuncs = append(t.customFuncs, modIface.(CustomFunctions).CustomTemplateFunctions())\n\t}\n\n\tif t.MIMETypes == nil {\n\t\tt.MIMETypes = defaultMIMETypes\n\t}\n\tif 
t.FileRoot == \"\" {\n\t\tt.FileRoot = \"{http.vars.root}\"\n\t}\n\treturn nil\n}\n\n// Validate ensures t has a valid configuration.\nfunc (t *Templates) Validate() error {\n\tif len(t.Delimiters) != 0 && len(t.Delimiters) != 2 {\n\t\treturn fmt.Errorf(\"delimiters must consist of exactly two elements: opening and closing\")\n\t}\n\treturn nil\n}\n\nfunc (t *Templates) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\t// shouldBuf determines whether to execute templates on this response,\n\t// since generally we will not want to execute for images or CSS, etc.\n\tshouldBuf := func(status int, header http.Header) bool {\n\t\tct := header.Get(\"Content-Type\")\n\t\tfor _, mt := range t.MIMETypes {\n\t\t\tif strings.Contains(ct, mt) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\n\trec := caddyhttp.NewResponseRecorder(w, buf, shouldBuf)\n\n\terr := next.ServeHTTP(rec, r)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !rec.Buffered() {\n\t\treturn nil\n\t}\n\n\terr = t.executeTemplate(rec, r)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\trec.Header().Set(\"Content-Length\", strconv.Itoa(buf.Len()))\n\trec.Header().Del(\"Accept-Ranges\") // we don't know ranges for dynamically-created content\n\trec.Header().Del(\"Last-Modified\") // useless for dynamic content since it's always changing\n\n\t// we don't know a way to quickly generate etag for dynamic content,\n\t// and weak etags still cause browsers to rely on it even after a\n\t// refresh, so disable them until we find a better way to do this\n\trec.Header().Del(\"Etag\")\n\n\treturn rec.WriteResponse()\n}\n\n// executeTemplate executes the template contained in wb.buf and replaces it with the results.\nfunc (t *Templates) executeTemplate(rr caddyhttp.ResponseRecorder, r *http.Request) error {\n\tvar fs http.FileSystem\n\tif t.FileRoot != \"\" {\n\t\trepl := 
r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\t\tfs = http.Dir(repl.ReplaceAll(t.FileRoot, \".\"))\n\t}\n\n\tctx := &TemplateContext{\n\t\tRoot:        fs,\n\t\tReq:         r,\n\t\tRespHeader:  WrappedHeader{rr.Header()},\n\t\tconfig:      t,\n\t\tCustomFuncs: t.customFuncs,\n\t}\n\n\terr := ctx.executeTemplateInBuffer(r.URL.Path, rr.Buffer())\n\tif err != nil {\n\t\t// templates may return a custom HTTP error to be propagated to the client,\n\t\t// otherwise for any other error we assume the template is broken\n\t\tvar handlerErr caddyhttp.HandlerError\n\t\tif errors.As(err, &handlerErr) {\n\t\t\treturn handlerErr\n\t\t}\n\t\treturn caddyhttp.Error(http.StatusInternalServerError, err)\n\t}\n\n\treturn nil\n}\n\n// virtualResponseWriter is used in virtualized HTTP requests\n// that templates may execute.\ntype virtualResponseWriter struct {\n\tstatus int\n\theader http.Header\n\tbody   *bytes.Buffer\n}\n\nfunc (vrw *virtualResponseWriter) Header() http.Header {\n\treturn vrw.header\n}\n\nfunc (vrw *virtualResponseWriter) WriteHeader(statusCode int) {\n\tvrw.status = statusCode\n}\n\nfunc (vrw *virtualResponseWriter) Write(data []byte) (int, error) {\n\treturn vrw.body.Write(data)\n}\n\nvar defaultMIMETypes = []string{\n\t\"text/html\",\n\t\"text/plain\",\n\t\"text/markdown\",\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Templates)(nil)\n\t_ caddy.Validator             = (*Templates)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Templates)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/templates/tplcontext.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage templates\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"text/template\"\n\t\"time\"\n\n\t\"github.com/Masterminds/sprig/v3\"\n\tchromahtml \"github.com/alecthomas/chroma/v2/formatters/html\"\n\t\"github.com/dustin/go-humanize\"\n\t\"github.com/yuin/goldmark\"\n\thighlighting \"github.com/yuin/goldmark-highlighting/v2\"\n\t\"github.com/yuin/goldmark/extension\"\n\t\"github.com/yuin/goldmark/parser\"\n\tgmhtml \"github.com/yuin/goldmark/renderer/html\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\n// TemplateContext is the TemplateContext with which HTTP templates are executed.\ntype TemplateContext struct {\n\tRoot        http.FileSystem\n\tReq         *http.Request\n\tArgs        []any // defined by arguments to funcInclude\n\tRespHeader  WrappedHeader\n\tCustomFuncs []template.FuncMap // functions added by plugins\n\n\tconfig *Templates\n\ttpl    *template.Template\n}\n\n// NewTemplate returns a new template intended to be evaluated with this\n// context, as it is initialized with configuration from this context.\nfunc (c *TemplateContext) NewTemplate(tplName string) *template.Template {\n\tc.tpl = 
template.New(tplName).Option(\"missingkey=zero\")\n\n\t// customize delimiters, if applicable\n\tif c.config != nil && len(c.config.Delimiters) == 2 {\n\t\tc.tpl.Delims(c.config.Delimiters[0], c.config.Delimiters[1])\n\t}\n\n\t// add sprig library\n\tc.tpl.Funcs(sprigFuncMap)\n\n\t// add all custom functions\n\tfor _, funcMap := range c.CustomFuncs {\n\t\tc.tpl.Funcs(funcMap)\n\t}\n\n\t// add our own library\n\tc.tpl.Funcs(template.FuncMap{\n\t\t\"include\":          c.funcInclude,\n\t\t\"readFile\":         c.funcReadFile,\n\t\t\"import\":           c.funcImport,\n\t\t\"httpInclude\":      c.funcHTTPInclude,\n\t\t\"stripHTML\":        c.funcStripHTML,\n\t\t\"markdown\":         c.funcMarkdown,\n\t\t\"splitFrontMatter\": c.funcSplitFrontMatter,\n\t\t\"listFiles\":        c.funcListFiles,\n\t\t\"fileStat\":         c.funcFileStat,\n\t\t\"env\":              c.funcEnv,\n\t\t\"placeholder\":      c.funcPlaceholder,\n\t\t\"ph\":               c.funcPlaceholder, // shortcut\n\t\t\"fileExists\":       c.funcFileExists,\n\t\t\"httpError\":        c.funcHTTPError,\n\t\t\"humanize\":         c.funcHumanize,\n\t\t\"maybe\":            c.funcMaybe,\n\t\t\"pathEscape\":       url.PathEscape,\n\t})\n\treturn c.tpl\n}\n\n// OriginalReq returns the original, unmodified, un-rewritten request as\n// it originally came in over the wire.\nfunc (c TemplateContext) OriginalReq() http.Request {\n\tor, _ := c.Req.Context().Value(caddyhttp.OriginalRequestCtxKey).(http.Request)\n\treturn or\n}\n\n// funcInclude returns the contents of filename relative to the site root and renders it in place.\n// Note that included files are NOT escaped, so you should only include\n// trusted files. 
If it is not trusted, be sure to use escaping functions\n// in your template.\nfunc (c TemplateContext) funcInclude(filename string, args ...any) (string, error) {\n\tbodyBuf := bufPool.Get().(*bytes.Buffer)\n\tbodyBuf.Reset()\n\tdefer bufPool.Put(bodyBuf)\n\n\terr := c.readFileToBuffer(filename, bodyBuf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tc.Args = args\n\n\terr = c.executeTemplateInBuffer(filename, bodyBuf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn bodyBuf.String(), nil\n}\n\n// funcReadFile returns the contents of a filename relative to the site root.\n// Note that included files are NOT escaped, so you should only include\n// trusted files. If it is not trusted, be sure to use escaping functions\n// in your template.\nfunc (c TemplateContext) funcReadFile(filename string) (string, error) {\n\tbodyBuf := bufPool.Get().(*bytes.Buffer)\n\tbodyBuf.Reset()\n\tdefer bufPool.Put(bodyBuf)\n\n\terr := c.readFileToBuffer(filename, bodyBuf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn bodyBuf.String(), nil\n}\n\n// readFileToBuffer reads a file into a buffer\nfunc (c TemplateContext) readFileToBuffer(filename string, bodyBuf *bytes.Buffer) error {\n\tif c.Root == nil {\n\t\treturn fmt.Errorf(\"root file system not specified\")\n\t}\n\n\tfile, err := c.Root.Open(filename)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer file.Close()\n\n\t_, err = io.Copy(bodyBuf, file)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// funcHTTPInclude returns the body of a virtual (lightweight) request\n// to the given URI on the same server. 
Note that included bodies\n// are NOT escaped, so you should only include trusted resources.\n// If it is not trusted, be sure to use escaping functions yourself.\nfunc (c TemplateContext) funcHTTPInclude(uri string) (string, error) {\n\t// prevent virtual request loops by counting how many levels\n\t// deep we are; and if we get too deep, return an error\n\trecursionCount := 1\n\tif numStr := c.Req.Header.Get(recursionPreventionHeader); numStr != \"\" {\n\t\tnum, err := strconv.Atoi(numStr)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"parsing %s: %v\", recursionPreventionHeader, err)\n\t\t}\n\t\tif num >= 3 {\n\t\t\treturn \"\", fmt.Errorf(\"virtual request cycle\")\n\t\t}\n\t\trecursionCount = num + 1\n\t}\n\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\tvirtReq, err := http.NewRequest(\"GET\", uri, nil)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tvirtReq.Host = c.Req.Host\n\tvirtReq.RemoteAddr = \"127.0.0.1:10000\" // https://github.com/caddyserver/caddy/issues/5835\n\tvirtReq.Header = c.Req.Header.Clone()\n\tvirtReq.Header.Set(\"Accept-Encoding\", \"identity\") // https://github.com/caddyserver/caddy/issues/4352\n\tvirtReq.Trailer = c.Req.Trailer.Clone()\n\tvirtReq.Header.Set(recursionPreventionHeader, strconv.Itoa(recursionCount))\n\n\tvrw := &virtualResponseWriter{body: buf, header: make(http.Header)}\n\tserver := c.Req.Context().Value(caddyhttp.ServerCtxKey).(http.Handler)\n\n\tserver.ServeHTTP(vrw, virtReq)\n\tif vrw.status >= 400 {\n\t\treturn \"\", fmt.Errorf(\"http %d\", vrw.status)\n\t}\n\n\terr = c.executeTemplateInBuffer(uri, buf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn buf.String(), nil\n}\n\n// funcImport parses the filename into the current template stack. The imported\n// file will be rendered within the current template by calling {{ block }} or\n// {{ template }} from the standard template library. 
If the imported file has\n// no {{ define }} blocks, the name of the import will be the path\nfunc (c *TemplateContext) funcImport(filename string) (string, error) {\n\tbodyBuf := bufPool.Get().(*bytes.Buffer)\n\tbodyBuf.Reset()\n\tdefer bufPool.Put(bodyBuf)\n\n\terr := c.readFileToBuffer(filename, bodyBuf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\t_, err = c.tpl.Parse(bodyBuf.String())\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn \"\", nil\n}\n\nfunc (c *TemplateContext) executeTemplateInBuffer(tplName string, buf *bytes.Buffer) error {\n\tc.NewTemplate(tplName)\n\n\t_, err := c.tpl.Parse(buf.String())\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbuf.Reset() // reuse buffer for output\n\n\treturn c.tpl.Execute(buf, c)\n}\n\nfunc (c TemplateContext) funcPlaceholder(name string) string {\n\trepl := c.Req.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\t// For safety, we don't want to allow the file placeholder in\n\t// templates because it could be used to read arbitrary files\n\t// if the template contents were not trusted.\n\trepl = repl.WithoutFile()\n\n\tvalue, _ := repl.GetString(name)\n\treturn value\n}\n\nfunc (TemplateContext) funcEnv(varName string) string {\n\treturn os.Getenv(varName)\n}\n\n// Cookie gets the value of a cookie with name.\nfunc (c TemplateContext) Cookie(name string) string {\n\tcookies := c.Req.Cookies()\n\tfor _, cookie := range cookies {\n\t\tif cookie.Name == name {\n\t\t\treturn cookie.Value\n\t\t}\n\t}\n\treturn \"\"\n}\n\n// RemoteIP gets the IP address of the connection's remote IP.\nfunc (c TemplateContext) RemoteIP() string {\n\tip, _, err := net.SplitHostPort(c.Req.RemoteAddr)\n\tif err != nil {\n\t\treturn c.Req.RemoteAddr\n\t}\n\treturn ip\n}\n\n// ClientIP gets the IP address of the real client making the request\n// if the request is trusted (see trusted_proxies), otherwise returns\n// the connection's remote IP.\nfunc (c TemplateContext) ClientIP() string {\n\taddress := 
caddyhttp.GetVar(c.Req.Context(), caddyhttp.ClientIPVarKey).(string)\n\tclientIP, _, err := net.SplitHostPort(address)\n\tif err != nil {\n\t\tclientIP = address // no port\n\t}\n\treturn clientIP\n}\n\n// Host returns the hostname portion of the Host header\n// from the HTTP request.\nfunc (c TemplateContext) Host() (string, error) {\n\thost, _, err := net.SplitHostPort(c.Req.Host)\n\tif err != nil {\n\t\tif !strings.Contains(c.Req.Host, \":\") {\n\t\t\t// common with sites served on the default port 80\n\t\t\treturn c.Req.Host, nil\n\t\t}\n\t\treturn \"\", err\n\t}\n\treturn host, nil\n}\n\n// funcStripHTML returns s without HTML tags. It is fairly naive\n// but works with most valid HTML inputs.\nfunc (TemplateContext) funcStripHTML(s string) string {\n\tvar buf bytes.Buffer\n\tvar inTag, inQuotes bool\n\tvar tagStart int\n\tfor i, ch := range s {\n\t\tif inTag {\n\t\t\tif ch == '>' && !inQuotes {\n\t\t\t\tinTag = false\n\t\t\t} else if ch == '<' && !inQuotes {\n\t\t\t\t// false start\n\t\t\t\tbuf.WriteString(s[tagStart:i])\n\t\t\t\ttagStart = i\n\t\t\t} else if ch == '\"' {\n\t\t\t\tinQuotes = !inQuotes\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif ch == '<' {\n\t\t\tinTag = true\n\t\t\ttagStart = i\n\t\t\tcontinue\n\t\t}\n\t\tbuf.WriteRune(ch)\n\t}\n\tif inTag {\n\t\t// false start\n\t\tbuf.WriteString(s[tagStart:])\n\t}\n\treturn buf.String()\n}\n\n// funcMarkdown renders the markdown body as HTML. 
The resulting\n// HTML is NOT escaped so that it can be rendered as HTML.\nfunc (TemplateContext) funcMarkdown(input any) (string, error) {\n\tinputStr := caddy.ToString(input)\n\n\tmd := goldmark.New(\n\t\tgoldmark.WithExtensions(\n\t\t\textension.GFM,\n\t\t\textension.Footnote,\n\t\t\thighlighting.NewHighlighting(\n\t\t\t\thighlighting.WithFormatOptions(\n\t\t\t\t\tchromahtml.WithClasses(true),\n\t\t\t\t),\n\t\t\t),\n\t\t),\n\t\tgoldmark.WithParserOptions(\n\t\t\tparser.WithAutoHeadingID(),\n\t\t),\n\t\tgoldmark.WithRendererOptions(\n\t\t\tgmhtml.WithUnsafe(), // TODO: this is not awesome, maybe should be configurable?\n\t\t),\n\t)\n\n\tbuf := bufPool.Get().(*bytes.Buffer)\n\tbuf.Reset()\n\tdefer bufPool.Put(buf)\n\n\terr := md.Convert([]byte(inputStr), buf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn buf.String(), nil\n}\n\n// funcSplitFrontMatter parses front matter out from the beginning of input,\n// and returns the separated key-value pairs and the body/content. input\n// must be a \"stringy\" value.\nfunc (TemplateContext) funcSplitFrontMatter(input any) (parsedMarkdownDoc, error) {\n\tmeta, body, err := extractFrontMatter(caddy.ToString(input))\n\tif err != nil {\n\t\treturn parsedMarkdownDoc{}, err\n\t}\n\treturn parsedMarkdownDoc{Meta: meta, Body: body}, nil\n}\n\n// funcListFiles reads and returns a slice of names from the given\n// directory relative to the root of c.\nfunc (c TemplateContext) funcListFiles(name string) ([]string, error) {\n\tif c.Root == nil {\n\t\treturn nil, fmt.Errorf(\"root file system not specified\")\n\t}\n\n\tdir, err := c.Root.Open(path.Clean(name))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer dir.Close()\n\n\tstat, err := dir.Stat()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif !stat.IsDir() {\n\t\treturn nil, fmt.Errorf(\"%v is not a directory\", name)\n\t}\n\n\tdirInfo, err := dir.Readdir(0)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tnames := make([]string, len(dirInfo))\n\tfor i, fileInfo 
:= range dirInfo {\n\t\tnames[i] = fileInfo.Name()\n\t}\n\n\treturn names, nil\n}\n\n// funcFileExists returns true if filename can be opened successfully.\nfunc (c TemplateContext) funcFileExists(filename string) (bool, error) {\n\tif c.Root == nil {\n\t\treturn false, fmt.Errorf(\"root file system not specified\")\n\t}\n\tfile, err := c.Root.Open(filename)\n\tif err == nil {\n\t\tfile.Close()\n\t\treturn true, nil\n\t}\n\treturn false, nil\n}\n\n// funcFileStat returns Stat of a filename\nfunc (c TemplateContext) funcFileStat(filename string) (fs.FileInfo, error) {\n\tif c.Root == nil {\n\t\treturn nil, fmt.Errorf(\"root file system not specified\")\n\t}\n\n\tfile, err := c.Root.Open(path.Clean(filename))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer file.Close()\n\n\treturn file.Stat()\n}\n\n// funcHTTPError returns a structured HTTP handler error. EXPERIMENTAL; SUBJECT TO CHANGE.\n// Example usage: `{{if not (fileExists $includeFile)}}{{httpError 404}}{{end}}`\nfunc (c TemplateContext) funcHTTPError(statusCode int) (bool, error) {\n\t// Delete some headers that may have been set by the underlying\n\t// handler (such as file_server) which may break the error response.\n\tc.RespHeader.Header.Del(\"Content-Length\")\n\tc.RespHeader.Header.Del(\"Content-Type\")\n\tc.RespHeader.Header.Del(\"Etag\")\n\tc.RespHeader.Header.Del(\"Last-Modified\")\n\tc.RespHeader.Header.Del(\"Accept-Ranges\")\n\n\treturn false, caddyhttp.Error(statusCode, nil)\n}\n\n// funcHumanize transforms size and time inputs to a human readable format.\n//\n// Size inputs are expected to be integers, and are formatted as a\n// byte size, such as \"83 MB\".\n//\n// Time inputs are parsed using the given layout (default layout is RFC1123Z)\n// and are formatted as a relative time, such as \"2 weeks ago\".\n// See https://pkg.go.dev/time#pkg-constants for time layout docs.\nfunc (c TemplateContext) funcHumanize(formatType, data string) (string, error) {\n\t// The format type can optionally be 
followed\n\t// by a colon to provide arguments for the format\n\tparts := strings.Split(formatType, \":\")\n\n\tswitch parts[0] {\n\tcase \"size\":\n\t\tdataint, dataerr := strconv.ParseUint(data, 10, 64)\n\t\tif dataerr != nil {\n\t\t\treturn \"\", fmt.Errorf(\"humanize: size cannot be parsed: %s\", dataerr.Error())\n\t\t}\n\t\treturn humanize.Bytes(dataint), nil\n\n\tcase \"time\":\n\t\ttimelayout := time.RFC1123Z\n\t\tif len(parts) > 1 {\n\t\t\ttimelayout = parts[1]\n\t\t}\n\n\t\tdataint, dataerr := time.Parse(timelayout, data)\n\t\tif dataerr != nil {\n\t\t\treturn \"\", fmt.Errorf(\"humanize: time cannot be parsed: %s\", dataerr.Error())\n\t\t}\n\t\treturn humanize.Time(dataint), nil\n\t}\n\n\treturn \"\", fmt.Errorf(\"unknown format type: %s\", parts[0])\n}\n\n// funcMaybe invokes the plugged-in function named functionName if it is plugged in\n// (is a module in the 'http.handlers.templates.functions' namespace). If it is not\n// available, a log message is emitted.\n//\n// The first argument is the function name, and the rest of the arguments are\n// passed on to the actual function.\n//\n// This function is useful for executing templates that use components that may be\n// considered optional in some cases (like during local development) where you do\n// not want to require everyone to have a custom Caddy build to be able to execute\n// your template.\n//\n// NOTE: This function is EXPERIMENTAL and subject to change or removal.\nfunc (c TemplateContext) funcMaybe(functionName string, args ...any) (any, error) {\n\tfor _, funcMap := range c.CustomFuncs {\n\t\tif fn, ok := funcMap[functionName]; ok {\n\t\t\tval := reflect.ValueOf(fn)\n\t\t\tif val.Kind() != reflect.Func {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\targVals := make([]reflect.Value, len(args))\n\t\t\tfor i, arg := range args {\n\t\t\t\targVals[i] = reflect.ValueOf(arg)\n\t\t\t}\n\t\t\treturnVals := val.Call(argVals)\n\t\t\tswitch len(returnVals) {\n\t\t\tcase 0:\n\t\t\t\treturn \"\", nil\n\t\t\tcase 
1:\n\t\t\t\treturn returnVals[0].Interface(), nil\n\t\t\tcase 2:\n\t\t\t\tvar err error\n\t\t\t\tif !returnVals[1].IsNil() {\n\t\t\t\t\terr = returnVals[1].Interface().(error)\n\t\t\t\t}\n\t\t\t\treturn returnVals[0].Interface(), err\n\t\t\tdefault:\n\t\t\t\treturn nil, fmt.Errorf(\"maybe %s: invalid number of return values: %d\", functionName, len(returnVals))\n\t\t\t}\n\t\t}\n\t}\n\tc.config.logger.Named(\"maybe\").Warn(\"template function could not be found; ignoring invocation\", zap.String(\"name\", functionName))\n\treturn \"\", nil\n}\n\n// WrappedHeader wraps niladic functions so that they\n// can be used in templates. (Template functions must\n// return a value.)\ntype WrappedHeader struct{ http.Header }\n\n// Add adds a header field value, appending val to\n// existing values for that field. It returns an\n// empty string.\nfunc (h WrappedHeader) Add(field, val string) string {\n\th.Header.Add(field, val)\n\treturn \"\"\n}\n\n// Set sets a header field value, overwriting any\n// other values for that field. It returns an\n// empty string.\nfunc (h WrappedHeader) Set(field, val string) string {\n\th.Header.Set(field, val)\n\treturn \"\"\n}\n\n// Del deletes a header field. It returns an empty string.\nfunc (h WrappedHeader) Del(field string) string {\n\th.Header.Del(field)\n\treturn \"\"\n}\n\nvar bufPool = sync.Pool{\n\tNew: func() any {\n\t\treturn new(bytes.Buffer)\n\t},\n}\n\n// at time of writing, sprig.FuncMap() makes a copy, thus\n// involves iterating the whole map, so do it just once\nvar sprigFuncMap = sprig.TxtFuncMap()\n\nconst recursionPreventionHeader = \"Caddy-Templates-Include\"\n"
  },
  {
    "path": "modules/caddyhttp/templates/tplcontext_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage templates\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/fs\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"reflect\"\n\t\"sort\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\ntype handle struct{}\n\nfunc (h *handle) ServeHTTP(w http.ResponseWriter, r *http.Request) {\n\tif r.Header.Get(\"Accept-Encoding\") == \"identity\" {\n\t\tw.Write([]byte(\"good contents\"))\n\t} else {\n\t\tw.Write([]byte(\"bad cause Accept-Encoding: \" + r.Header.Get(\"Accept-Encoding\")))\n\t}\n}\n\nfunc TestHTTPInclude(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\tfor i, test := range []struct {\n\t\turi     string\n\t\thandler *handle\n\t\texpect  string\n\t}{\n\t\t{\n\t\t\turi:     \"https://example.com/foo/bar\",\n\t\t\thandler: &handle{},\n\t\t\texpect:  \"good contents\",\n\t\t},\n\t} {\n\t\tctx := context.WithValue(tplContext.Req.Context(), caddyhttp.ServerCtxKey, test.handler)\n\t\ttplContext.Req = tplContext.Req.WithContext(ctx)\n\t\ttplContext.Req.Header.Add(\"Accept-Encoding\", \"gzip\")\n\t\tresult, err := tplContext.funcHTTPInclude(test.uri)\n\t\tif result != test.expect {\n\t\t\tt.Errorf(\"Test %d: expected '%s' but got '%s'\", i, test.expect, result)\n\t\t}\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d: got error: %v\", i, 
err)\n\t\t}\n\t}\n}\n\nfunc TestMarkdown(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\n\tfor i, test := range []struct {\n\t\tbody   string\n\t\texpect string\n\t}{\n\t\t{\n\t\t\tbody:   \"- str1\\n- str2\\n\",\n\t\t\texpect: \"<ul>\\n<li>str1</li>\\n<li>str2</li>\\n</ul>\\n\",\n\t\t},\n\t} {\n\t\tresult, err := tplContext.funcMarkdown(test.body)\n\t\tif result != test.expect {\n\t\t\tt.Errorf(\"Test %d: expected '%s' but got '%s'\", i, test.expect, result)\n\t\t}\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Test %d: got error: %v\", i, err)\n\t\t}\n\t}\n}\n\nfunc TestCookie(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tcookie     *http.Cookie\n\t\tcookieName string\n\t\texpect     string\n\t}{\n\t\t{\n\t\t\t// happy path\n\t\t\tcookie:     &http.Cookie{Name: \"cookieName\", Value: \"cookieValue\"},\n\t\t\tcookieName: \"cookieName\",\n\t\t\texpect:     \"cookieValue\",\n\t\t},\n\t\t{\n\t\t\t// try to get a non-existing cookie\n\t\t\tcookie:     &http.Cookie{Name: \"cookieName\", Value: \"cookieValue\"},\n\t\t\tcookieName: \"notExisting\",\n\t\t\texpect:     \"\",\n\t\t},\n\t\t{\n\t\t\t// partial name match\n\t\t\tcookie:     &http.Cookie{Name: \"cookie\", Value: \"cookieValue\"},\n\t\t\tcookieName: \"cook\",\n\t\t\texpect:     \"\",\n\t\t},\n\t\t{\n\t\t\t// cookie with optional fields\n\t\t\tcookie:     &http.Cookie{Name: \"cookie\", Value: \"cookieValue\", Path: \"/path\", Domain: \"https://localhost\", Expires: time.Now().Add(10 * time.Minute), MaxAge: 120},\n\t\t\tcookieName: \"cookie\",\n\t\t\texpect:     \"cookieValue\",\n\t\t},\n\t} {\n\t\ttplContext := getContextOrFail(t)\n\t\ttplContext.Req.AddCookie(test.cookie)\n\t\tactual := tplContext.Cookie(test.cookieName)\n\t\tif actual != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected cookie value '%s' but got '%s' for cookie with name '%s'\",\n\t\t\t\ti, test.expect, actual, test.cookieName)\n\t\t}\n\t}\n}\n\nfunc TestImport(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tfileContent 
string\n\t\tfileName    string\n\t\tshouldErr   bool\n\t\texpect      string\n\t}{\n\t\t{\n\t\t\t// file exists, template is defined\n\t\t\tfileContent: `{{ define \"imported\" }}text{{end}}`,\n\t\t\tfileName:    \"file1\",\n\t\t\tshouldErr:   false,\n\t\t\texpect:      `\"imported\"`,\n\t\t},\n\t\t{\n\t\t\t// file does not exist\n\t\t\tfileContent: \"\",\n\t\t\tfileName:    \"\",\n\t\t\tshouldErr:   true,\n\t\t},\n\t} {\n\t\ttplContext := getContextOrFail(t)\n\t\tvar absFilePath string\n\n\t\t// create files for test case\n\t\tif test.fileName != \"\" {\n\t\t\tabsFilePath = filepath.Join(fmt.Sprintf(\"%s\", tplContext.Root), test.fileName)\n\t\t\tif err := os.WriteFile(absFilePath, []byte(test.fileContent), os.ModePerm); err != nil {\n\t\t\t\tos.Remove(absFilePath)\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating file, got: '%s'\", i, err.Error())\n\t\t\t}\n\t\t}\n\n\t\t// perform test\n\t\ttplContext.NewTemplate(\"parent\")\n\t\tactual, err := tplContext.funcImport(test.fileName)\n\t\ttemplateWasDefined := strings.Contains(tplContext.tpl.DefinedTemplates(), test.expect)\n\t\tif err != nil {\n\t\t\tif !test.shouldErr {\n\t\t\t\tt.Errorf(\"Test %d: Expected no error, got: '%s'\", i, err)\n\t\t\t}\n\t\t} else if test.shouldErr {\n\t\t\tt.Errorf(\"Test %d: Expected error but had none\", i)\n\t\t} else if !templateWasDefined && actual != \"\" {\n\t\t\t// template should be defined, return value should be an empty string\n\t\t\tt.Errorf(\"Test %d: Expected template %s to be defined, but got %s\", i, test.expect, tplContext.tpl.DefinedTemplates())\n\t\t}\n\n\t\tif absFilePath != \"\" {\n\t\t\tif err := os.Remove(absFilePath); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error removing temporary test file, got: %v\", i, err)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestNestedInclude(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tchild      string\n\t\tchildFile  string\n\t\tparent     string\n\t\tparentFile 
string\n\t\tshouldErr  bool\n\t\texpect     string\n\t\tchild2     string\n\t\tchild2File string\n\t}{\n\t\t{\n\t\t\t// include in parent\n\t\t\tchild:      `{{ include \"file1\" }}`,\n\t\t\tchildFile:  \"file0\",\n\t\t\tparent:     `{{ $content := \"file2\" }}{{ $p := include $content}}`,\n\t\t\tparentFile: \"file1\",\n\t\t\tshouldErr:  false,\n\t\t\texpect:     ``,\n\t\t\tchild2:     `This shouldn't show`,\n\t\t\tchild2File: \"file2\",\n\t\t},\n\t} {\n\t\tcontext := getContextOrFail(t)\n\t\tvar absFilePath string\n\t\tvar absFilePath0 string\n\t\tvar absFilePath1 string\n\t\tvar buf *bytes.Buffer\n\t\tvar err error\n\n\t\t// create files for test case\n\t\tif test.parentFile != \"\" {\n\t\t\tabsFilePath = filepath.Join(fmt.Sprintf(\"%s\", context.Root), test.parentFile)\n\t\t\tif err := os.WriteFile(absFilePath, []byte(test.parent), os.ModePerm); err != nil {\n\t\t\t\tos.Remove(absFilePath)\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating file, got: '%s'\", i, err.Error())\n\t\t\t}\n\t\t}\n\t\tif test.childFile != \"\" {\n\t\t\tabsFilePath0 = filepath.Join(fmt.Sprintf(\"%s\", context.Root), test.childFile)\n\t\t\tif err := os.WriteFile(absFilePath0, []byte(test.child), os.ModePerm); err != nil {\n\t\t\t\tos.Remove(absFilePath0)\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating file, got: '%s'\", i, err.Error())\n\t\t\t}\n\t\t}\n\t\tif test.child2File != \"\" {\n\t\t\tabsFilePath1 = filepath.Join(fmt.Sprintf(\"%s\", context.Root), test.child2File)\n\t\t\tif err := os.WriteFile(absFilePath1, []byte(test.child2), os.ModePerm); err != nil {\n\t\t\t\tos.Remove(absFilePath1)\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating file, got: '%s'\", i, err.Error())\n\t\t\t}\n\t\t}\n\n\t\tbuf = bufPool.Get().(*bytes.Buffer)\n\t\tbuf.Reset()\n\t\tdefer bufPool.Put(buf)\n\t\tbuf.WriteString(test.child)\n\t\terr = context.executeTemplateInBuffer(test.childFile, buf)\n\n\t\tif err != nil {\n\t\t\tif !test.shouldErr {\n\t\t\t\tt.Errorf(\"Test %d: Expected no 
error, got: '%s'\", i, err)\n\t\t\t}\n\t\t} else if test.shouldErr {\n\t\t\tt.Errorf(\"Test %d: Expected error but had none\", i)\n\t\t} else if buf.String() != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected '%s' but got '%s'\", i, test.expect, buf.String())\n\t\t}\n\n\t\tif absFilePath != \"\" {\n\t\t\tif err := os.Remove(absFilePath); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error removing temporary test file, got: %v\", i, err)\n\t\t\t}\n\t\t}\n\t\tif absFilePath0 != \"\" {\n\t\t\tif err := os.Remove(absFilePath0); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error removing temporary test file, got: %v\", i, err)\n\t\t\t}\n\t\t}\n\t\tif absFilePath1 != \"\" {\n\t\t\tif err := os.Remove(absFilePath1); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error removing temporary test file, got: %v\", i, err)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestInclude(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tfileContent string\n\t\tfileName    string\n\t\tshouldErr   bool\n\t\texpect      string\n\t\targs        string\n\t}{\n\t\t{\n\t\t\t// file exists, content is text only\n\t\t\tfileContent: \"text\",\n\t\t\tfileName:    \"file1\",\n\t\t\tshouldErr:   false,\n\t\t\texpect:      \"text\",\n\t\t},\n\t\t{\n\t\t\t// file exists, content is template\n\t\t\tfileContent: \"{{ if . 
}}text{{ end }}\",\n\t\t\tfileName:    \"file1\",\n\t\t\tshouldErr:   false,\n\t\t\texpect:      \"text\",\n\t\t},\n\t\t{\n\t\t\t// file does not exit\n\t\t\tfileContent: \"\",\n\t\t\tfileName:    \"\",\n\t\t\tshouldErr:   true,\n\t\t},\n\t\t{\n\t\t\t// args\n\t\t\tfileContent: \"{{ index .Args 0 }}\",\n\t\t\tfileName:    \"file1\",\n\t\t\tshouldErr:   false,\n\t\t\targs:        \"text\",\n\t\t\texpect:      \"text\",\n\t\t},\n\t\t{\n\t\t\t// args, reference arg out of range\n\t\t\tfileContent: \"{{ index .Args 1 }}\",\n\t\t\tfileName:    \"file1\",\n\t\t\tshouldErr:   true,\n\t\t\targs:        \"text\",\n\t\t},\n\t} {\n\t\ttplContext := getContextOrFail(t)\n\t\tvar absFilePath string\n\n\t\t// create files for test case\n\t\tif test.fileName != \"\" {\n\t\t\tabsFilePath := filepath.Join(fmt.Sprintf(\"%s\", tplContext.Root), test.fileName)\n\t\t\tif err := os.WriteFile(absFilePath, []byte(test.fileContent), os.ModePerm); err != nil {\n\t\t\t\tos.Remove(absFilePath)\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating file, got: '%s'\", i, err.Error())\n\t\t\t}\n\t\t}\n\n\t\t// perform test\n\t\tactual, err := tplContext.funcInclude(test.fileName, test.args)\n\t\tif err != nil {\n\t\t\tif !test.shouldErr {\n\t\t\t\tt.Errorf(\"Test %d: Expected no error, got: '%s'\", i, err)\n\t\t\t}\n\t\t} else if test.shouldErr {\n\t\t\tt.Errorf(\"Test %d: Expected error but had none\", i)\n\t\t} else if actual != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %s but got %s\", i, test.expect, actual)\n\t\t}\n\n\t\tif absFilePath != \"\" {\n\t\t\tif err := os.Remove(absFilePath); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error removing temporary test file, got: %v\", i, err)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestCookieMultipleCookies(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\n\tcookieNameBase, cookieValueBase := \"cookieName\", \"cookieValue\"\n\n\tfor i := 0; i < 10; i++ 
{\n\t\ttplContext.Req.AddCookie(&http.Cookie{\n\t\t\tName:  fmt.Sprintf(\"%s%d\", cookieNameBase, i),\n\t\t\tValue: fmt.Sprintf(\"%s%d\", cookieValueBase, i),\n\t\t})\n\t}\n\n\tfor i := 0; i < 10; i++ {\n\t\texpectedCookieVal := fmt.Sprintf(\"%s%d\", cookieValueBase, i)\n\t\tactualCookieVal := tplContext.Cookie(fmt.Sprintf(\"%s%d\", cookieNameBase, i))\n\t\tif actualCookieVal != expectedCookieVal {\n\t\t\tt.Errorf(\"Expected cookie value %s, found %s\", expectedCookieVal, actualCookieVal)\n\t\t}\n\t}\n}\n\nfunc TestIP(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\tfor i, test := range []struct {\n\t\tinputRemoteAddr string\n\t\texpect          string\n\t}{\n\t\t{\"1.1.1.1:1111\", \"1.1.1.1\"},\n\t\t{\"1.1.1.1\", \"1.1.1.1\"},\n\t\t{\"[::1]:11\", \"::1\"},\n\t\t{\"[2001:db8:a0b:12f0::1]\", \"[2001:db8:a0b:12f0::1]\"},\n\t\t{`[fe80:1::3%eth0]:44`, `fe80:1::3%eth0`},\n\t} {\n\t\ttplContext.Req.RemoteAddr = test.inputRemoteAddr\n\t\tif actual := tplContext.RemoteIP(); actual != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %s but got %s\", i, test.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestStripHTML(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\n\tfor i, test := range []struct {\n\t\tinput  string\n\t\texpect string\n\t}{\n\t\t{\n\t\t\t// no tags\n\t\t\tinput:  `h1`,\n\t\t\texpect: `h1`,\n\t\t},\n\t\t{\n\t\t\t// happy path\n\t\t\tinput:  `<h1>h1</h1>`,\n\t\t\texpect: `h1`,\n\t\t},\n\t\t{\n\t\t\t// tag in quotes\n\t\t\tinput:  `<h1\">\">h1</h1>`,\n\t\t\texpect: `h1`,\n\t\t},\n\t\t{\n\t\t\t// multiple tags\n\t\t\tinput:  `<h1><b>h1</b></h1>`,\n\t\t\texpect: `h1`,\n\t\t},\n\t\t{\n\t\t\t// tags not closed\n\t\t\tinput:  `<h1`,\n\t\t\texpect: `<h1`,\n\t\t},\n\t\t{\n\t\t\t// false start\n\t\t\tinput:  `<h1<b>hi`,\n\t\t\texpect: `<h1hi`,\n\t\t},\n\t} {\n\t\tactual := tplContext.funcStripHTML(test.input)\n\t\tif actual != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %s, found %s. 
Input was StripHTML(%s)\", i, test.expect, actual, test.input)\n\t\t}\n\t}\n}\n\nfunc TestFileListing(t *testing.T) {\n\tfor i, test := range []struct {\n\t\tfileNames []string\n\t\tinputBase string\n\t\tshouldErr bool\n\t\tverifyErr func(error) bool\n\t}{\n\t\t{\n\t\t\t// directory and files exist\n\t\t\tfileNames: []string{\"file1\", \"file2\"},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t{\n\t\t\t// directory exists, no files\n\t\t\tfileNames: []string{},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t{\n\t\t\t// file or directory does not exist\n\t\t\tfileNames: nil,\n\t\t\tinputBase: \"doesNotExist\",\n\t\t\tshouldErr: true,\n\t\t\tverifyErr: func(err error) bool { return errors.Is(err, fs.ErrNotExist) },\n\t\t},\n\t\t{\n\t\t\t// directory and files exist, but path to a file\n\t\t\tfileNames: []string{\"file1\", \"file2\"},\n\t\t\tinputBase: \"file1\",\n\t\t\tshouldErr: true,\n\t\t\tverifyErr: func(err error) bool {\n\t\t\t\treturn strings.HasSuffix(err.Error(), \"is not a directory\")\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t// try to escape Context Root\n\t\t\tfileNames: nil,\n\t\t\tinputBase: filepath.Join(\"..\", \"..\", \"..\", \"..\", \"..\", \"etc\"),\n\t\t\tshouldErr: true,\n\t\t\tverifyErr: func(err error) bool { return errors.Is(err, fs.ErrNotExist) },\n\t\t},\n\t} {\n\t\ttplContext := getContextOrFail(t)\n\t\tvar dirPath string\n\t\tvar err error\n\n\t\t// create files for test case\n\t\tif test.fileNames != nil {\n\t\t\tdirPath, err = os.MkdirTemp(fmt.Sprintf(\"%s\", tplContext.Root), \"caddy_ctxtest\")\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating directory, got: '%s'\", i, err.Error())\n\t\t\t}\n\t\t\tfor _, name := range test.fileNames {\n\t\t\t\tabsFilePath := filepath.Join(dirPath, name)\n\t\t\t\tif err = os.WriteFile(absFilePath, []byte(\"\"), os.ModePerm); err != nil {\n\t\t\t\t\tos.RemoveAll(dirPath)\n\t\t\t\t\tt.Fatalf(\"Test %d: Expected no error creating file, got: '%s'\", i, err.Error())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// 
perform test\n\t\tinput := filepath.ToSlash(filepath.Join(filepath.Base(dirPath), test.inputBase))\n\t\tactual, err := tplContext.funcListFiles(input)\n\t\tif err != nil {\n\t\t\tif !test.shouldErr {\n\t\t\t\tt.Errorf(\"Test %d: Expected no error, got: '%s'\", i, err)\n\t\t\t} else if !test.verifyErr(err) {\n\t\t\t\tt.Errorf(\"Test %d: Could not verify error content, got: '%s'\", i, err)\n\t\t\t}\n\t\t} else if test.shouldErr {\n\t\t\tt.Errorf(\"Test %d: Expected error but had none\", i)\n\t\t} else {\n\t\t\tnumFiles := len(test.fileNames)\n\t\t\t// reflect.DeepEqual does not consider two empty slices to be equal\n\t\t\tif numFiles == 0 && len(actual) != 0 {\n\t\t\t\tt.Errorf(\"Test %d: Expected files %v, got: %v\",\n\t\t\t\t\ti, test.fileNames, actual)\n\t\t\t} else {\n\t\t\t\tsort.Strings(actual)\n\t\t\t\tif numFiles > 0 && !reflect.DeepEqual(test.fileNames, actual) {\n\t\t\t\t\tt.Errorf(\"Test %d: Expected files %v, got: %v\",\n\t\t\t\t\t\ti, test.fileNames, actual)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif dirPath != \"\" {\n\t\t\tif err := os.RemoveAll(dirPath); err != nil && !errors.Is(err, fs.ErrNotExist) {\n\t\t\t\tt.Fatalf(\"Test %d: Expected no error removing temporary test directory, got: %v\", i, err)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc TestSplitFrontMatter(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\n\tfor i, test := range []struct {\n\t\tinput  string\n\t\texpect string\n\t\tbody   string\n\t}{\n\t\t{\n\t\t\t// yaml with windows newline\n\t\t\tinput:  \"---\\r\\ntitle: Welcome\\r\\n---\\r\\n# Test\\\\r\\\\n\",\n\t\t\texpect: `Welcome`,\n\t\t\tbody:   \"\\r\\n# Test\\\\r\\\\n\",\n\t\t},\n\t\t{\n\t\t\t// yaml\n\t\t\tinput: `---\ntitle: Welcome\n---\n### Test`,\n\t\t\texpect: `Welcome`,\n\t\t\tbody:   \"\\n### Test\",\n\t\t},\n\t\t{\n\t\t\t// yaml with dots for closer\n\t\t\tinput: `---\ntitle: Welcome\n...\n### Test`,\n\t\t\texpect: `Welcome`,\n\t\t\tbody:   \"\\n### Test\",\n\t\t},\n\t\t{\n\t\t\t// yaml with non-fence '...' 
line after closing fence (i.e. first matching closing fence should be used)\n\t\t\tinput: `---\ntitle: Welcome\n---\n### Test\n...\nyeah`,\n\t\t\texpect: `Welcome`,\n\t\t\tbody:   \"\\n### Test\\n...\\nyeah\",\n\t\t},\n\t\t{\n\t\t\t// toml\n\t\t\tinput: `+++\ntitle = \"Welcome\"\n+++\n### Test`,\n\t\t\texpect: `Welcome`,\n\t\t\tbody:   \"\\n### Test\",\n\t\t},\n\t\t{\n\t\t\t// json\n\t\t\tinput: `{\n    \"title\": \"Welcome\"\n}\n### Test`,\n\t\t\texpect: `Welcome`,\n\t\t\tbody:   \"\\n### Test\",\n\t\t},\n\t} {\n\t\tresult, _ := tplContext.funcSplitFrontMatter(test.input)\n\t\tif result.Meta[\"title\"] != test.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %s, found %s. Input was SplitFrontMatter(%s)\", i, test.expect, result.Meta[\"title\"], test.input)\n\t\t}\n\t\tif result.Body != test.body {\n\t\t\tt.Errorf(\"Test %d: Expected body %s, found %s. Input was SplitFrontMatter(%s)\", i, test.body, result.Body, test.input)\n\t\t}\n\t}\n}\n\nfunc TestHumanize(t *testing.T) {\n\ttplContext := getContextOrFail(t)\n\tfor i, test := range []struct {\n\t\tformat    string\n\t\tinputData string\n\t\texpect    string\n\t\terrorCase bool\n\t\tverifyErr func(actual_string, substring string) bool\n\t}{\n\t\t{\n\t\t\tformat:    \"size\",\n\t\t\tinputData: \"2048000\",\n\t\t\texpect:    \"2.0 MB\",\n\t\t\terrorCase: false,\n\t\t\tverifyErr: strings.Contains,\n\t\t},\n\t\t{\n\t\t\tformat:    \"time\",\n\t\t\tinputData: \"Fri, 05 May 2022 15:04:05 +0200\",\n\t\t\texpect:    \"ago\",\n\t\t\terrorCase: false,\n\t\t\tverifyErr: strings.HasSuffix,\n\t\t},\n\t\t{\n\t\t\tformat:    \"time:2006-Jan-02\",\n\t\t\tinputData: \"2022-May-05\",\n\t\t\texpect:    \"ago\",\n\t\t\terrorCase: false,\n\t\t\tverifyErr: strings.HasSuffix,\n\t\t},\n\t\t{\n\t\t\tformat:    \"time\",\n\t\t\tinputData: \"Fri, 05 May 2022 15:04:05 GMT+0200\",\n\t\t\texpect:    \"error:\",\n\t\t\terrorCase: true,\n\t\t\tverifyErr: strings.HasPrefix,\n\t\t},\n\t} {\n\t\tif actual, err := tplContext.funcHumanize(test.format, 
test.inputData); !test.verifyErr(actual, test.expect) {\n\t\t\tif !test.errorCase {\n\t\t\t\tt.Errorf(\"Test %d: Expected '%s' but got '%s'\", i, test.expect, actual)\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Errorf(\"Test %d: error: %s\", i, err.Error())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc getContextOrFail(t *testing.T) TemplateContext {\n\ttplContext, err := initTestContext()\n\tt.Cleanup(func() {\n\t\tos.RemoveAll(string(tplContext.Root.(http.Dir)))\n\t})\n\tif err != nil {\n\t\tt.Fatalf(\"failed to prepare test context: %v\", err)\n\t}\n\treturn tplContext\n}\n\nfunc initTestContext() (TemplateContext, error) {\n\tbody := bytes.NewBufferString(\"request body\")\n\trequest, err := http.NewRequest(\"GET\", \"https://example.com/foo/bar\", body)\n\tif err != nil {\n\t\treturn TemplateContext{}, err\n\t}\n\ttmpDir, err := os.MkdirTemp(os.TempDir(), \"caddy\")\n\tif err != nil {\n\t\treturn TemplateContext{}, err\n\t}\n\treturn TemplateContext{\n\t\tRoot:       http.Dir(tmpDir),\n\t\tReq:        request,\n\t\tRespHeader: WrappedHeader{make(http.Header)},\n\t}, nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/tracing/module.go",
    "content": "package tracing\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Tracing{})\n\thttpcaddyfile.RegisterHandlerDirective(\"tracing\", parseCaddyfile)\n}\n\n// Tracing implements an HTTP handler that adds support for distributed tracing,\n// using OpenTelemetry. This module is responsible for the injection and\n// propagation of the trace context. Configure this module via environment\n// variables (see https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md).\n// Some values can be overwritten in the configuration file.\ntype Tracing struct {\n\t// SpanName is a span name. It should follow the naming guidelines here:\n\t// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#span\n\tSpanName string `json:\"span\"`\n\n\t// SpanAttributes are custom key-value pairs to be added to spans\n\tSpanAttributes map[string]string `json:\"span_attributes,omitempty\"`\n\n\t// otel implements opentelemetry related logic.\n\totel openTelemetryWrapper\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Tracing) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.tracing\",\n\t\tNew: func() caddy.Module { return new(Tracing) },\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (ot *Tracing) Provision(ctx caddy.Context) error {\n\tot.logger = ctx.Logger()\n\n\tvar err error\n\tot.otel, err = newOpenTelemetryWrapper(ctx, ot.SpanName, ot.SpanAttributes)\n\n\treturn err\n}\n\n// Cleanup implements caddy.CleanerUpper and closes any idle connections. 
It\n// calls the Shutdown method of the trace provider: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#shutdown.\nfunc (ot *Tracing) Cleanup() error {\n\tif err := ot.otel.cleanup(ot.logger); err != nil {\n\t\treturn fmt.Errorf(\"tracerProvider shutdown: %w\", err)\n\t}\n\treturn nil\n}\n\n// ServeHTTP implements caddyhttp.MiddlewareHandler.\nfunc (ot *Tracing) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\treturn ot.otel.ServeHTTP(w, r, next)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. Syntax:\n//\n//\ttracing {\n//\t\t[span <span_name>]\n//\t\t[span_attributes {\n//\t\t\tattr1 value1\n//\t\t\tattr2 value2\n//\t\t}]\n//\t}\nfunc (ot *Tracing) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tsetParameter := func(d *caddyfile.Dispenser, val *string) error {\n\t\tif d.NextArg() {\n\t\t\t*val = d.Val()\n\t\t} else {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tif d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\treturn nil\n\t}\n\n\t// paramsMap maps a \"string\" parameter from the Caddyfile to its destination within the module\n\tparamsMap := map[string]*string{\n\t\t\"span\": &ot.SpanName,\n\t}\n\n\td.Next() // consume directive name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"span_attributes\":\n\t\t\tif ot.SpanAttributes == nil {\n\t\t\t\tot.SpanAttributes = make(map[string]string)\n\t\t\t}\n\t\t\tfor d.NextBlock(1) {\n\t\t\t\tkey := d.Val()\n\t\t\t\tif !d.NextArg() {\n\t\t\t\t\treturn d.ArgErr()\n\t\t\t\t}\n\t\t\t\tvalue := d.Val()\n\t\t\t\tif d.NextArg() {\n\t\t\t\t\treturn d.ArgErr()\n\t\t\t\t}\n\t\t\t\tot.SpanAttributes[key] = value\n\t\t\t}\n\t\tdefault:\n\t\t\tif dst, ok := paramsMap[d.Val()]; ok {\n\t\t\t\tif err := setParameter(d, dst); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc 
parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tvar m Tracing\n\terr := m.UnmarshalCaddyfile(h.Dispenser)\n\treturn &m, err\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Tracing)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Tracing)(nil)\n\t_ caddyfile.Unmarshaler       = (*Tracing)(nil)\n)\n"
  },
  {
    "path": "modules/caddyhttp/tracing/module_test.go",
    "content": "package tracing\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"go.opentelemetry.io/otel/sdk/trace\"\n\t\"go.opentelemetry.io/otel/sdk/trace/tracetest\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc TestTracing_UnmarshalCaddyfile(t *testing.T) {\n\ttests := []struct {\n\t\tname           string\n\t\tspanName       string\n\t\tspanAttributes map[string]string\n\t\td              *caddyfile.Dispenser\n\t\twantErr        bool\n\t}{\n\t\t{\n\t\t\tname:     \"Full config\",\n\t\t\tspanName: \"my-span\",\n\t\t\tspanAttributes: map[string]string{\n\t\t\t\t\"attr1\": \"value1\",\n\t\t\t\t\"attr2\": \"value2\",\n\t\t\t},\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n\tspan my-span\n\tspan_attributes {\n\t\tattr1 value1\n\t\tattr2 value2\n\t}\n}`),\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:     \"Only span name in the config\",\n\t\t\tspanName: \"my-span\",\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n\tspan my-span\n}`),\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Empty config\",\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n}`),\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Only span attributes\",\n\t\t\tspanAttributes: map[string]string{\n\t\t\t\t\"service.name\":    \"my-service\",\n\t\t\t\t\"service.version\": \"1.0.0\",\n\t\t\t},\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n\tspan_attributes {\n\t\tservice.name my-service\n\t\tservice.version 1.0.0\n\t}\n}`),\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tot := &Tracing{}\n\t\t\tif err := ot.UnmarshalCaddyfile(tt.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"UnmarshalCaddyfile() error = %v, wantErrType %v\", err, tt.wantErr)\n\t\t\t}\n\n\t\t\tif ot.SpanName != tt.spanName 
{\n\t\t\t\tt.Errorf(\"UnmarshalCaddyfile() SpanName = %v, want SpanName %v\", ot.SpanName, tt.spanName)\n\t\t\t}\n\n\t\t\tif len(tt.spanAttributes) > 0 {\n\t\t\t\tif ot.SpanAttributes == nil {\n\t\t\t\t\tt.Errorf(\"UnmarshalCaddyfile() SpanAttributes is nil, expected %v\", tt.spanAttributes)\n\t\t\t\t} else {\n\t\t\t\t\tfor key, expectedValue := range tt.spanAttributes {\n\t\t\t\t\t\tif actualValue, exists := ot.SpanAttributes[key]; !exists {\n\t\t\t\t\t\t\tt.Errorf(\"UnmarshalCaddyfile() SpanAttributes missing key %v\", key)\n\t\t\t\t\t\t} else if actualValue != expectedValue {\n\t\t\t\t\t\t\tt.Errorf(\"UnmarshalCaddyfile() SpanAttributes[%v] = %v, want %v\", key, actualValue, expectedValue)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestTracing_UnmarshalCaddyfile_Error(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\td       *caddyfile.Dispenser\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"Unknown parameter\",\n\t\t\td: caddyfile.NewTestDispenser(`\n\t\ttracing {\n\t\t\tfoo bar\n\t\t}`),\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Missed argument\",\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n\tspan\n}`),\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Span attributes missing value\",\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n\tspan_attributes {\n\t\tkey\n\t}\n}`),\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Span attributes too many arguments\",\n\t\t\td: caddyfile.NewTestDispenser(`\ntracing {\n\tspan_attributes {\n\t\tkey value extra\n\t}\n}`),\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tot := &Tracing{}\n\t\t\tif err := ot.UnmarshalCaddyfile(tt.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"UnmarshalCaddyfile() error = %v, wantErrType %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestTracing_ServeHTTP_Propagation_Without_Initial_Headers(t *testing.T) {\n\tot := &Tracing{\n\t\tSpanName: 
\"mySpan\",\n\t}\n\n\treq := createRequestWithContext(\"GET\", \"https://example.com/foo\")\n\tw := httptest.NewRecorder()\n\n\tvar handler caddyhttp.HandlerFunc = func(writer http.ResponseWriter, request *http.Request) error {\n\t\ttraceparent := request.Header.Get(\"Traceparent\")\n\t\tif traceparent == \"\" || strings.HasPrefix(traceparent, \"00-00000000000000000000000000000000-0000000000000000\") {\n\t\t\tt.Errorf(\"Invalid traceparent: %v\", traceparent)\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\tif err := ot.Provision(ctx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tif err := ot.ServeHTTP(w, req, handler); err != nil {\n\t\tt.Errorf(\"ServeHTTP error: %v\", err)\n\t}\n}\n\nfunc TestTracing_ServeHTTP_Propagation_With_Initial_Headers(t *testing.T) {\n\tot := &Tracing{\n\t\tSpanName: \"mySpan\",\n\t}\n\n\treq := createRequestWithContext(\"GET\", \"https://example.com/foo\")\n\treq.Header.Set(\"traceparent\", \"00-11111111111111111111111111111111-1111111111111111-01\")\n\tw := httptest.NewRecorder()\n\n\tvar handler caddyhttp.HandlerFunc = func(writer http.ResponseWriter, request *http.Request) error {\n\t\ttraceparent := request.Header.Get(\"Traceparent\")\n\t\tif !strings.HasPrefix(traceparent, \"00-11111111111111111111111111111111\") {\n\t\t\tt.Errorf(\"Invalid traceparent: %v\", traceparent)\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\tif err := ot.Provision(ctx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tif err := ot.ServeHTTP(w, req, handler); err != nil {\n\t\tt.Errorf(\"ServeHTTP error: %v\", err)\n\t}\n}\n\nfunc TestTracing_ServeHTTP_Next_Error(t *testing.T) {\n\tot := &Tracing{\n\t\tSpanName: \"mySpan\",\n\t}\n\n\treq := createRequestWithContext(\"GET\", \"https://example.com/foo\")\n\tw := 
httptest.NewRecorder()\n\n\texpectErr := errors.New(\"test error\")\n\n\tvar handler caddyhttp.HandlerFunc = func(writer http.ResponseWriter, request *http.Request) error {\n\t\treturn expectErr\n\t}\n\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\tif err := ot.Provision(ctx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tif err := ot.ServeHTTP(w, req, handler); err == nil || !errors.Is(err, expectErr) {\n\t\tt.Errorf(\"expected error, got: %v\", err)\n\t}\n}\n\nfunc TestTracing_JSON_Configuration(t *testing.T) {\n\t// Test that our struct correctly marshals to and from JSON\n\toriginal := &Tracing{\n\t\tSpanName: \"test-span\",\n\t\tSpanAttributes: map[string]string{\n\t\t\t\"service.name\":    \"test-service\",\n\t\t\t\"service.version\": \"1.0.0\",\n\t\t\t\"env\":             \"test\",\n\t\t},\n\t}\n\n\tjsonData, err := json.Marshal(original)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to marshal to JSON: %v\", err)\n\t}\n\n\tvar unmarshaled Tracing\n\tif err := json.Unmarshal(jsonData, &unmarshaled); err != nil {\n\t\tt.Fatalf(\"Failed to unmarshal from JSON: %v\", err)\n\t}\n\n\tif unmarshaled.SpanName != original.SpanName {\n\t\tt.Errorf(\"Expected SpanName %s, got %s\", original.SpanName, unmarshaled.SpanName)\n\t}\n\n\tif len(unmarshaled.SpanAttributes) != len(original.SpanAttributes) {\n\t\tt.Errorf(\"Expected %d span attributes, got %d\", len(original.SpanAttributes), len(unmarshaled.SpanAttributes))\n\t}\n\n\tfor key, expectedValue := range original.SpanAttributes {\n\t\tif actualValue, exists := unmarshaled.SpanAttributes[key]; !exists {\n\t\t\tt.Errorf(\"Expected span attribute %s to exist\", key)\n\t\t} else if actualValue != expectedValue {\n\t\t\tt.Errorf(\"Expected span attribute %s = %s, got %s\", key, expectedValue, actualValue)\n\t\t}\n\t}\n\n\tt.Logf(\"JSON representation: %s\", string(jsonData))\n}\n\nfunc TestTracing_OpenTelemetry_Span_Attributes(t 
*testing.T) {\n\t// Create an in-memory span recorder to capture actual span data\n\tspanRecorder := tracetest.NewSpanRecorder()\n\tprovider := trace.NewTracerProvider(\n\t\ttrace.WithSpanProcessor(spanRecorder),\n\t)\n\n\t// Create our tracing module with span attributes that include placeholders\n\tot := &Tracing{\n\t\tSpanName: \"test-span\",\n\t\tSpanAttributes: map[string]string{\n\t\t\t\"static\":               \"test-service\",\n\t\t\t\"request-placeholder\":  \"{http.request.method}\",\n\t\t\t\"response-placeholder\": \"{http.response.header.X-Some-Header}\",\n\t\t\t\"mixed\":                \"prefix-{http.request.method}-{http.response.header.X-Some-Header}\",\n\t\t},\n\t}\n\n\t// Create a specific request to test against\n\treq, _ := http.NewRequest(\"POST\", \"https://api.example.com/v1/users?id=123\", nil)\n\treq.Host = \"api.example.com\"\n\n\tw := httptest.NewRecorder()\n\n\t// Set up the replacer\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(req.Context(), caddy.ReplacerCtxKey, repl)\n\tctx = context.WithValue(ctx, caddyhttp.VarsCtxKey, make(map[string]any))\n\treq = req.WithContext(ctx)\n\n\t// Set up request placeholders\n\trepl.Set(\"http.request.method\", req.Method)\n\trepl.Set(\"http.request.uri\", req.URL.RequestURI())\n\n\t// Handler to generate the response\n\tvar handler caddyhttp.HandlerFunc = func(writer http.ResponseWriter, request *http.Request) error {\n\t\twriter.Header().Set(\"X-Some-Header\", \"some-value\")\n\t\twriter.WriteHeader(200)\n\n\t\t// Make response headers available to replacer\n\t\trepl.Set(\"http.response.header.X-Some-Header\", writer.Header().Get(\"X-Some-Header\"))\n\n\t\treturn nil\n\t}\n\n\t// Set up Caddy context\n\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\t// Override the global tracer provider with our test provider\n\t// This is a bit hacky but necessary to capture the actual spans\n\toriginalProvider := 
globalTracerProvider\n\tglobalTracerProvider = &tracerProvider{\n\t\ttracerProvider:         provider,\n\t\ttracerProvidersCounter: 1, // Simulate one user\n\t}\n\tdefer func() {\n\t\tglobalTracerProvider = originalProvider\n\t}()\n\n\t// Provision the tracing module\n\tif err := ot.Provision(caddyCtx); err != nil {\n\t\tt.Errorf(\"Provision error: %v\", err)\n\t\tt.FailNow()\n\t}\n\n\t// Execute the request\n\tif err := ot.ServeHTTP(w, req, handler); err != nil {\n\t\tt.Errorf(\"ServeHTTP error: %v\", err)\n\t}\n\n\t// Get the recorded spans\n\tspans := spanRecorder.Ended()\n\tif len(spans) == 0 {\n\t\tt.Fatal(\"Expected at least one span to be recorded\")\n\t}\n\n\t// Find our span (should be the one with our test span name)\n\tvar testSpan trace.ReadOnlySpan\n\tfor _, span := range spans {\n\t\tif span.Name() == \"test-span\" {\n\t\t\ttestSpan = span\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif testSpan == nil {\n\t\tt.Fatal(\"Could not find test span in recorded spans\")\n\t}\n\n\t// Verify that the span attributes were set correctly with placeholder replacement\n\texpectedAttributes := map[string]string{\n\t\t\"static\":               \"test-service\",\n\t\t\"request-placeholder\":  \"POST\",\n\t\t\"response-placeholder\": \"some-value\",\n\t\t\"mixed\":                \"prefix-POST-some-value\",\n\t}\n\n\tactualAttributes := make(map[string]string)\n\tfor _, attr := range testSpan.Attributes() {\n\t\tactualAttributes[string(attr.Key)] = attr.Value.AsString()\n\t}\n\n\tfor key, expectedValue := range expectedAttributes {\n\t\tif actualValue, exists := actualAttributes[key]; !exists {\n\t\t\tt.Errorf(\"Expected span attribute %s to be set\", key)\n\t\t} else if actualValue != expectedValue {\n\t\t\tt.Errorf(\"Expected span attribute %s = %s, got %s\", key, expectedValue, actualValue)\n\t\t}\n\t}\n\n\tt.Logf(\"Recorded span attributes: %+v\", actualAttributes)\n}\n\nfunc createRequestWithContext(method string, url string) *http.Request {\n\tr, _ := http.NewRequest(method, 
url, nil)\n\trepl := caddy.NewReplacer()\n\tctx := context.WithValue(r.Context(), caddy.ReplacerCtxKey, repl)\n\tr = r.WithContext(ctx)\n\treturn r\n}\n"
  },
  {
    "path": "modules/caddyhttp/tracing/tracer.go",
    "content": "package tracing\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"go.opentelemetry.io/contrib/exporters/autoexport\"\n\t\"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp\"\n\t\"go.opentelemetry.io/contrib/propagators/autoprop\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/propagation\"\n\t\"go.opentelemetry.io/otel/sdk/resource\"\n\tsdktrace \"go.opentelemetry.io/otel/sdk/trace\"\n\tsemconv \"go.opentelemetry.io/otel/semconv/v1.17.0\"\n\t\"go.opentelemetry.io/otel/trace\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nconst (\n\twebEngineName                = \"Caddy\"\n\tdefaultSpanName              = \"handler\"\n\tnextCallCtxKey  caddy.CtxKey = \"nextCall\"\n)\n\n// nextCall store the next handler, and the error value return on calling it (if any)\ntype nextCall struct {\n\tnext caddyhttp.Handler\n\terr  error\n}\n\n// openTelemetryWrapper is responsible for the tracing injection, extraction and propagation.\ntype openTelemetryWrapper struct {\n\tpropagators propagation.TextMapPropagator\n\n\thandler http.Handler\n\n\tspanName       string\n\tspanAttributes map[string]string\n}\n\n// newOpenTelemetryWrapper is responsible for the openTelemetryWrapper initialization using provided configuration.\nfunc newOpenTelemetryWrapper(\n\tctx context.Context,\n\tspanName string,\n\tspanAttributes map[string]string,\n) (openTelemetryWrapper, error) {\n\tif spanName == \"\" {\n\t\tspanName = defaultSpanName\n\t}\n\n\tot := openTelemetryWrapper{\n\t\tspanName:       spanName,\n\t\tspanAttributes: spanAttributes,\n\t}\n\n\tversion, _ := caddy.Version()\n\tres, err := ot.newResource(webEngineName, version)\n\tif err != nil {\n\t\treturn ot, fmt.Errorf(\"creating resource error: %w\", err)\n\t}\n\n\ttraceExporter, err := autoexport.NewSpanExporter(ctx)\n\tif err != nil {\n\t\treturn ot, fmt.Errorf(\"creating trace exporter error: 
%w\", err)\n\t}\n\n\tot.propagators = autoprop.NewTextMapPropagator()\n\n\ttracerProvider := globalTracerProvider.getTracerProvider(\n\t\tsdktrace.WithBatcher(traceExporter),\n\t\tsdktrace.WithResource(res),\n\t)\n\n\tot.handler = otelhttp.NewHandler(http.HandlerFunc(ot.serveHTTP),\n\t\tot.spanName,\n\t\totelhttp.WithTracerProvider(tracerProvider),\n\t\totelhttp.WithPropagators(ot.propagators),\n\t\totelhttp.WithSpanNameFormatter(ot.spanNameFormatter),\n\t)\n\n\treturn ot, nil\n}\n\n// serveHTTP injects a tracing context and call the next handler.\nfunc (ot *openTelemetryWrapper) serveHTTP(w http.ResponseWriter, r *http.Request) {\n\tctx := r.Context()\n\tot.propagators.Inject(ctx, propagation.HeaderCarrier(r.Header))\n\tspanCtx := trace.SpanContextFromContext(ctx)\n\tif spanCtx.IsValid() {\n\t\ttraceID := spanCtx.TraceID().String()\n\t\tspanID := spanCtx.SpanID().String()\n\t\t// Add a trace_id placeholder, accessible via `{http.vars.trace_id}`.\n\t\tcaddyhttp.SetVar(ctx, \"trace_id\", traceID)\n\t\t// Add a span_id placeholder, accessible via `{http.vars.span_id}`.\n\t\tcaddyhttp.SetVar(ctx, \"span_id\", spanID)\n\t\t// Add the traceID and spanID to the log fields for the request.\n\t\tif extra, ok := ctx.Value(caddyhttp.ExtraLogFieldsCtxKey).(*caddyhttp.ExtraLogFields); ok {\n\t\t\textra.Add(zap.String(\"traceID\", traceID))\n\t\t\textra.Add(zap.String(\"spanID\", spanID))\n\t\t}\n\t}\n\n\tnext := ctx.Value(nextCallCtxKey).(*nextCall)\n\tnext.err = next.next.ServeHTTP(w, r)\n\n\t// Add custom span attributes to the current span\n\tspan := trace.SpanFromContext(ctx)\n\tif span.IsRecording() && len(ot.spanAttributes) > 0 {\n\t\treplacer := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\t\tattributes := make([]attribute.KeyValue, 0, len(ot.spanAttributes))\n\t\tfor key, value := range ot.spanAttributes {\n\t\t\t// Allow placeholder replacement in attribute values\n\t\t\treplacedValue := replacer.ReplaceAll(value, \"\")\n\t\t\tattributes = append(attributes, 
attribute.String(key, replacedValue))\n\t\t}\n\t\tspan.SetAttributes(attributes...)\n\t}\n}\n\n// ServeHTTP propagates the call to the next handler, which is wrapped by `otelhttp`.\nfunc (ot *openTelemetryWrapper) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tn := &nextCall{\n\t\tnext: next,\n\t\terr:  nil,\n\t}\n\tot.handler.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), nextCallCtxKey, n)))\n\n\treturn n.err\n}\n\n// cleanup flushes all remaining data and shuts down the tracerProvider\nfunc (ot *openTelemetryWrapper) cleanup(logger *zap.Logger) error {\n\treturn globalTracerProvider.cleanupTracerProvider(logger)\n}\n\n// newResource creates a resource that describes the current handler instance and merges it with the default attribute values.\nfunc (ot *openTelemetryWrapper) newResource(\n\twebEngineName,\n\twebEngineVersion string,\n) (*resource.Resource, error) {\n\treturn resource.Merge(resource.Default(), resource.NewSchemaless(\n\t\tsemconv.WebEngineName(webEngineName),\n\t\tsemconv.WebEngineVersion(webEngineVersion),\n\t))\n}\n\n// spanNameFormatter performs the replacement of placeholders in the span name\nfunc (ot *openTelemetryWrapper) spanNameFormatter(operation string, r *http.Request) string {\n\treturn r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer).ReplaceAll(operation, \"\")\n}\n"
  },
  {
    "path": "modules/caddyhttp/tracing/tracer_test.go",
    "content": "package tracing\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestOpenTelemetryWrapper_newOpenTelemetryWrapper(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\tvar otw openTelemetryWrapper\n\tvar err error\n\n\tif otw, err = newOpenTelemetryWrapper(ctx,\n\t\t\"\",\n\t\tnil,\n\t); err != nil {\n\t\tt.Errorf(\"newOpenTelemetryWrapper() error = %v\", err)\n\t\tt.FailNow()\n\t}\n\n\tif otw.propagators == nil {\n\t\tt.Errorf(\"Propagators should not be empty\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/tracing/tracerprovider.go",
    "content": "package tracing\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"sync\"\n\n\tsdktrace \"go.opentelemetry.io/otel/sdk/trace\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n)\n\n// globalTracerProvider stores the global tracer provider and is responsible for graceful shutdown when nobody is using it.\nvar globalTracerProvider = &tracerProvider{}\n\ntype tracerProvider struct {\n\tmu                     sync.Mutex\n\ttracerProvider         *sdktrace.TracerProvider\n\ttracerProvidersCounter int\n}\n\n// getTracerProvider creates or returns the existing global TracerProvider\nfunc (t *tracerProvider) getTracerProvider(opts ...sdktrace.TracerProviderOption) *sdktrace.TracerProvider {\n\tt.mu.Lock()\n\tdefer t.mu.Unlock()\n\n\tt.tracerProvidersCounter++\n\n\tif t.tracerProvider == nil {\n\t\tt.tracerProvider = sdktrace.NewTracerProvider(\n\t\t\topts...,\n\t\t)\n\t}\n\n\treturn t.tracerProvider\n}\n\n// cleanupTracerProvider gracefully shuts down the TracerProvider\nfunc (t *tracerProvider) cleanupTracerProvider(logger *zap.Logger) error {\n\tt.mu.Lock()\n\tdefer t.mu.Unlock()\n\n\tif t.tracerProvidersCounter > 0 {\n\t\tt.tracerProvidersCounter--\n\t}\n\n\tif t.tracerProvidersCounter == 0 {\n\t\tif t.tracerProvider != nil {\n\t\t\t// tracerProvider.ForceFlush SHOULD be invoked according to https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#forceflush\n\t\t\tif err := t.tracerProvider.ForceFlush(context.Background()); err != nil {\n\t\t\t\tif c := logger.Check(zapcore.ErrorLevel, \"forcing flush\"); c != nil {\n\t\t\t\t\tc.Write(zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// tracerProvider.Shutdown MUST be invoked according to https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#shutdown\n\t\t\tif err := t.tracerProvider.Shutdown(context.Background()); err != nil {\n\t\t\t\treturn fmt.Errorf(\"tracerProvider shutdown error: %w\", 
err)\n\t\t\t}\n\t\t}\n\n\t\tt.tracerProvider = nil\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "modules/caddyhttp/tracing/tracerprovider_test.go",
    "content": "package tracing\n\nimport (\n\t\"testing\"\n\n\t\"go.uber.org/zap\"\n)\n\nfunc Test_tracersProvider_getTracerProvider(t *testing.T) {\n\ttp := tracerProvider{}\n\n\ttp.getTracerProvider()\n\ttp.getTracerProvider()\n\n\tif tp.tracerProvider == nil {\n\t\tt.Errorf(\"There should be a tracer provider\")\n\t}\n\n\tif tp.tracerProvidersCounter != 2 {\n\t\tt.Errorf(\"Tracer providers counter should equal 2\")\n\t}\n}\n\nfunc Test_tracersProvider_cleanupTracerProvider(t *testing.T) {\n\ttp := tracerProvider{}\n\n\ttp.getTracerProvider()\n\ttp.getTracerProvider()\n\n\terr := tp.cleanupTracerProvider(zap.NewNop())\n\tif err != nil {\n\t\tt.Errorf(\"There should be no error: %v\", err)\n\t}\n\n\tif tp.tracerProvider == nil {\n\t\tt.Errorf(\"There should be a tracer provider\")\n\t}\n\n\tif tp.tracerProvidersCounter != 1 {\n\t\tt.Errorf(\"Tracer providers counter should equal 1\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddyhttp/vars.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddyhttp\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"reflect\"\n\t\"strings\"\n\n\t\"github.com/google/cel-go/cel\"\n\t\"github.com/google/cel-go/common/types/ref\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nvar stringSliceType = reflect.TypeFor[[]string]()\n\nfunc init() {\n\tcaddy.RegisterModule(VarsMiddleware{})\n\tcaddy.RegisterModule(VarsMatcher{})\n\tcaddy.RegisterModule(MatchVarsRE{})\n}\n\n// VarsMiddleware is an HTTP middleware which sets variables to\n// have values that can be used in the HTTP request handler\n// chain. The primary way to access variables is with placeholders,\n// which have the form: `{http.vars.variable_name}`, or with\n// the `vars` and `vars_regexp` request matchers.\n//\n// The key is the variable name, and the value is the value of the\n// variable. 
Both the name and value may use or contain placeholders.\ntype VarsMiddleware map[string]any\n\n// CaddyModule returns the Caddy module information.\nfunc (VarsMiddleware) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.vars\",\n\t\tNew: func() caddy.Module { return new(VarsMiddleware) },\n\t}\n}\n\nfunc (m VarsMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Request, next Handler) error {\n\tvars := r.Context().Value(VarsCtxKey).(map[string]any)\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tfor k, v := range m {\n\t\tkeyExpanded := repl.ReplaceAll(k, \"\")\n\t\tif valStr, ok := v.(string); ok {\n\t\t\tv = repl.ReplaceAll(valStr, \"\")\n\t\t}\n\t\tvars[keyExpanded] = v\n\n\t\t// Special case: the user ID is in the replacer, pulled from there\n\t\t// for access logs. Allow users to override it with the vars handler.\n\t\tif keyExpanded == \"http.auth.user.id\" {\n\t\t\trepl.Set(keyExpanded, v)\n\t\t}\n\t}\n\treturn next.ServeHTTP(w, r)\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler. 
Syntax:\n//\n//\tvars [<name> <val>] {\n//\t    <name> <val>\n//\t    ...\n//\t}\nfunc (m *VarsMiddleware) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\n\tif *m == nil {\n\t\t*m = make(VarsMiddleware)\n\t}\n\n\tnextVar := func(headerLine bool) error {\n\t\tif headerLine {\n\t\t\t// header line is optional\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t\tvarName := d.Val()\n\n\t\tif !d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tvarValue := d.ScalarVal()\n\n\t\t(*m)[varName] = varValue\n\n\t\tif d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\treturn nil\n\t}\n\n\tif err := nextVar(true); err != nil {\n\t\treturn err\n\t}\n\tfor d.NextBlock(0) {\n\t\tif err := nextVar(false); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// VarsMatcher is an HTTP request matcher which can match\n// requests based on variables in the context or placeholder\n// values. The key is the placeholder or name of the variable,\n// and the values are possible values the variable can be in\n// order to match (logical OR'ed).\n//\n// If the key is surrounded by `{ }` it is assumed to be a\n// placeholder. 
Otherwise, it will be considered a variable\n// name.\n//\n// Placeholders in the keys are not expanded, but\n// placeholders in the values are.\ntype VarsMatcher map[string][]string\n\n// CaddyModule returns the Caddy module information.\nfunc (VarsMatcher) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.vars\",\n\t\tNew: func() caddy.Module { return new(VarsMatcher) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *VarsMatcher) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif *m == nil {\n\t\t*m = make(map[string][]string)\n\t}\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tvar field string\n\t\tif !d.Args(&field) {\n\t\t\treturn d.Errf(\"malformed vars matcher: expected field name\")\n\t\t}\n\t\tvals := d.RemainingArgs()\n\t\tif len(vals) == 0 {\n\t\t\treturn d.Errf(\"malformed vars matcher: expected at least one value to match against\")\n\t\t}\n\t\t(*m)[field] = append((*m)[field], vals...)\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed vars matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Match matches a request based on variables in the context,\n// or placeholders if the key is not a variable.\nfunc (m VarsMatcher) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m VarsMatcher) MatchWithError(r *http.Request) (bool, error) {\n\tif len(m) == 0 {\n\t\treturn true, nil\n\t}\n\n\tvars := r.Context().Value(VarsCtxKey).(map[string]any)\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tvar fromPlaceholder bool\n\tvar matcherValExpanded, valExpanded, varStr, v string\n\tvar varValue any\n\tfor key, vals := range m {\n\t\tif strings.HasPrefix(key, \"{\") &&\n\t\t\tstrings.HasSuffix(key, \"}\") &&\n\t\t\tstrings.Count(key, \"{\") == 1 {\n\t\t\tvarValue, _ = repl.Get(strings.Trim(key, \"{}\"))\n\t\t\tfromPlaceholder = 
true\n\t\t} else {\n\t\t\tvarValue = vars[key]\n\t\t\tfromPlaceholder = false\n\t\t}\n\n\t\tswitch vv := varValue.(type) {\n\t\tcase string:\n\t\t\tvarStr = vv\n\t\tcase fmt.Stringer:\n\t\t\tvarStr = vv.String()\n\t\tcase error:\n\t\t\tvarStr = vv.Error()\n\t\tcase nil:\n\t\t\tvarStr = \"\"\n\t\tdefault:\n\t\t\tvarStr = fmt.Sprintf(\"%v\", vv)\n\t\t}\n\n\t\t// Only expand placeholders in values from literal variable names\n\t\t// (e.g. map outputs). Values resolved from placeholder keys are\n\t\t// already final and must not be re-expanded, as that would allow\n\t\t// user input like {env.SECRET} to be evaluated.\n\t\tvalExpanded = varStr\n\t\tif !fromPlaceholder {\n\t\t\tvalExpanded = repl.ReplaceAll(varStr, \"\")\n\t\t}\n\n\t\t// see if any of the values given in the matcher match the actual value\n\t\tfor _, v = range vals {\n\t\t\tmatcherValExpanded = repl.ReplaceAll(v, \"\")\n\t\t\tif valExpanded == matcherValExpanded {\n\t\t\t\treturn true, nil\n\t\t\t}\n\t\t}\n\t}\n\treturn false, nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression vars({'{magic_number}': ['3', '5']})\n//\texpression vars({'{foo}': 'single_value'})\nfunc (VarsMatcher) CELLibrary(_ caddy.Context) (cel.Library, error) {\n\treturn CELMatcherImpl(\n\t\t\"vars\",\n\t\t\"vars_matcher_request_map\",\n\t\t[]*cel.Type{CELTypeJSON},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\tmapStrListStr, err := CELValueToMapStrList(data)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\treturn VarsMatcher(mapStrListStr), nil\n\t\t},\n\t)\n}\n\n// MatchVarsRE matches the value of the context variables by a given regular expression.\n//\n// Upon a match, it adds placeholders to the request: `{http.regexp.name.capture_group}`\n// where `name` is the regular expression's name, and `capture_group` is either\n// the named or positional capture group from the expression itself. 
If no name\n// is given, then the placeholder omits the name: `{http.regexp.capture_group}`\n// (potentially leading to collisions).\ntype MatchVarsRE map[string]*MatchRegexp\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchVarsRE) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.matchers.vars_regexp\",\n\t\tNew: func() caddy.Module { return new(MatchVarsRE) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (m *MatchVarsRE) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif *m == nil {\n\t\t*m = make(map[string]*MatchRegexp)\n\t}\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\tvar first, second, third string\n\t\tif !d.Args(&first, &second) {\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\tvar name, field, val string\n\t\tif d.Args(&third) {\n\t\t\tname = first\n\t\t\tfield = second\n\t\t\tval = third\n\t\t} else {\n\t\t\tfield = first\n\t\t\tval = second\n\t\t}\n\n\t\t// Default to the named matcher's name, if no regexp name is provided\n\t\tif name == \"\" {\n\t\t\tname = d.GetContextString(caddyfile.MatcherNameCtxKey)\n\t\t}\n\n\t\t(*m)[field] = &MatchRegexp{Pattern: val, Name: name}\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed vars_regexp matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Provision compiles m's regular expressions.\nfunc (m MatchVarsRE) Provision(ctx caddy.Context) error {\n\tfor _, rm := range m {\n\t\terr := rm.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// Match returns true if r matches m.\nfunc (m MatchVarsRE) Match(r *http.Request) bool {\n\tmatch, _ := m.MatchWithError(r)\n\treturn match\n}\n\n// MatchWithError returns true if r matches m.\nfunc (m MatchVarsRE) MatchWithError(r *http.Request) (bool, error) {\n\tvars := r.Context().Value(VarsCtxKey).(map[string]any)\n\trepl := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\n\tvar fromPlaceholder, match 
bool\n\tvar valExpanded, varStr string\n\tvar varValue any\n\tfor key, val := range m {\n\t\tif strings.HasPrefix(key, \"{\") &&\n\t\t\tstrings.HasSuffix(key, \"}\") &&\n\t\t\tstrings.Count(key, \"{\") == 1 {\n\t\t\tvarValue, _ = repl.Get(strings.Trim(key, \"{}\"))\n\t\t\tfromPlaceholder = true\n\t\t} else {\n\t\t\tvarValue = vars[key]\n\t\t\tfromPlaceholder = false\n\t\t}\n\n\t\tswitch vv := varValue.(type) {\n\t\tcase string:\n\t\t\tvarStr = vv\n\t\tcase fmt.Stringer:\n\t\t\tvarStr = vv.String()\n\t\tcase error:\n\t\t\tvarStr = vv.Error()\n\t\tcase nil:\n\t\t\tvarStr = \"\"\n\t\tdefault:\n\t\t\tvarStr = fmt.Sprintf(\"%v\", vv)\n\t\t}\n\n\t\t// Only expand placeholders in values from literal variable names\n\t\t// (e.g. map outputs). Values resolved from placeholder keys are\n\t\t// already final and must not be re-expanded, as that would allow\n\t\t// user input like {env.SECRET} to be evaluated.\n\t\tvalExpanded = varStr\n\t\tif !fromPlaceholder {\n\t\t\tvalExpanded = repl.ReplaceAll(varStr, \"\")\n\t\t}\n\t\tif match = val.Match(valExpanded, repl); match {\n\t\t\treturn match, nil\n\t\t}\n\t}\n\treturn false, nil\n}\n\n// CELLibrary produces options that expose this matcher for use in CEL\n// expression matchers.\n//\n// Example:\n//\n//\texpression vars_regexp('foo', '{magic_number}', '[0-9]+')\n//\texpression vars_regexp('{magic_number}', '[0-9]+')\nfunc (MatchVarsRE) CELLibrary(ctx caddy.Context) (cel.Library, error) {\n\tunnamedPattern, err := CELMatcherImpl(\n\t\t\"vars_regexp\",\n\t\t\"vars_regexp_request_string_string\",\n\t\t[]*cel.Type{cel.StringType, cel.StringType},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tparams, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tstrParams := params.([]string)\n\t\t\tmatcher := MatchVarsRE{}\n\t\t\tmatcher[strParams[0]] = &MatchRegexp{\n\t\t\t\tPattern: strParams[1],\n\t\t\t\tName:    
ctx.Value(MatcherNameCtxKey).(string),\n\t\t\t}\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnamedPattern, err := CELMatcherImpl(\n\t\t\"vars_regexp\",\n\t\t\"vars_regexp_request_string_string_string\",\n\t\t[]*cel.Type{cel.StringType, cel.StringType, cel.StringType},\n\t\tfunc(data ref.Val) (RequestMatcherWithError, error) {\n\t\t\trefStringList := stringSliceType\n\t\t\tparams, err := data.ConvertToNative(refStringList)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tstrParams := params.([]string)\n\t\t\tname := strParams[0]\n\t\t\tif name == \"\" {\n\t\t\t\tname = ctx.Value(MatcherNameCtxKey).(string)\n\t\t\t}\n\t\t\tmatcher := MatchVarsRE{}\n\t\t\tmatcher[strParams[1]] = &MatchRegexp{\n\t\t\t\tPattern: strParams[2],\n\t\t\t\tName:    name,\n\t\t\t}\n\t\t\terr = matcher.Provision(ctx)\n\t\t\treturn matcher, err\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tenvOpts := append(unnamedPattern.CompileOptions(), namedPattern.CompileOptions()...)\n\tprgOpts := append(unnamedPattern.ProgramOptions(), namedPattern.ProgramOptions()...)\n\treturn NewMatcherCELLibrary(envOpts, prgOpts), nil\n}\n\n// Validate validates m's regular expressions.\nfunc (m MatchVarsRE) Validate() error {\n\tfor _, rm := range m {\n\t\terr := rm.Validate()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// GetVar gets a value out of the context's variable table by key.\n// If the key does not exist, the return value will be nil.\nfunc GetVar(ctx context.Context, key string) any {\n\tvarMap, ok := ctx.Value(VarsCtxKey).(map[string]any)\n\tif !ok {\n\t\treturn nil\n\t}\n\treturn varMap[key]\n}\n\n// SetVar sets a value in the context's variable table with\n// the given key. 
It overwrites any previous value with the\n// same key.\n//\n// If the value is nil (note: non-nil interface with nil\n// underlying value does not count) and the key exists in\n// the table, the key+value will be deleted from the table.\nfunc SetVar(ctx context.Context, key string, value any) {\n\tvarMap, ok := ctx.Value(VarsCtxKey).(map[string]any)\n\tif !ok {\n\t\treturn\n\t}\n\tif value == nil {\n\t\tif _, ok := varMap[key]; ok {\n\t\t\tdelete(varMap, key)\n\t\t\treturn\n\t\t}\n\t}\n\tvarMap[key] = value\n}\n\n// Interface guards\nvar (\n\t_ MiddlewareHandler       = (*VarsMiddleware)(nil)\n\t_ caddyfile.Unmarshaler   = (*VarsMiddleware)(nil)\n\t_ RequestMatcherWithError = (*VarsMatcher)(nil)\n\t_ caddyfile.Unmarshaler   = (*VarsMatcher)(nil)\n\t_ RequestMatcherWithError = (*MatchVarsRE)(nil)\n\t_ caddyfile.Unmarshaler   = (*MatchVarsRE)(nil)\n)\n"
  },
  {
    "path": "modules/caddypki/acmeserver/acmeserver.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage acmeserver\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\tweakrand \"math/rand/v2\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/go-chi/chi/v5\"\n\t\"github.com/smallstep/certificates/acme\"\n\t\"github.com/smallstep/certificates/acme/api\"\n\tacmeNoSQL \"github.com/smallstep/certificates/acme/db/nosql\"\n\t\"github.com/smallstep/certificates/authority\"\n\t\"github.com/smallstep/certificates/authority/provisioner\"\n\t\"github.com/smallstep/certificates/db\"\n\t\"github.com/smallstep/nosql\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Handler{})\n}\n\n// Handler is an ACME server handler.\ntype Handler struct {\n\t// The ID of the CA to use for signing. This refers to\n\t// the ID given to the CA in the `pki` app. If omitted,\n\t// the default ID is \"local\".\n\tCA string `json:\"ca,omitempty\"`\n\n\t// The lifetime for issued certificates\n\tLifetime caddy.Duration `json:\"lifetime,omitempty\"`\n\n\t// The hostname or IP address by which ACME clients\n\t// will access the server. 
This is used to populate\n\t// the ACME directory endpoint. If not set, the Host\n\t// header of the request will be used.\n\t// COMPATIBILITY NOTE / TODO: This property may go away in the\n\t// future. Do not rely on this property long-term; check release notes.\n\tHost string `json:\"host,omitempty\"`\n\n\t// The path prefix under which to serve all ACME\n\t// endpoints. All other requests will not be served\n\t// by this handler and will be passed through to\n\t// the next one. Default: \"/acme/\".\n\t// COMPATIBILITY NOTE / TODO: This property may go away in the\n\t// future, as it is currently only required due to\n\t// limitations in the underlying library. Do not rely\n\t// on this property long-term; check release notes.\n\tPathPrefix string `json:\"path_prefix,omitempty\"`\n\n\t// If true, the CA's root will be the issuer instead of\n\t// the intermediate. This is NOT recommended and should\n\t// only be used when devices/clients do not properly\n\t// validate certificate chains. EXPERIMENTAL: Might be\n\t// changed or removed in the future.\n\tSignWithRoot bool `json:\"sign_with_root,omitempty\"`\n\n\t// The addresses of DNS resolvers to use when looking up\n\t// the TXT records for solving DNS challenges.\n\t// It accepts [network addresses](/docs/conventions#network-addresses)\n\t// with port range of only 1. If the host is an IP address,\n\t// it will be dialed directly to resolve the upstream server.\n\t// If the host is not an IP address, the addresses are resolved\n\t// using the [name resolution convention](https://golang.org/pkg/net/#hdr-Name_Resolution)\n\t// of the Go standard library. If the array contains more\n\t// than 1 resolver address, one is chosen at random.\n\tResolvers []string `json:\"resolvers,omitempty\"`\n\n\t// Specify the set of enabled ACME challenges. An empty or absent value\n\t// means all challenges are enabled. 
Accepted values are:\n\t// \"http-01\", \"dns-01\", \"tls-alpn-01\"\n\tChallenges ACMEChallenges `json:\"challenges,omitempty\" `\n\n\t// The policy to use for issuing certificates\n\tPolicy *Policy `json:\"policy,omitempty\"`\n\n\tlogger    *zap.Logger\n\tresolvers []caddy.NetworkAddress\n\tctx       caddy.Context\n\n\tacmeDB        acme.DB\n\tacmeAuth      *authority.Authority\n\tacmeClient    acme.Client\n\tacmeLinker    acme.Linker\n\tacmeEndpoints http.Handler\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Handler) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.acme_server\",\n\t\tNew: func() caddy.Module { return new(Handler) },\n\t}\n}\n\n// Provision sets up the ACME server handler.\nfunc (ash *Handler) Provision(ctx caddy.Context) error {\n\tash.ctx = ctx\n\tash.logger = ctx.Logger()\n\n\t// set some defaults\n\tif ash.CA == \"\" {\n\t\tash.CA = caddypki.DefaultCAID\n\t}\n\tif ash.PathPrefix == \"\" {\n\t\tash.PathPrefix = defaultPathPrefix\n\t}\n\tif ash.Lifetime == 0 {\n\t\tash.Lifetime = caddy.Duration(12 * time.Hour)\n\t}\n\tif len(ash.Challenges) > 0 {\n\t\tif err := ash.Challenges.validate(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tash.warnIfPolicyAllowsAll()\n\n\t// get a reference to the configured CA\n\tappModule, err := ctx.App(\"pki\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tpkiApp := appModule.(*caddypki.PKI)\n\tca, err := pkiApp.GetCA(ctx, ash.CA)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// make sure leaf cert lifetime is less than the intermediate cert lifetime. 
this check only\n\t// applies for caddy-managed intermediate certificates\n\tif ca.Intermediate == nil && ash.Lifetime >= ca.IntermediateLifetime {\n\t\treturn fmt.Errorf(\"certificate lifetime (%s) should be less than intermediate certificate lifetime (%s)\", time.Duration(ash.Lifetime), time.Duration(ca.IntermediateLifetime))\n\t}\n\n\tdatabase, err := ash.openDatabase()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tauthorityConfig := caddypki.AuthorityConfig{\n\t\tSignWithRoot: ash.SignWithRoot,\n\t\tAuthConfig: &authority.AuthConfig{\n\t\t\tProvisioners: provisioner.List{\n\t\t\t\t&provisioner.ACME{\n\t\t\t\t\tName:       ash.CA,\n\t\t\t\t\tChallenges: ash.Challenges.toSmallstepType(),\n\t\t\t\t\tOptions: &provisioner.Options{\n\t\t\t\t\t\tX509: ash.Policy.normalizeRules(),\n\t\t\t\t\t},\n\t\t\t\t\tType: provisioner.TypeACME.String(),\n\t\t\t\t\tClaims: &provisioner.Claims{\n\t\t\t\t\t\tMinTLSDur:     &provisioner.Duration{Duration: 5 * time.Minute},\n\t\t\t\t\t\tMaxTLSDur:     &provisioner.Duration{Duration: 24 * time.Hour * 365},\n\t\t\t\t\t\tDefaultTLSDur: &provisioner.Duration{Duration: time.Duration(ash.Lifetime)},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tDB: database,\n\t}\n\n\tash.acmeAuth, err = ca.NewAuthority(authorityConfig)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tash.acmeDB, err = acmeNoSQL.New(ash.acmeAuth.GetDatabase().(nosql.DB))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"configuring ACME DB: %v\", err)\n\t}\n\n\tash.acmeClient, err = ash.makeClient()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tash.acmeLinker = acme.NewLinker(\n\t\tash.Host,\n\t\tstrings.Trim(ash.PathPrefix, \"/\"),\n\t)\n\n\t// extract its http.Handler so we can use it directly\n\tr := chi.NewRouter()\n\tr.Route(ash.PathPrefix, func(r chi.Router) {\n\t\tapi.Route(r)\n\t})\n\tash.acmeEndpoints = r\n\n\treturn nil\n}\n\nfunc (ash *Handler) warnIfPolicyAllowsAll() {\n\tallow := ash.Policy.normalizeAllowRules()\n\tdeny := ash.Policy.normalizeDenyRules()\n\tif allow != 
nil || deny != nil {\n\t\treturn\n\t}\n\n\tallowWildcardNames := ash.Policy != nil && ash.Policy.AllowWildcardNames\n\tash.logger.Warn(\n\t\t\"acme_server policy has no allow/deny rules; order identifiers are unrestricted (allow-all)\",\n\t\tzap.String(\"ca\", ash.CA),\n\t\tzap.Bool(\"allow_wildcard_names\", allowWildcardNames),\n\t)\n}\n\nfunc (ash Handler) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tif strings.HasPrefix(r.URL.Path, ash.PathPrefix) {\n\t\tacmeCtx := acme.NewContext(\n\t\t\tr.Context(),\n\t\t\tash.acmeDB,\n\t\t\tash.acmeClient,\n\t\t\tash.acmeLinker,\n\t\t\tnil,\n\t\t)\n\t\tacmeCtx = authority.NewContext(acmeCtx, ash.acmeAuth)\n\t\tr = r.WithContext(acmeCtx)\n\n\t\tash.acmeEndpoints.ServeHTTP(w, r)\n\t\treturn nil\n\t}\n\treturn next.ServeHTTP(w, r)\n}\n\nfunc (ash Handler) getDatabaseKey() string {\n\tkey := ash.CA\n\tkey = strings.ToLower(key)\n\tkey = strings.TrimSpace(key)\n\treturn keyCleaner.ReplaceAllLiteralString(key, \"\")\n}\n\n// Cleanup implements caddy.CleanerUpper and closes any idle databases.\nfunc (ash Handler) Cleanup() error {\n\tkey := ash.getDatabaseKey()\n\tdeleted, err := databasePool.Delete(key)\n\tif deleted {\n\t\tif c := ash.logger.Check(zapcore.DebugLevel, \"unloading unused CA database\"); c != nil {\n\t\t\tc.Write(zap.String(\"db_key\", key))\n\t\t}\n\t}\n\tif err != nil {\n\t\tif c := ash.logger.Check(zapcore.ErrorLevel, \"closing CA database\"); c != nil {\n\t\t\tc.Write(zap.String(\"db_key\", key), zap.Error(err))\n\t\t}\n\t}\n\treturn err\n}\n\nfunc (ash Handler) openDatabase() (*db.AuthDB, error) {\n\tkey := ash.getDatabaseKey()\n\tdatabase, loaded, err := databasePool.LoadOrNew(key, func() (caddy.Destructor, error) {\n\t\tdbFolder := filepath.Join(caddy.AppDataDir(), \"acme_server\", key)\n\t\tdbPath := filepath.Join(dbFolder, \"db\")\n\n\t\terr := os.MkdirAll(dbFolder, 0o755)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"making folder for CA database: %v\", 
err)\n\t\t}\n\n\t\tdbConfig := &db.Config{\n\t\t\tType:       \"bbolt\",\n\t\t\tDataSource: dbPath,\n\t\t}\n\t\tdatabase, err := db.New(dbConfig)\n\t\treturn databaseCloser{&database}, err\n\t})\n\n\tif loaded {\n\t\tif c := ash.logger.Check(zapcore.DebugLevel, \"loaded preexisting CA database\"); c != nil {\n\t\t\tc.Write(zap.String(\"db_key\", key))\n\t\t}\n\t}\n\n\treturn database.(databaseCloser).DB, err\n}\n\n// makeClient creates an ACME client which will use a custom\n// resolver instead of net.DefaultResolver.\nfunc (ash Handler) makeClient() (acme.Client, error) {\n\t// If no local resolvers are configured, check for global resolvers from TLS app\n\tresolversToUse := ash.Resolvers\n\tif len(resolversToUse) == 0 {\n\t\ttlsAppIface, err := ash.ctx.App(\"tls\")\n\t\tif err == nil {\n\t\t\ttlsApp := tlsAppIface.(*caddytls.TLS)\n\t\t\tif len(tlsApp.Resolvers) > 0 {\n\t\t\t\tresolversToUse = tlsApp.Resolvers\n\t\t\t}\n\t\t}\n\t}\n\n\tfor _, v := range resolversToUse {\n\t\taddr, err := caddy.ParseNetworkAddressWithDefaults(v, \"udp\", 53)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif addr.PortRangeSize() != 1 {\n\t\t\treturn nil, fmt.Errorf(\"resolver address must have exactly one address; cannot call %v\", addr)\n\t\t}\n\t\tash.resolvers = append(ash.resolvers, addr)\n\t}\n\n\tvar resolver *net.Resolver\n\tif len(ash.resolvers) != 0 {\n\t\tdialer := &net.Dialer{\n\t\t\tTimeout: 2 * time.Second,\n\t\t}\n\t\tresolver = &net.Resolver{\n\t\t\tPreferGo: true,\n\t\t\tDial: func(ctx context.Context, network, address string) (net.Conn, error) {\n\t\t\t\t//nolint:gosec\n\t\t\t\taddr := ash.resolvers[weakrand.IntN(len(ash.resolvers))]\n\t\t\t\treturn dialer.DialContext(ctx, addr.Network, addr.JoinHostPort(0))\n\t\t\t},\n\t\t}\n\t} else {\n\t\tresolver = net.DefaultResolver\n\t}\n\n\treturn resolverClient{\n\t\tClient:   acme.NewClient(),\n\t\tresolver: resolver,\n\t\tctx:      ash.ctx,\n\t}, nil\n}\n\ntype resolverClient struct {\n\tacme.Client\n\n\tresolver 
*net.Resolver\n\tctx      context.Context\n}\n\nfunc (c resolverClient) LookupTxt(name string) ([]string, error) {\n\treturn c.resolver.LookupTXT(c.ctx, name)\n}\n\nconst defaultPathPrefix = \"/acme/\"\n\nvar (\n\tkeyCleaner   = regexp.MustCompile(`[^\\w.-_]`)\n\tdatabasePool = caddy.NewUsagePool()\n)\n\ntype databaseCloser struct {\n\tDB *db.AuthDB\n}\n\nfunc (closer databaseCloser) Destruct() error {\n\treturn (*closer.DB).Shutdown()\n}\n\n// Interface guards\nvar (\n\t_ caddyhttp.MiddlewareHandler = (*Handler)(nil)\n\t_ caddy.Provisioner           = (*Handler)(nil)\n)\n"
  },
  {
    "path": "modules/caddypki/acmeserver/acmeserver_test.go",
    "content": "package acmeserver\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zaptest/observer\"\n)\n\nfunc TestHandler_warnIfPolicyAllowsAll(t *testing.T) {\n\ttests := []struct {\n\t\tname              string\n\t\tpolicy            *Policy\n\t\twantWarns         int\n\t\twantAllowWildcard bool\n\t}{\n\t\t{\n\t\t\tname:              \"warns when policy is nil\",\n\t\t\tpolicy:            nil,\n\t\t\twantWarns:         1,\n\t\t\twantAllowWildcard: false,\n\t\t},\n\t\t{\n\t\t\tname:              \"warns when allow/deny rules are empty\",\n\t\t\tpolicy:            &Policy{},\n\t\t\twantWarns:         1,\n\t\t\twantAllowWildcard: false,\n\t\t},\n\t\t{\n\t\t\tname: \"warns when only allow_wildcard_names is true\",\n\t\t\tpolicy: &Policy{\n\t\t\t\tAllowWildcardNames: true,\n\t\t\t},\n\t\t\twantWarns:         1,\n\t\t\twantAllowWildcard: true,\n\t\t},\n\t\t{\n\t\t\tname: \"does not warn when allow rules are configured\",\n\t\t\tpolicy: &Policy{\n\t\t\t\tAllow: &RuleSet{\n\t\t\t\t\tDomains: []string{\"example.com\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantWarns:         0,\n\t\t\twantAllowWildcard: false,\n\t\t},\n\t\t{\n\t\t\tname: \"does not warn when deny rules are configured\",\n\t\t\tpolicy: &Policy{\n\t\t\t\tDeny: &RuleSet{\n\t\t\t\t\tDomains: []string{\"bad.example.com\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantWarns:         0,\n\t\t\twantAllowWildcard: false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tcore, logs := observer.New(zap.WarnLevel)\n\t\t\tash := &Handler{\n\t\t\t\tCA:     \"local\",\n\t\t\t\tPolicy: tt.policy,\n\t\t\t\tlogger: zap.New(core),\n\t\t\t}\n\n\t\t\tash.warnIfPolicyAllowsAll()\n\t\t\tif logs.Len() != tt.wantWarns {\n\t\t\t\tt.Fatalf(\"expected %d warning logs, got %d\", tt.wantWarns, logs.Len())\n\t\t\t}\n\n\t\t\tif tt.wantWarns == 0 {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tentry := logs.All()[0]\n\t\t\tif entry.Level != zap.WarnLevel {\n\t\t\t\tt.Fatalf(\"expected 
warn level, got %v\", entry.Level)\n\t\t\t}\n\t\t\tif !strings.Contains(entry.Message, \"policy has no allow/deny rules\") {\n\t\t\t\tt.Fatalf(\"unexpected log message: %q\", entry.Message)\n\t\t\t}\n\t\t\tctx := entry.ContextMap()\n\t\t\tif ctx[\"ca\"] != \"local\" {\n\t\t\t\tt.Fatalf(\"expected ca=local, got %v\", ctx[\"ca\"])\n\t\t\t}\n\t\t\tif ctx[\"allow_wildcard_names\"] != tt.wantAllowWildcard {\n\t\t\t\tt.Fatalf(\"expected allow_wildcard_names=%v, got %v\", tt.wantAllowWildcard, ctx[\"allow_wildcard_names\"])\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddypki/acmeserver/caddyfile.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage acmeserver\n\nimport (\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n)\n\nfunc init() {\n\thttpcaddyfile.RegisterDirective(\"acme_server\", parseACMEServer)\n}\n\n// parseACMEServer sets up an ACME server handler from Caddyfile tokens.\n//\n//\tacme_server [<matcher>] {\n//\t\tca        <id>\n//\t\tlifetime  <duration>\n//\t\tresolvers <addresses...>\n//\t\tchallenges <challenges...>\n//\t\tallow_wildcard_names\n//\t\tallow {\n//\t\t\tdomains <domains...>\n//\t\t\tip_ranges <addresses...>\n//\t\t}\n//\t\tdeny {\n//\t\t\tdomains <domains...>\n//\t\t\tip_ranges <addresses...>\n//\t\t}\n//\t\tsign_with_root\n//\t}\nfunc parseACMEServer(h httpcaddyfile.Helper) ([]httpcaddyfile.ConfigValue, error) {\n\th.Next() // consume directive name\n\tmatcherSet, err := h.ExtractMatcherSet()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\th.Next() // consume the directive name again (matcher parsing resets)\n\n\t// no inline args allowed\n\tif h.NextArg() {\n\t\treturn nil, h.ArgErr()\n\t}\n\n\tvar acmeServer Handler\n\tvar ca *caddypki.CA\n\n\tfor h.NextBlock(0) {\n\t\tswitch h.Val() {\n\t\tcase \"ca\":\n\t\t\tif !h.AllArgs(&acmeServer.CA) {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tif ca == nil {\n\t\t\t\tca = 
new(caddypki.CA)\n\t\t\t}\n\t\t\tca.ID = acmeServer.CA\n\t\tcase \"lifetime\":\n\t\t\tif !h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(h.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tacmeServer.Lifetime = caddy.Duration(dur)\n\t\tcase \"resolvers\":\n\t\t\tacmeServer.Resolvers = h.RemainingArgs()\n\t\t\tif len(acmeServer.Resolvers) == 0 {\n\t\t\t\treturn nil, h.Errf(\"must specify at least one resolver address\")\n\t\t\t}\n\t\tcase \"challenges\":\n\t\t\tacmeServer.Challenges = append(acmeServer.Challenges, stringToChallenges(h.RemainingArgs())...)\n\t\tcase \"allow_wildcard_names\":\n\t\t\tif acmeServer.Policy == nil {\n\t\t\t\tacmeServer.Policy = &Policy{}\n\t\t\t}\n\t\t\tacmeServer.Policy.AllowWildcardNames = true\n\t\tcase \"allow\":\n\t\t\tr := &RuleSet{}\n\t\t\tfor nesting := h.Nesting(); h.NextBlock(nesting); {\n\t\t\t\tif h.CountRemainingArgs() == 0 {\n\t\t\t\t\treturn nil, h.ArgErr() // TODO:\n\t\t\t\t}\n\t\t\t\tswitch h.Val() {\n\t\t\t\tcase \"domains\":\n\t\t\t\t\tr.Domains = append(r.Domains, h.RemainingArgs()...)\n\t\t\t\tcase \"ip_ranges\":\n\t\t\t\t\tr.IPRanges = append(r.IPRanges, h.RemainingArgs()...)\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, h.Errf(\"unrecognized 'allow' subdirective: %s\", h.Val())\n\t\t\t\t}\n\t\t\t}\n\t\t\tif acmeServer.Policy == nil {\n\t\t\t\tacmeServer.Policy = &Policy{}\n\t\t\t}\n\t\t\tacmeServer.Policy.Allow = r\n\t\tcase \"deny\":\n\t\t\tr := &RuleSet{}\n\t\t\tfor nesting := h.Nesting(); h.NextBlock(nesting); {\n\t\t\t\tif h.CountRemainingArgs() == 0 {\n\t\t\t\t\treturn nil, h.ArgErr() // TODO:\n\t\t\t\t}\n\t\t\t\tswitch h.Val() {\n\t\t\t\tcase \"domains\":\n\t\t\t\t\tr.Domains = append(r.Domains, h.RemainingArgs()...)\n\t\t\t\tcase \"ip_ranges\":\n\t\t\t\t\tr.IPRanges = append(r.IPRanges, h.RemainingArgs()...)\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, h.Errf(\"unrecognized 'deny' subdirective: %s\", h.Val())\n\t\t\t\t}\n\t\t\t}\n\t\t\tif 
acmeServer.Policy == nil {\n\t\t\t\tacmeServer.Policy = &Policy{}\n\t\t\t}\n\t\t\tacmeServer.Policy.Deny = r\n\t\tcase \"sign_with_root\":\n\t\t\tif h.NextArg() {\n\t\t\t\treturn nil, h.ArgErr()\n\t\t\t}\n\t\t\tacmeServer.SignWithRoot = true\n\t\tdefault:\n\t\t\treturn nil, h.Errf(\"unrecognized ACME server directive: %s\", h.Val())\n\t\t}\n\t}\n\n\tconfigVals := h.NewRoute(matcherSet, acmeServer)\n\n\tif ca == nil {\n\t\treturn configVals, nil\n\t}\n\n\treturn append(configVals, httpcaddyfile.ConfigValue{\n\t\tClass: \"pki.ca\",\n\t\tValue: ca,\n\t}), nil\n}\n"
  },
  {
    "path": "modules/caddypki/acmeserver/challenges.go",
    "content": "package acmeserver\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/smallstep/certificates/authority/provisioner\"\n)\n\n// ACMEChallenge is an opaque string that represents supported ACME challenges.\ntype ACMEChallenge string\n\nconst (\n\tHTTP_01     ACMEChallenge = \"http-01\"\n\tDNS_01      ACMEChallenge = \"dns-01\"\n\tTLS_ALPN_01 ACMEChallenge = \"tls-alpn-01\"\n)\n\n// validate checks if the given challenge is supported.\nfunc (c ACMEChallenge) validate() error {\n\tswitch c {\n\tcase HTTP_01, DNS_01, TLS_ALPN_01:\n\t\treturn nil\n\tdefault:\n\t\treturn fmt.Errorf(\"acme challenge %q is not supported\", c)\n\t}\n}\n\n// The unmarshaller first marshals the value into a string. Then it\n// trims any space around it and lowercase it for normaliztion. The\n// method does not and should not validate the value within accepted enums.\nfunc (c *ACMEChallenge) UnmarshalJSON(b []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(b, &s); err != nil {\n\t\treturn err\n\t}\n\t*c = ACMEChallenge(strings.ToLower(strings.TrimSpace(s)))\n\treturn nil\n}\n\n// String returns a string representation of the challenge.\nfunc (c ACMEChallenge) String() string {\n\treturn strings.ToLower(string(c))\n}\n\n// ACMEChallenges is a list of ACME challenges.\ntype ACMEChallenges []ACMEChallenge\n\n// validate checks if the given challenges are supported.\nfunc (c ACMEChallenges) validate() error {\n\tfor _, ch := range c {\n\t\tif err := ch.validate(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (c ACMEChallenges) toSmallstepType() []provisioner.ACMEChallenge {\n\tif len(c) == 0 {\n\t\treturn nil\n\t}\n\tac := make([]provisioner.ACMEChallenge, len(c))\n\tfor i, ch := range c {\n\t\tac[i] = provisioner.ACMEChallenge(ch)\n\t}\n\treturn ac\n}\n\nfunc stringToChallenges(chs []string) ACMEChallenges {\n\tchallenges := make(ACMEChallenges, len(chs))\n\tfor i, ch := range chs {\n\t\tchallenges[i] = 
ACMEChallenge(ch)\n\t}\n\treturn challenges\n}\n"
  },
  {
    "path": "modules/caddypki/acmeserver/policy.go",
    "content": "package acmeserver\n\nimport (\n\t\"github.com/smallstep/certificates/authority/policy\"\n\t\"github.com/smallstep/certificates/authority/provisioner\"\n)\n\n// Policy defines the criteria for the ACME server\n// of when to issue a certificate. Refer to the\n// [Certificate Issuance Policy](https://smallstep.com/docs/step-ca/policies/)\n// on Smallstep website for the evaluation criteria.\ntype Policy struct {\n\t// If a rule set is configured to allow a certain type of name,\n\t// all other types of names are automatically denied.\n\tAllow *RuleSet `json:\"allow,omitempty\"`\n\n\t// If a rule set is configured to deny a certain type of name,\n\t// all other types of names are still allowed.\n\tDeny *RuleSet `json:\"deny,omitempty\"`\n\n\t// If set to true, the ACME server will allow issuing wildcard certificates.\n\tAllowWildcardNames bool `json:\"allow_wildcard_names,omitempty\"`\n}\n\n// RuleSet is the specific set of SAN criteria for a certificate\n// to be issued or denied.\ntype RuleSet struct {\n\t// Domains is a list of DNS domains that are allowed to be issued.\n\t// It can be in the form of FQDN for specific domain name, or\n\t// a wildcard domain name format, e.g. *.example.com, to allow\n\t// sub-domains of a domain.\n\tDomains []string `json:\"domains,omitempty\"`\n\n\t// IP ranges in the form of CIDR notation or specific IP addresses\n\t// to be approved or denied for certificates. Non-CIDR IP addresses\n\t// are matched exactly.\n\tIPRanges []string `json:\"ip_ranges,omitempty\"`\n}\n\n// normalizeAllowRules returns `nil` if policy is nil, the `Allow` rule is `nil`,\n// or all rules within the `Allow` rule are empty. 
Otherwise, it returns the X509NameOptions\n// with the content of the `Allow` rule.\nfunc (p *Policy) normalizeAllowRules() *policy.X509NameOptions {\n\tif (p == nil) || (p.Allow == nil) || (len(p.Allow.Domains) == 0 && len(p.Allow.IPRanges) == 0) {\n\t\treturn nil\n\t}\n\treturn &policy.X509NameOptions{\n\t\tDNSDomains: p.Allow.Domains,\n\t\tIPRanges:   p.Allow.IPRanges,\n\t}\n}\n\n// normalizeDenyRules returns `nil` if policy is nil, the `Deny` rule is `nil`,\n// or all rules within the `Deny` rule are empty. Otherwise, it returns the X509NameOptions\n// with the content of the `Deny` rule.\nfunc (p *Policy) normalizeDenyRules() *policy.X509NameOptions {\n\tif (p == nil) || (p.Deny == nil) || (len(p.Deny.Domains) == 0 && len(p.Deny.IPRanges) == 0) {\n\t\treturn nil\n\t}\n\treturn &policy.X509NameOptions{\n\t\tDNSDomains: p.Deny.Domains,\n\t\tIPRanges:   p.Deny.IPRanges,\n\t}\n}\n\n// normalizeRules returns `nil` if the policy is nil, or if the `Allow` and `Deny` rules\n// normalize to `nil` and wildcard names are not allowed. Otherwise, it returns the\n// equivalent provisioner.X509Options.\nfunc (p *Policy) normalizeRules() *provisioner.X509Options {\n\tif p == nil {\n\t\treturn nil\n\t}\n\n\tallow := p.normalizeAllowRules()\n\tdeny := p.normalizeDenyRules()\n\tif allow == nil && deny == nil && !p.AllowWildcardNames {\n\t\treturn nil\n\t}\n\n\treturn &provisioner.X509Options{\n\t\tAllowedNames:       allow,\n\t\tDeniedNames:        deny,\n\t\tAllowWildcardNames: p.AllowWildcardNames,\n\t}\n}\n"
  },
  {
    "path": "modules/caddypki/acmeserver/policy_test.go",
    "content": "package acmeserver\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/smallstep/certificates/authority/policy\"\n\t\"github.com/smallstep/certificates/authority/provisioner\"\n)\n\nfunc TestPolicyNormalizeAllowRules(t *testing.T) {\n\ttype fields struct {\n\t\tAllow              *RuleSet\n\t\tDeny               *RuleSet\n\t\tAllowWildcardNames bool\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields fields\n\t\twant   *policy.X509NameOptions\n\t}{\n\t\t{\n\t\t\tname:   \"providing no rules results in 'nil'\",\n\t\t\tfields: fields{},\n\t\t\twant:   nil,\n\t\t},\n\t\t{\n\t\t\tname: \"providing 'nil' Allow rules results in 'nil', regardless of Deny rules\",\n\t\t\tfields: fields{\n\t\t\t\tAllow:              nil,\n\t\t\t\tDeny:               &RuleSet{},\n\t\t\t\tAllowWildcardNames: true,\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"providing empty Allow rules results in 'nil', regardless of Deny rules\",\n\t\t\tfields: fields{\n\t\t\t\tAllow: &RuleSet{\n\t\t\t\t\tDomains:  []string{},\n\t\t\t\t\tIPRanges: []string{},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"rules configured in Allow are returned in X509NameOptions\",\n\t\t\tfields: fields{\n\t\t\t\tAllow: &RuleSet{\n\t\t\t\t\tDomains:  []string{\"example.com\"},\n\t\t\t\t\tIPRanges: []string{\"127.0.0.1/32\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: &policy.X509NameOptions{\n\t\t\t\tDNSDomains: []string{\"example.com\"},\n\t\t\t\tIPRanges:   []string{\"127.0.0.1/32\"},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp := &Policy{\n\t\t\t\tAllow:              tt.fields.Allow,\n\t\t\t\tDeny:               tt.fields.Deny,\n\t\t\t\tAllowWildcardNames: tt.fields.AllowWildcardNames,\n\t\t\t}\n\t\t\tif got := p.normalizeAllowRules(); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"Policy.normalizeAllowRules() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc 
TestPolicy_normalizeDenyRules(t *testing.T) {\n\ttype fields struct {\n\t\tAllow              *RuleSet\n\t\tDeny               *RuleSet\n\t\tAllowWildcardNames bool\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields fields\n\t\twant   *policy.X509NameOptions\n\t}{\n\t\t{\n\t\t\tname:   \"providing no rules results in 'nil'\",\n\t\t\tfields: fields{},\n\t\t\twant:   nil,\n\t\t},\n\t\t{\n\t\t\tname: \"providing 'nil' Deny rules results in 'nil', regardless of Allow rules\",\n\t\t\tfields: fields{\n\t\t\t\tDeny:               nil,\n\t\t\t\tAllow:              &RuleSet{},\n\t\t\t\tAllowWildcardNames: true,\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"providing empty Deny rules results in 'nil', regardless of Allow rules\",\n\t\t\tfields: fields{\n\t\t\t\tDeny: &RuleSet{\n\t\t\t\t\tDomains:  []string{},\n\t\t\t\t\tIPRanges: []string{},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"rules configured in Deny are returned in X509NameOptions\",\n\t\t\tfields: fields{\n\t\t\t\tDeny: &RuleSet{\n\t\t\t\t\tDomains:  []string{\"example.com\"},\n\t\t\t\t\tIPRanges: []string{\"127.0.0.1/32\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: &policy.X509NameOptions{\n\t\t\t\tDNSDomains: []string{\"example.com\"},\n\t\t\t\tIPRanges:   []string{\"127.0.0.1/32\"},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp := &Policy{\n\t\t\t\tAllow:              tt.fields.Allow,\n\t\t\t\tDeny:               tt.fields.Deny,\n\t\t\t\tAllowWildcardNames: tt.fields.AllowWildcardNames,\n\t\t\t}\n\t\t\tif got := p.normalizeDenyRules(); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"Policy.normalizeDenyRules() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPolicy_normalizeRules(t *testing.T) {\n\ttests := []struct {\n\t\tname   string\n\t\tpolicy *Policy\n\t\twant   *provisioner.X509Options\n\t}{\n\t\t{\n\t\t\tname:   \"'nil' policy results in 'nil' options\",\n\t\t\tpolicy: 
nil,\n\t\t\twant:   nil,\n\t\t},\n\t\t{\n\t\t\tname: \"'nil' Allow/Deny rules and disallowing wildcard names result in 'nil' X509Options\",\n\t\t\tpolicy: &Policy{\n\t\t\t\tAllow:              nil,\n\t\t\t\tDeny:               nil,\n\t\t\t\tAllowWildcardNames: false,\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"'nil' Allow/Deny rules and allowing wildcard names result in 'nil' Allow/Deny rules in X509Options but allowing wildcard names in X509Options\",\n\t\t\tpolicy: &Policy{\n\t\t\t\tAllow:              nil,\n\t\t\t\tDeny:               nil,\n\t\t\t\tAllowWildcardNames: true,\n\t\t\t},\n\t\t\twant: &provisioner.X509Options{\n\t\t\t\tAllowWildcardNames: true,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := tt.policy.normalizeRules(); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"Policy.normalizeRules() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddypki/adminapi.go",
    "content": "// Copyright 2020 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(adminAPI{})\n}\n\n// adminAPI is a module that serves PKI endpoints to retrieve\n// information about the CAs being managed by Caddy.\ntype adminAPI struct {\n\tctx    caddy.Context\n\tlog    *zap.Logger\n\tpkiApp *PKI\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (adminAPI) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"admin.api.pki\",\n\t\tNew: func() caddy.Module { return new(adminAPI) },\n\t}\n}\n\n// Provision sets up the adminAPI module.\nfunc (a *adminAPI) Provision(ctx caddy.Context) error {\n\ta.ctx = ctx\n\ta.log = ctx.Logger(a) // TODO: passing in 'a' is a hack until the admin API is officially extensible (see #5032)\n\n\t// Avoid initializing PKI if it wasn't configured.\n\t// We intentionally ignore the error since it's not\n\t// fatal if the PKI app is not explicitly configured.\n\tpkiApp, err := ctx.AppIfConfigured(\"pki\")\n\tif err == nil {\n\t\ta.pkiApp = pkiApp.(*PKI)\n\t}\n\n\treturn nil\n}\n\n// Routes returns the admin routes for the PKI app.\nfunc (a *adminAPI) Routes() []caddy.AdminRoute {\n\treturn []caddy.AdminRoute{\n\t\t{\n\t\t\tPattern: 
adminPKIEndpointBase,\n\t\t\tHandler: caddy.AdminHandlerFunc(a.handleAPIEndpoints),\n\t\t},\n\t}\n}\n\n// handleAPIEndpoints routes API requests within adminPKIEndpointBase.\nfunc (a *adminAPI) handleAPIEndpoints(w http.ResponseWriter, r *http.Request) error {\n\turi := strings.TrimPrefix(r.URL.Path, \"/pki/\")\n\tparts := strings.Split(uri, \"/\")\n\tswitch {\n\tcase len(parts) == 2 && parts[0] == \"ca\" && parts[1] != \"\":\n\t\treturn a.handleCAInfo(w, r)\n\tcase len(parts) == 3 && parts[0] == \"ca\" && parts[1] != \"\" && parts[2] == \"certificates\":\n\t\treturn a.handleCACerts(w, r)\n\t}\n\treturn caddy.APIError{\n\t\tHTTPStatus: http.StatusNotFound,\n\t\tErr:        fmt.Errorf(\"resource not found: %v\", r.URL.Path),\n\t}\n}\n\n// handleCAInfo returns information about a particular\n// CA by its ID. If the CA ID is the default, then the CA will be\n// provisioned if it has not already been. Other CA IDs will return an\n// error if they have not been previously provisioned.\nfunc (a *adminAPI) handleCAInfo(w http.ResponseWriter, r *http.Request) error {\n\tif r.Method != http.MethodGet {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method not allowed: %v\", r.Method),\n\t\t}\n\t}\n\n\tca, err := a.getCAFromAPIRequestPath(r)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\trootCert, interCert, err := rootAndIntermediatePEM(ca)\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\tErr:        fmt.Errorf(\"failed to get root and intermediate cert for CA %s: %v\", ca.ID, err),\n\t\t}\n\t}\n\n\trepl := ca.newReplacer()\n\n\tresponse := caInfo{\n\t\tID:               ca.ID,\n\t\tName:             ca.Name,\n\t\tRootCN:           repl.ReplaceAll(ca.RootCommonName, \"\"),\n\t\tIntermediateCN:   repl.ReplaceAll(ca.IntermediateCommonName, \"\"),\n\t\tRootCert:         string(rootCert),\n\t\tIntermediateCert: string(interCert),\n\t}\n\n\tencoded, err := 
json.Marshal(response)\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\tErr:        err,\n\t\t}\n\t}\n\n\tw.Header().Set(\"Content-Type\", \"application/json\")\n\t_, _ = w.Write(encoded)\n\n\treturn nil\n}\n\n// handleCACerts returns the certificate chain for a particular\n// CA by its ID. If the CA ID is the default, then the CA will be\n// provisioned if it has not already been. Other CA IDs will return an\n// error if they have not been previously provisioned.\nfunc (a *adminAPI) handleCACerts(w http.ResponseWriter, r *http.Request) error {\n\tif r.Method != http.MethodGet {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusMethodNotAllowed,\n\t\t\tErr:        fmt.Errorf(\"method not allowed: %v\", r.Method),\n\t\t}\n\t}\n\n\tca, err := a.getCAFromAPIRequestPath(r)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\trootCert, interCert, err := rootAndIntermediatePEM(ca)\n\tif err != nil {\n\t\treturn caddy.APIError{\n\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\tErr:        fmt.Errorf(\"failed to get root and intermediate cert for CA %s: %v\", ca.ID, err),\n\t\t}\n\t}\n\n\tw.Header().Set(\"Content-Type\", \"application/pem-certificate-chain\")\n\t_, err = w.Write(interCert) //nolint:gosec // false positive... no XSS in a PEM for cryin' out loud\n\tif err == nil {\n\t\t_, _ = w.Write(rootCert) //nolint:gosec // false positive... 
no XSS in a PEM for cryin' out loud\n\t}\n\n\treturn nil\n}\n\nfunc (a *adminAPI) getCAFromAPIRequestPath(r *http.Request) (*CA, error) {\n\t// Grab the CA ID from the request path; it should be the 4th segment (/pki/ca/<ca>)\n\tid := strings.Split(r.URL.Path, \"/\")[3]\n\tif id == \"\" {\n\t\treturn nil, caddy.APIError{\n\t\t\tHTTPStatus: http.StatusBadRequest,\n\t\t\tErr:        fmt.Errorf(\"missing CA in path\"),\n\t\t}\n\t}\n\n\t// Find the CA by ID, if PKI is configured\n\tvar ca *CA\n\tvar ok bool\n\tif a.pkiApp != nil {\n\t\tca, ok = a.pkiApp.CAs[id]\n\t}\n\n\t// If we didn't find the CA (or PKI is not configured),\n\t// we error out unless the requested ID is the default CA.\n\t// The default CA is provisioned on demand, because a user\n\t// requesting the local CA ID probably intends to change\n\t// their config to enable PKI immediately after.\n\tif !ok {\n\t\tif id != DefaultCAID {\n\t\t\treturn nil, caddy.APIError{\n\t\t\t\tHTTPStatus: http.StatusNotFound,\n\t\t\t\tErr:        fmt.Errorf(\"no certificate authority configured with id: %s\", id),\n\t\t\t}\n\t\t}\n\n\t\t// Provision the default CA, which generates and stores a root\n\t\t// certificate in storage, if one doesn't already exist.\n\t\tca = new(CA)\n\t\terr := ca.Provision(a.ctx, id, a.log)\n\t\tif err != nil {\n\t\t\treturn nil, caddy.APIError{\n\t\t\t\tHTTPStatus: http.StatusInternalServerError,\n\t\t\t\tErr:        fmt.Errorf(\"failed to provision CA %s: %w\", id, err),\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ca, nil\n}\n\nfunc rootAndIntermediatePEM(ca *CA) (root, inter []byte, err error) {\n\troot, err = pemEncodeCert(ca.RootCertificate().Raw)\n\tif err != nil {\n\t\treturn root, inter, err\n\t}\n\n\tfor _, interCert := range ca.IntermediateCertificateChain() {\n\t\tpemBytes, err := pemEncodeCert(interCert.Raw)\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\tinter = append(inter, pemBytes...)\n\t}\n\n\treturn\n}\n\n// caInfo is the response structure for the CA info API endpoint.\ntype caInfo struct {\n\tID               string `json:\"id\"`\n\tName             string `json:\"name\"`\n\tRootCN           string `json:\"root_common_name\"`\n\tIntermediateCN   string `json:\"intermediate_common_name\"`\n\tRootCert         string `json:\"root_certificate\"`\n\tIntermediateCert string `json:\"intermediate_certificate\"`\n}\n\n// adminPKIEndpointBase is the base admin endpoint under which all PKI admin endpoints exist.\nconst adminPKIEndpointBase = \"/pki/\"\n\n// Interface guards\nvar (\n\t_ caddy.AdminRouter = (*adminAPI)(nil)\n\t_ caddy.Provisioner = (*adminAPI)(nil)\n)\n"
  },
  {
    "path": "modules/caddypki/ca.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"crypto\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/fs\"\n\t\"path\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/smallstep/certificates/authority\"\n\t\"github.com/smallstep/certificates/db\"\n\t\"github.com/smallstep/truststore\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// CA describes a certificate authority, which consists of\n// root/signing certificates and various settings pertaining\n// to the issuance of certificates and trusting them.\ntype CA struct {\n\t// The user-facing name of the certificate authority.\n\tName string `json:\"name,omitempty\"`\n\n\t// The name to put in the CommonName field of the\n\t// root certificate.\n\tRootCommonName string `json:\"root_common_name,omitempty\"`\n\n\t// The name to put in the CommonName field of the\n\t// intermediate certificates.\n\tIntermediateCommonName string `json:\"intermediate_common_name,omitempty\"`\n\n\t// The lifetime for the intermediate certificates\n\tIntermediateLifetime caddy.Duration `json:\"intermediate_lifetime,omitempty\"`\n\n\t// Whether Caddy will attempt to install the CA's root\n\t// into the system trust store, as well as into Java\n\t// and Mozilla Firefox trust stores. 
Default: true.\n\tInstallTrust *bool `json:\"install_trust,omitempty\"`\n\n\t// The root certificate to use; if null, one will be generated.\n\tRoot *KeyPair `json:\"root,omitempty\"`\n\n\t// The intermediate (signing) certificate; if null, one will be generated.\n\tIntermediate *KeyPair `json:\"intermediate,omitempty\"`\n\n\t// How often to check if intermediate (and root, when applicable) certificates need renewal.\n\t// Default: 10m.\n\tMaintenanceInterval caddy.Duration `json:\"maintenance_interval,omitempty\"`\n\n\t// The fraction of a certificate's lifetime (0.0–1.0) that forms the renewal window\n\t// at the end of its validity period. For example, 0.2 means renew when 20% of the\n\t// lifetime remains (e.g. ~73 days for a 1-year cert).\n\t// Default: 0.2.\n\tRenewalWindowRatio float64 `json:\"renewal_window_ratio,omitempty\"`\n\n\t// Optionally configure a separate storage module associated with this\n\t// issuer, instead of using Caddy's global/default-configured storage.\n\t// This can be useful if you want to keep your signing keys in a\n\t// separate location from your leaf certificates.\n\tStorageRaw json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n\n\t// The unique config-facing ID of the certificate authority.\n\t// Since the ID is set in JSON config via object key, this\n\t// field is exported only for purposes of config generation\n\t// and module provisioning.\n\tID string `json:\"-\"`\n\n\tstorage    certmagic.Storage\n\troot       *x509.Certificate\n\tinterChain []*x509.Certificate\n\tinterKey   crypto.Signer\n\tmu         *sync.RWMutex\n\n\trootCertPath string // mainly used for logging purposes if trusting\n\tlog          *zap.Logger\n\tctx          caddy.Context\n}\n\n// Provision sets up the CA.\nfunc (ca *CA) Provision(ctx caddy.Context, id string, log *zap.Logger) error {\n\tca.mu = new(sync.RWMutex)\n\tca.log = log.Named(\"ca.\" + id)\n\tca.ctx = ctx\n\n\tif id == \"\" {\n\t\treturn fmt.Errorf(\"CA ID is required (use 'local' for\u0020
the default CA)\")\n\t}\n\tca.mu.Lock()\n\tca.ID = id\n\tca.mu.Unlock()\n\n\tif ca.StorageRaw != nil {\n\t\tval, err := ctx.LoadModule(ca, \"StorageRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading storage module: %v\", err)\n\t\t}\n\t\tcmStorage, err := val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating storage configuration: %v\", err)\n\t\t}\n\t\tca.storage = cmStorage\n\t}\n\tif ca.storage == nil {\n\t\tca.storage = ctx.Storage()\n\t}\n\n\tif ca.Name == \"\" {\n\t\tca.Name = defaultCAName\n\t}\n\tif ca.RootCommonName == \"\" {\n\t\tca.RootCommonName = defaultRootCommonName\n\t}\n\tif ca.IntermediateCommonName == \"\" {\n\t\tca.IntermediateCommonName = defaultIntermediateCommonName\n\t}\n\tif ca.IntermediateLifetime == 0 {\n\t\tca.IntermediateLifetime = caddy.Duration(defaultIntermediateLifetime)\n\t}\n\tif ca.MaintenanceInterval == 0 {\n\t\tca.MaintenanceInterval = caddy.Duration(defaultMaintenanceInterval)\n\t}\n\tif ca.RenewalWindowRatio <= 0 || ca.RenewalWindowRatio > 1 {\n\t\tca.RenewalWindowRatio = defaultRenewalWindowRatio\n\t}\n\n\t// load the certs and key that will be used for signing\n\tvar rootCert *x509.Certificate\n\tvar rootCertChain, interCertChain []*x509.Certificate\n\tvar rootKey, interKey crypto.Signer\n\tvar err error\n\tif ca.Root != nil {\n\t\tif ca.Root.Format == \"\" || ca.Root.Format == \"pem_file\" {\n\t\t\tca.rootCertPath = ca.Root.Certificate\n\t\t}\n\t\trootCertChain, rootKey, err = ca.Root.Load()\n\t\trootCert = rootCertChain[0]\n\t} else {\n\t\tca.rootCertPath = \"storage:\" + ca.storageKeyRootCert()\n\t\trootCert, rootKey, err = ca.loadOrGenRoot()\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif ca.Intermediate != nil {\n\t\tinterCertChain, interKey, err = ca.Intermediate.Load()\n\t} else {\n\t\tactualRootLifetime := time.Until(rootCert.NotAfter)\n\t\tif time.Duration(ca.IntermediateLifetime) >= actualRootLifetime {\n\t\t\treturn fmt.Errorf(\"intermediate 
certificate lifetime must be less than actual root certificate lifetime (%s)\", actualRootLifetime)\n\t\t}\n\n\t\tinterCertChain, interKey, err = ca.loadOrGenIntermediate(rootCert, rootKey)\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tca.mu.Lock()\n\tca.root, ca.interChain, ca.interKey = rootCert, interCertChain, interKey\n\tca.mu.Unlock()\n\n\treturn nil\n}\n\n// RootCertificate returns the CA's root certificate (public key).\nfunc (ca CA) RootCertificate() *x509.Certificate {\n\tca.mu.RLock()\n\tdefer ca.mu.RUnlock()\n\treturn ca.root\n}\n\n// RootKey returns the CA's root private key. Since the root key is\n// not cached in memory long-term, it needs to be loaded from storage,\n// which could yield an error.\nfunc (ca CA) RootKey() (crypto.Signer, error) {\n\t_, rootKey, err := ca.loadOrGenRoot()\n\treturn rootKey, err\n}\n\n// IntermediateCertificateChain returns the CA's intermediate\n// certificate chain.\nfunc (ca CA) IntermediateCertificateChain() []*x509.Certificate {\n\tca.mu.RLock()\n\tdefer ca.mu.RUnlock()\n\treturn ca.interChain\n}\n\n// IntermediateKey returns the CA's intermediate private key.\nfunc (ca CA) IntermediateKey() crypto.Signer {\n\tca.mu.RLock()\n\tdefer ca.mu.RUnlock()\n\treturn ca.interKey\n}\n\n// NewAuthority returns a new Smallstep-powered signing authority for this CA.\n// Note that we receive *CA (a pointer) in this method to ensure the closure within it, which\n// executes at a later time, always has the only copy of the CA so it can access the latest,\n// renewed certificates since NewAuthority was called. 
See #4517 and #4669.\nfunc (ca *CA) NewAuthority(authorityConfig AuthorityConfig) (*authority.Authority, error) {\n\t// get the root certificate and the issuer cert+key\n\trootCert := ca.RootCertificate()\n\n\t// set up the signer; cert/key which signs the leaf certs\n\tvar signerOption authority.Option\n\tif authorityConfig.SignWithRoot {\n\t\t// if we're signing with root, we can just pass the\n\t\t// cert/key directly, since it's unlikely to expire\n\t\t// while Caddy is running (long lifetime)\n\t\tvar issuerCert *x509.Certificate\n\t\tvar issuerKey crypto.Signer\n\t\tissuerCert = rootCert\n\t\tvar err error\n\t\tissuerKey, err = ca.RootKey()\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"loading signing key: %v\", err)\n\t\t}\n\t\tsignerOption = authority.WithX509Signer(issuerCert, issuerKey)\n\t} else {\n\t\t// if we're signing with intermediate, we need to make\n\t\t// sure it's always fresh, because the intermediate may\n\t\t// renew while Caddy is running (medium lifetime)\n\t\tsignerOption = authority.WithX509SignerFunc(func() ([]*x509.Certificate, crypto.Signer, error) {\n\t\t\tissuerChain := ca.IntermediateCertificateChain()\n\t\t\tissuerCert := issuerChain[0]\n\t\t\tissuerKey := ca.IntermediateKey()\n\t\t\tca.log.Debug(\"using intermediate signer\",\n\t\t\t\tzap.String(\"serial\", issuerCert.SerialNumber.String()),\n\t\t\t\tzap.String(\"not_before\", issuerCert.NotBefore.String()),\n\t\t\t\tzap.String(\"not_after\", issuerCert.NotAfter.String()))\n\t\t\treturn issuerChain, issuerKey, nil\n\t\t})\n\t}\n\n\topts := []authority.Option{\n\t\tauthority.WithConfig(&authority.Config{\n\t\t\tAuthorityConfig: authorityConfig.AuthConfig,\n\t\t}),\n\t\tsignerOption,\n\t\tauthority.WithX509RootCerts(rootCert),\n\t}\n\n\t// Add a database if we have one\n\tif authorityConfig.DB != nil {\n\t\topts = append(opts, authority.WithDatabase(*authorityConfig.DB))\n\t}\n\tauth, err := authority.NewEmbedded(opts...)\n\tif err != nil {\n\t\treturn nil, 
fmt.Errorf(\"initializing certificate authority: %v\", err)\n\t}\n\n\treturn auth, nil\n}\n\nfunc (ca CA) loadOrGenRoot() (rootCert *x509.Certificate, rootKey crypto.Signer, err error) {\n\tif ca.Root != nil {\n\t\trootChain, rootSigner, err := ca.Root.Load()\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\treturn rootChain[0], rootSigner, nil\n\t}\n\trootCertPEM, err := ca.storage.Load(ca.ctx, ca.storageKeyRootCert())\n\tif err != nil {\n\t\tif !errors.Is(err, fs.ErrNotExist) {\n\t\t\treturn nil, nil, fmt.Errorf(\"loading root cert: %v\", err)\n\t\t}\n\n\t\t// TODO: should we require that all or none of the assets are required before overwriting anything?\n\t\trootCert, rootKey, err = ca.genRoot()\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"generating root: %v\", err)\n\t\t}\n\t}\n\n\tif rootCert == nil {\n\t\trootCert, err = pemDecodeCertificate(rootCertPEM)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"parsing root certificate PEM: %v\", err)\n\t\t}\n\t}\n\tif rootKey == nil {\n\t\trootKeyPEM, err := ca.storage.Load(ca.ctx, ca.storageKeyRootKey())\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"loading root key: %v\", err)\n\t\t}\n\t\trootKey, err = certmagic.PEMDecodePrivateKey(rootKeyPEM)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"decoding root key: %v\", err)\n\t\t}\n\t}\n\n\treturn rootCert, rootKey, nil\n}\n\nfunc (ca CA) genRoot() (rootCert *x509.Certificate, rootKey crypto.Signer, err error) {\n\trepl := ca.newReplacer()\n\n\trootCert, rootKey, err = generateRoot(repl.ReplaceAll(ca.RootCommonName, \"\"))\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"generating CA root: %v\", err)\n\t}\n\trootCertPEM, err := pemEncodeCert(rootCert.Raw)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"encoding root certificate: %v\", err)\n\t}\n\terr = ca.storage.Store(ca.ctx, ca.storageKeyRootCert(), rootCertPEM)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"saving root certificate: %v\", 
err)\n\t}\n\trootKeyPEM, err := certmagic.PEMEncodePrivateKey(rootKey)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"encoding root key: %v\", err)\n\t}\n\terr = ca.storage.Store(ca.ctx, ca.storageKeyRootKey(), rootKeyPEM)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"saving root key: %v\", err)\n\t}\n\n\treturn rootCert, rootKey, nil\n}\n\nfunc (ca CA) loadOrGenIntermediate(rootCert *x509.Certificate, rootKey crypto.Signer) (interCertChain []*x509.Certificate, interKey crypto.Signer, err error) {\n\tvar interCert *x509.Certificate\n\tinterCertPEM, err := ca.storage.Load(ca.ctx, ca.storageKeyIntermediateCert())\n\tif err != nil {\n\t\tif !errors.Is(err, fs.ErrNotExist) {\n\t\t\treturn nil, nil, fmt.Errorf(\"loading intermediate cert: %v\", err)\n\t\t}\n\n\t\t// TODO: should we require that all or none of the assets are required before overwriting anything?\n\t\tinterCert, interKey, err = ca.genIntermediate(rootCert, rootKey)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"generating new intermediate cert: %v\", err)\n\t\t}\n\n\t\tinterCertChain = append(interCertChain, interCert)\n\t}\n\n\tif len(interCertChain) == 0 {\n\t\tinterCertChain, err = pemDecodeCertificateChain(interCertPEM)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"decoding intermediate certificate PEM: %v\", err)\n\t\t}\n\t}\n\n\tif interKey == nil {\n\t\tinterKeyPEM, err := ca.storage.Load(ca.ctx, ca.storageKeyIntermediateKey())\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"loading intermediate key: %v\", err)\n\t\t}\n\t\tinterKey, err = certmagic.PEMDecodePrivateKey(interKeyPEM)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"decoding intermediate key: %v\", err)\n\t\t}\n\t}\n\n\treturn interCertChain, interKey, nil\n}\n\nfunc (ca CA) genIntermediate(rootCert *x509.Certificate, rootKey crypto.Signer) (interCert *x509.Certificate, interKey crypto.Signer, err error) {\n\trepl := ca.newReplacer()\n\n\tinterCert, interKey, err = 
generateIntermediate(repl.ReplaceAll(ca.IntermediateCommonName, \"\"), rootCert, rootKey, time.Duration(ca.IntermediateLifetime))\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"generating CA intermediate: %v\", err)\n\t}\n\tinterCertPEM, err := pemEncodeCert(interCert.Raw)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"encoding intermediate certificate: %v\", err)\n\t}\n\terr = ca.storage.Store(ca.ctx, ca.storageKeyIntermediateCert(), interCertPEM)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"saving intermediate certificate: %v\", err)\n\t}\n\tinterKeyPEM, err := certmagic.PEMEncodePrivateKey(interKey)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"encoding intermediate key: %v\", err)\n\t}\n\terr = ca.storage.Store(ca.ctx, ca.storageKeyIntermediateKey(), interKeyPEM)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"saving intermediate key: %v\", err)\n\t}\n\n\treturn interCert, interKey, nil\n}\n\nfunc (ca CA) storageKeyCAPrefix() string {\n\treturn path.Join(\"pki\", \"authorities\", certmagic.StorageKeys.Safe(ca.ID))\n}\n\nfunc (ca CA) storageKeyRootCert() string {\n\treturn path.Join(ca.storageKeyCAPrefix(), \"root.crt\")\n}\n\nfunc (ca CA) storageKeyRootKey() string {\n\treturn path.Join(ca.storageKeyCAPrefix(), \"root.key\")\n}\n\nfunc (ca CA) storageKeyIntermediateCert() string {\n\treturn path.Join(ca.storageKeyCAPrefix(), \"intermediate.crt\")\n}\n\nfunc (ca CA) storageKeyIntermediateKey() string {\n\treturn path.Join(ca.storageKeyCAPrefix(), \"intermediate.key\")\n}\n\nfunc (ca CA) newReplacer() *caddy.Replacer {\n\trepl := caddy.NewReplacer()\n\trepl.Set(\"pki.ca.name\", ca.Name)\n\treturn repl\n}\n\n// installRoot installs this CA's root certificate into the\n// local trust store(s) if it is not already trusted. 
The CA\n// must already be provisioned.\nfunc (ca CA) installRoot() error {\n\t// avoid password prompt if already trusted\n\tif trusted(ca.root) {\n\t\tca.log.Info(\"root certificate is already trusted by system\",\n\t\t\tzap.String(\"path\", ca.rootCertPath))\n\t\treturn nil\n\t}\n\n\tca.log.Warn(\"installing root certificate (you might be prompted for password)\",\n\t\tzap.String(\"path\", ca.rootCertPath))\n\n\treturn truststore.Install(ca.root,\n\t\ttruststore.WithDebug(),\n\t\ttruststore.WithFirefox(),\n\t\ttruststore.WithJava(),\n\t)\n}\n\n// AuthorityConfig is used to help a CA configure\n// the underlying signing authority.\ntype AuthorityConfig struct {\n\tSignWithRoot bool\n\n\t// TODO: should we just embed the underlying authority.Config struct type?\n\tDB         *db.AuthDB\n\tAuthConfig *authority.AuthConfig\n}\n\nconst (\n\t// DefaultCAID is the default CA ID.\n\tDefaultCAID = \"local\"\n\n\tdefaultCAName                 = \"Caddy Local Authority\"\n\tdefaultRootCommonName         = \"{pki.ca.name} - {time.now.year} ECC Root\"\n\tdefaultIntermediateCommonName = \"{pki.ca.name} - ECC Intermediate\"\n\n\tdefaultRootLifetime         = 24 * time.Hour * 30 * 12 * 10\n\tdefaultIntermediateLifetime = 24 * time.Hour * 7\n\tdefaultMaintenanceInterval  = 10 * time.Minute\n\tdefaultRenewalWindowRatio   = 0.2\n)\n"
  },
  {
    "path": "modules/caddypki/certificates.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"crypto\"\n\t\"crypto/x509\"\n\t\"time\"\n\n\t\"go.step.sm/crypto/keyutil\"\n\t\"go.step.sm/crypto/x509util\"\n)\n\nfunc generateRoot(commonName string) (*x509.Certificate, crypto.Signer, error) {\n\ttemplate, signer, err := newCert(commonName, x509util.DefaultRootTemplate, defaultRootLifetime)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\troot, err := x509util.CreateCertificate(template, template, signer.Public(), signer)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\treturn root, signer, nil\n}\n\nfunc generateIntermediate(commonName string, rootCrt *x509.Certificate, rootKey crypto.Signer, lifetime time.Duration) (*x509.Certificate, crypto.Signer, error) {\n\ttemplate, signer, err := newCert(commonName, x509util.DefaultIntermediateTemplate, lifetime)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tintermediate, err := x509util.CreateCertificate(template, rootCrt, signer.Public(), rootKey)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\treturn intermediate, signer, nil\n}\n\nfunc newCert(commonName, templateName string, lifetime time.Duration) (cert *x509.Certificate, signer crypto.Signer, err error) {\n\tsigner, err = keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tcsr, err := x509util.CreateCertificateRequest(commonName, []string{}, 
signer)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\ttemplate, err := x509util.NewCertificate(csr, x509util.WithTemplate(templateName, x509util.CreateTemplateData(commonName, []string{})))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tcert = template.GetCertificate()\n\tcert.NotBefore = time.Now().Truncate(time.Second)\n\tcert.NotAfter = cert.NotBefore.Add(lifetime)\n\treturn cert, signer, nil\n}\n"
  },
  {
    "path": "modules/caddypki/command.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\n\t\"github.com/smallstep/truststore\"\n\t\"github.com/spf13/cobra\"\n\n\tcaddycmd \"github.com/caddyserver/caddy/v2/cmd\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddycmd.RegisterCommand(caddycmd.Command{\n\t\tName:  \"trust\",\n\t\tUsage: \"[--ca <id>] [--address <listen>] [--config <path> [--adapter <name>]]\",\n\t\tShort: \"Installs a CA certificate into local trust stores\",\n\t\tLong: `\nAdds a root certificate into the local trust stores.\n\nCaddy will attempt to install its root certificates into the local\ntrust stores automatically when they are first generated, but it\nmight fail if Caddy doesn't have the appropriate permissions to\nwrite to the trust store. This command is necessary to pre-install\nthe certificates before using them, if the server process runs as an\nunprivileged user (such as via systemd).\n\nBy default, this command installs the root certificate for Caddy's\ndefault CA (i.e. 'local'). You may specify the ID of another CA\nwith the --ca flag.\n\nThis command will attempt to connect to Caddy's admin API running at\n'` + caddy.DefaultAdminListen + `' to fetch the root certificate. 
You may\nexplicitly specify the --address, or use the --config flag to load\nthe admin address from your config, if not using the default.`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"ca\", \"\", \"\", \"The ID of the CA to trust (defaults to 'local')\")\n\t\t\tcmd.Flags().StringP(\"address\", \"\", \"\", \"Address of the administration API listener (if --config is not used)\")\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file (if --address is not used)\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter to apply (if --config is used)\")\n\t\t\tcmd.RunE = caddycmd.WrapCommandFuncForCobra(cmdTrust)\n\t\t},\n\t})\n\n\tcaddycmd.RegisterCommand(caddycmd.Command{\n\t\tName:  \"untrust\",\n\t\tUsage: \"[--cert <path>] | [[--ca <id>] [--address <listen>] [--config <path> [--adapter <name>]]]\",\n\t\tShort: \"Untrusts a locally-trusted CA certificate\",\n\t\tLong: `\nUntrusts a root certificate from the local trust store(s).\n\nThis command uninstalls trust; it does not necessarily delete the\nroot certificate from trust stores entirely. Thus, repeatedly\ntrusting and untrusting new certificates can fill up trust databases.\n\nThis command does not delete or modify certificate files from Caddy's\nconfigured storage.\n\nThis command can be used in one of two ways. Either by specifying\nwhich certificate to untrust by a direct path to the certificate\nfile with the --cert flag, or by fetching the root certificate for\nthe CA from the admin API (default behaviour).\n\nIf the admin API is used, then the CA defaults to 'local'. You may\nspecify the ID of another CA with the --ca flag. 
By default, this\nwill attempt to connect to Caddy's admin API running at\n'` + caddy.DefaultAdminListen + `' to fetch the root certificate.\nYou may explicitly specify the --address, or use the --config flag\nto load the admin address from your config, if not using the default.`,\n\t\tCobraFunc: func(cmd *cobra.Command) {\n\t\t\tcmd.Flags().StringP(\"cert\", \"p\", \"\", \"The path to the CA certificate to untrust\")\n\t\t\tcmd.Flags().StringP(\"ca\", \"\", \"\", \"The ID of the CA to untrust (defaults to 'local')\")\n\t\t\tcmd.Flags().StringP(\"address\", \"\", \"\", \"Address of the administration API listener (if --config is not used)\")\n\t\t\tcmd.Flags().StringP(\"config\", \"c\", \"\", \"Configuration file (if --address is not used)\")\n\t\t\tcmd.Flags().StringP(\"adapter\", \"a\", \"\", \"Name of config adapter to apply (if --config is used)\")\n\t\t\tcmd.RunE = caddycmd.WrapCommandFuncForCobra(cmdUntrust)\n\t\t},\n\t})\n}\n\nfunc cmdTrust(fl caddycmd.Flags) (int, error) {\n\tcaID := fl.String(\"ca\")\n\taddrFlag := fl.String(\"address\")\n\tconfigFlag := fl.String(\"config\")\n\tconfigAdapterFlag := fl.String(\"adapter\")\n\n\t// Prepare the URI to the admin endpoint\n\tif caID == \"\" {\n\t\tcaID = DefaultCAID\n\t}\n\n\t// Determine where we're sending the request to get the CA info\n\tadminAddr, err := caddycmd.DetermineAdminAPIAddress(addrFlag, nil, configFlag, configAdapterFlag)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"couldn't determine admin API address: %v\", err)\n\t}\n\n\t// Fetch the root cert from the admin API\n\trootCert, err := rootCertFromAdmin(adminAddr, caID)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// Set up the CA struct; we only need to fill in the root\n\t// because we're only using it to make use of the installRoot()\n\t// function. 
Also needs a logger for warnings, and a \"cert path\"\n\t// for the root cert; since we're loading from the API and we\n\t// don't know the actual storage path via this flow, we'll just\n\t// pass through the admin API address instead.\n\tca := CA{\n\t\tlog:          caddy.Log(),\n\t\troot:         rootCert,\n\t\trootCertPath: adminAddr + path.Join(adminPKIEndpointBase, \"ca\", caID),\n\t}\n\n\t// Install the cert!\n\terr = ca.installRoot()\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\nfunc cmdUntrust(fl caddycmd.Flags) (int, error) {\n\tcertFile := fl.String(\"cert\")\n\tcaID := fl.String(\"ca\")\n\taddrFlag := fl.String(\"address\")\n\tconfigFlag := fl.String(\"config\")\n\tconfigAdapterFlag := fl.String(\"adapter\")\n\n\tif certFile != \"\" && (caID != \"\" || addrFlag != \"\" || configFlag != \"\") {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"conflicting command line arguments, cannot use --cert with other flags\")\n\t}\n\n\t// If a file was specified, try to uninstall the cert matching that file\n\tif certFile != \"\" {\n\t\t// Sanity check, make sure cert file exists first\n\t\t_, err := os.Stat(certFile)\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"accessing certificate file: %v\", err)\n\t\t}\n\n\t\t// Uninstall the file!\n\t\terr = truststore.UninstallFile(certFile,\n\t\t\ttruststore.WithDebug(),\n\t\t\ttruststore.WithFirefox(),\n\t\t\ttruststore.WithJava())\n\t\tif err != nil {\n\t\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"failed to uninstall certificate file: %v\", err)\n\t\t}\n\n\t\treturn caddy.ExitCodeSuccess, nil\n\t}\n\n\t// Prepare the URI to the admin endpoint\n\tif caID == \"\" {\n\t\tcaID = DefaultCAID\n\t}\n\n\t// Determine where we're sending the request to get the CA info\n\tadminAddr, err := caddycmd.DetermineAdminAPIAddress(addrFlag, nil, configFlag, configAdapterFlag)\n\tif err != nil {\n\t\treturn 
caddy.ExitCodeFailedStartup, fmt.Errorf(\"couldn't determine admin API address: %v\", err)\n\t}\n\n\t// Fetch the root cert from the admin API\n\trootCert, err := rootCertFromAdmin(adminAddr, caID)\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, err\n\t}\n\n\t// Uninstall the cert!\n\terr = truststore.Uninstall(rootCert,\n\t\ttruststore.WithDebug(),\n\t\ttruststore.WithFirefox(),\n\t\ttruststore.WithJava())\n\tif err != nil {\n\t\treturn caddy.ExitCodeFailedStartup, fmt.Errorf(\"failed to uninstall certificate: %v\", err)\n\t}\n\n\treturn caddy.ExitCodeSuccess, nil\n}\n\n// rootCertFromAdmin makes the API request to fetch the root certificate for the named CA via admin API.\nfunc rootCertFromAdmin(adminAddr string, caID string) (*x509.Certificate, error) {\n\turi := path.Join(adminPKIEndpointBase, \"ca\", caID)\n\n\t// Make the request to fetch the CA info\n\tresp, err := caddycmd.AdminAPIRequest(adminAddr, http.MethodGet, uri, make(http.Header), nil)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"requesting CA info: %v\", err)\n\t}\n\tdefer resp.Body.Close()\n\n\t// Decode the response\n\tcaInfo := new(caInfo)\n\terr = json.NewDecoder(resp.Body).Decode(caInfo)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to decode JSON response: %v\", err)\n\t}\n\n\t// Decode the root cert; pem.Decode does not return an error,\n\t// so a nil block is the only failure signal here\n\trootBlock, _ := pem.Decode([]byte(caInfo.RootCert))\n\tif rootBlock == nil {\n\t\treturn nil, fmt.Errorf(\"failed to decode root certificate PEM block\")\n\t}\n\trootCert, err := x509.ParseCertificate(rootBlock.Bytes)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to parse root certificate: %v\", err)\n\t}\n\n\treturn rootCert, nil\n}\n"
  },
  {
    "path": "modules/caddypki/crypto.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"bytes\"\n\t\"crypto\"\n\t\"crypto/ecdsa\"\n\t\"crypto/ed25519\"\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.step.sm/crypto/pemutil\"\n)\n\nfunc pemDecodeCertificate(pemDER []byte) (*x509.Certificate, error) {\n\tpemBlock, remaining := pem.Decode(pemDER)\n\tif pemBlock == nil {\n\t\treturn nil, fmt.Errorf(\"no PEM block found\")\n\t}\n\tif len(remaining) > 0 {\n\t\treturn nil, fmt.Errorf(\"input contained more than a single PEM block\")\n\t}\n\tif pemBlock.Type != \"CERTIFICATE\" {\n\t\treturn nil, fmt.Errorf(\"expected PEM block type to be CERTIFICATE, but got '%s'\", pemBlock.Type)\n\t}\n\treturn x509.ParseCertificate(pemBlock.Bytes)\n}\n\nfunc pemDecodeCertificateChain(pemDER []byte) ([]*x509.Certificate, error) {\n\tchain, err := pemutil.ParseCertificateBundle(pemDER)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed parsing certificate chain: %w\", err)\n\t}\n\n\treturn chain, nil\n}\n\nfunc pemEncodeCert(der []byte) ([]byte, error) {\n\treturn pemEncode(\"CERTIFICATE\", der)\n}\n\nfunc pemEncode(blockType string, b []byte) ([]byte, error) {\n\tvar buf bytes.Buffer\n\terr := pem.Encode(&buf, &pem.Block{Type: blockType, Bytes: b})\n\treturn buf.Bytes(), err\n}\n\nfunc trusted(cert 
*x509.Certificate) bool {\n\tchains, err := cert.Verify(x509.VerifyOptions{})\n\treturn len(chains) > 0 && err == nil\n}\n\n// KeyPair represents a public-private key pair, where the\n// public key is also called a certificate.\ntype KeyPair struct {\n\t// The certificate. By default, this should be the path to\n\t// a PEM file unless format is something else.\n\tCertificate string `json:\"certificate,omitempty\"`\n\n\t// The private key. By default, this should be the path to\n\t// a PEM file unless format is something else.\n\tPrivateKey string `json:\"private_key,omitempty\"` //nolint:gosec // false positive: yes it's exported, since it needs to encode/decode as JSON; and is often just a filepath\n\n\t// The format in which the certificate and private\n\t// key are provided. Default: pem_file\n\tFormat string `json:\"format,omitempty\"`\n}\n\n// Load loads the certificate chain and (optional) private key from\n// the corresponding files, using the configured format. If a\n// private key is read, it will be verified to belong to the first\n// certificate in the chain.\nfunc (kp KeyPair) Load() ([]*x509.Certificate, crypto.Signer, error) {\n\tswitch kp.Format {\n\tcase \"\", \"pem_file\":\n\t\tcertData, err := os.ReadFile(kp.Certificate)\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\tchain, err := pemDecodeCertificateChain(certData)\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\tvar key crypto.Signer\n\t\tif kp.PrivateKey != \"\" {\n\t\t\tkeyData, err := os.ReadFile(kp.PrivateKey)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t\tkey, err = certmagic.PEMDecodePrivateKey(keyData)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t\tif err := verifyKeysMatch(chain[0], key); err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t}\n\n\t\treturn chain, key, nil\n\n\tdefault:\n\t\treturn nil, nil, fmt.Errorf(\"unsupported format: %s\", kp.Format)\n\t}\n}\n\n// verifyKeysMatch verifies that the public 
key in the [x509.Certificate] matches\n// the public key of the [crypto.Signer].\nfunc verifyKeysMatch(crt *x509.Certificate, signer crypto.Signer) error {\n\tswitch pub := crt.PublicKey.(type) {\n\tcase *rsa.PublicKey:\n\t\tpk, ok := signer.Public().(*rsa.PublicKey)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"private key type %T does not match issuer public key type %T\", signer.Public(), pub)\n\t\t}\n\t\tif !pub.Equal(pk) {\n\t\t\treturn errors.New(\"private key does not match issuer public key\")\n\t\t}\n\tcase *ecdsa.PublicKey:\n\t\tpk, ok := signer.Public().(*ecdsa.PublicKey)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"private key type %T does not match issuer public key type %T\", signer.Public(), pub)\n\t\t}\n\t\tif !pub.Equal(pk) {\n\t\t\treturn errors.New(\"private key does not match issuer public key\")\n\t\t}\n\tcase ed25519.PublicKey:\n\t\tpk, ok := signer.Public().(ed25519.PublicKey)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"private key type %T does not match issuer public key type %T\", signer.Public(), pub)\n\t\t}\n\t\tif !pub.Equal(pk) {\n\t\t\treturn errors.New(\"private key does not match issuer public key\")\n\t\t}\n\tdefault:\n\t\treturn fmt.Errorf(\"unsupported key type: %T\", pub)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "modules/caddypki/crypto_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"crypto/rand\"\n\t\"crypto/x509\"\n\t\"crypto/x509/pkix\"\n\t\"encoding/pem\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\t\"time\"\n\n\t\"go.step.sm/crypto/keyutil\"\n\t\"go.step.sm/crypto/pemutil\"\n)\n\nfunc TestKeyPair_Load(t *testing.T) {\n\trootSigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Failed creating signer: %v\", err)\n\t}\n\n\ttmpl := &x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: \"test-root\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 3,\n\t}\n\trootBytes, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, rootSigner.Public(), rootSigner)\n\tif err != nil {\n\t\tt.Fatalf(\"Creating root certificate failed: %v\", err)\n\t}\n\n\troot, err := x509.ParseCertificate(rootBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing root certificate failed: %v\", err)\n\t}\n\n\tintermediateSigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Creating intermediate signer failed: %v\", err)\n\t}\n\n\tintermediateBytes, err := x509.CreateCertificate(rand.Reader, &x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: \"test-first-intermediate\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 2,\n\t\tNotAfter:   time.Now().Add(time.Hour),\n\t}, root, intermediateSigner.Public(), rootSigner)\n\tif err != nil 
{\n\t\tt.Fatalf(\"Creating intermediate certificate failed: %v\", err)\n\t}\n\n\tintermediate, err := x509.ParseCertificate(intermediateBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing intermediate certificate failed: %v\", err)\n\t}\n\n\tvar chainContents []byte\n\tchain := []*x509.Certificate{intermediate, root}\n\tfor _, cert := range chain {\n\t\tb, err := pemutil.Serialize(cert)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed serializing intermediate certificate: %v\", err)\n\t\t}\n\t\tchainContents = append(chainContents, pem.EncodeToMemory(b)...)\n\t}\n\n\tdir := t.TempDir()\n\trootCertFile := filepath.Join(dir, \"root.pem\")\n\tif _, err = pemutil.Serialize(root, pemutil.WithFilename(rootCertFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing root certificate: %v\", err)\n\t}\n\trootKeyFile := filepath.Join(dir, \"root.key\")\n\tif _, err = pemutil.Serialize(rootSigner, pemutil.WithFilename(rootKeyFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing root key: %v\", err)\n\t}\n\tintermediateCertFile := filepath.Join(dir, \"intermediate.pem\")\n\tif _, err = pemutil.Serialize(intermediate, pemutil.WithFilename(intermediateCertFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing intermediate certificate: %v\", err)\n\t}\n\tintermediateKeyFile := filepath.Join(dir, \"intermediate.key\")\n\tif _, err = pemutil.Serialize(intermediateSigner, pemutil.WithFilename(intermediateKeyFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing intermediate key: %v\", err)\n\t}\n\tchainFile := filepath.Join(dir, \"chain.pem\")\n\tif err := os.WriteFile(chainFile, chainContents, 0644); err != nil {\n\t\tt.Fatalf(\"Failed writing intermediate chain: %v\", err)\n\t}\n\n\tt.Run(\"ok/single-certificate-without-signer\", func(t *testing.T) {\n\t\tkp := KeyPair{\n\t\t\tCertificate: rootCertFile,\n\t\t}\n\t\tchain, signer, err := kp.Load()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed loading KeyPair: %v\", err)\n\t\t}\n\t\tif len(chain) != 1 {\n\t\t\tt.Errorf(\"Expected 1 
certificate in chain; got %d\", len(chain))\n\t\t}\n\t\tif signer != nil {\n\t\t\tt.Error(\"Expected no signer to be returned\")\n\t\t}\n\t})\n\n\tt.Run(\"ok/single-certificate-with-signer\", func(t *testing.T) {\n\t\tkp := KeyPair{\n\t\t\tCertificate: rootCertFile,\n\t\t\tPrivateKey:  rootKeyFile,\n\t\t}\n\t\tchain, signer, err := kp.Load()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed loading KeyPair: %v\", err)\n\t\t}\n\t\tif len(chain) != 1 {\n\t\t\tt.Errorf(\"Expected 1 certificate in chain; got %d\", len(chain))\n\t\t}\n\t\tif signer == nil {\n\t\t\tt.Error(\"Expected signer to be returned\")\n\t\t}\n\t})\n\n\tt.Run(\"ok/multiple-certificates-with-signer\", func(t *testing.T) {\n\t\tkp := KeyPair{\n\t\t\tCertificate: chainFile,\n\t\t\tPrivateKey:  intermediateKeyFile,\n\t\t}\n\t\tchain, signer, err := kp.Load()\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed loading KeyPair: %v\", err)\n\t\t}\n\t\tif len(chain) != 2 {\n\t\t\tt.Errorf(\"Expected 2 certificates in chain; got %d\", len(chain))\n\t\t}\n\t\tif signer == nil {\n\t\t\tt.Error(\"Expected signer to be returned\")\n\t\t}\n\t})\n\n\tt.Run(\"fail/non-matching-public-key\", func(t *testing.T) {\n\t\tkp := KeyPair{\n\t\t\tCertificate: intermediateCertFile,\n\t\t\tPrivateKey:  rootKeyFile,\n\t\t}\n\t\tchain, signer, err := kp.Load()\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected loading KeyPair to return an error\")\n\t\t}\n\t\tif chain != nil {\n\t\t\tt.Error(\"Expected no chain to be returned\")\n\t\t}\n\t\tif signer != nil {\n\t\t\tt.Error(\"Expected no signer to be returned\")\n\t\t}\n\t})\n}\n\nfunc Test_pemDecodeCertificate(t *testing.T) {\n\tsigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Failed creating signer: %v\", err)\n\t}\n\n\ttmpl := &x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: \"test-cert\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 3,\n\t}\n\tderBytes, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, signer.Public(), signer)\n\tif err != nil 
{\n\t\tt.Fatalf(\"Creating root certificate failed: %v\", err)\n\t}\n\tcert, err := x509.ParseCertificate(derBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing root certificate failed: %v\", err)\n\t}\n\n\tpemBlock, err := pemutil.Serialize(cert)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed serializing certificate: %v\", err)\n\t}\n\tpemData := pem.EncodeToMemory(pemBlock)\n\n\tt.Run(\"ok\", func(t *testing.T) {\n\t\tcert, err := pemDecodeCertificate(pemData)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed decoding PEM data: %v\", err)\n\t\t}\n\t\tif cert == nil {\n\t\t\tt.Errorf(\"Expected a certificate in PEM data\")\n\t\t}\n\t})\n\n\tt.Run(\"fail/no-pem-data\", func(t *testing.T) {\n\t\tcert, err := pemDecodeCertificate(nil)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"Expected pemDecodeCertificate to return an error\")\n\t\t}\n\t\tif cert != nil {\n\t\t\tt.Errorf(\"Expected pemDecodeCertificate to return nil\")\n\t\t}\n\t})\n\n\tt.Run(\"fail/multiple\", func(t *testing.T) {\n\t\tmultiplePEMData := append(pemData, pemData...)\n\t\tcert, err := pemDecodeCertificate(multiplePEMData)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"Expected pemDecodeCertificate to return an error\")\n\t\t}\n\t\tif cert != nil {\n\t\t\tt.Errorf(\"Expected pemDecodeCertificate to return nil\")\n\t\t}\n\t})\n\n\tt.Run(\"fail/no-pem-certificate\", func(t *testing.T) {\n\t\tpkData := pem.EncodeToMemory(&pem.Block{\n\t\t\tType:  \"PRIVATE KEY\",\n\t\t\tBytes: []byte(\"some-bogus-private-key\"),\n\t\t})\n\t\tcert, err := pemDecodeCertificate(pkData)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"Expected pemDecodeCertificate to return an error\")\n\t\t}\n\t\tif cert != nil {\n\t\t\tt.Errorf(\"Expected pemDecodeCertificate to return nil\")\n\t\t}\n\t})\n}\n\nfunc Test_pemDecodeCertificateChain(t *testing.T) {\n\tsigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Failed creating signer: %v\", err)\n\t}\n\n\ttmpl := &x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: 
\"test-cert\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 3,\n\t}\n\tderBytes, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, signer.Public(), signer)\n\tif err != nil {\n\t\tt.Fatalf(\"Creating root certificate failed: %v\", err)\n\t}\n\tcert, err := x509.ParseCertificate(derBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing root certificate failed: %v\", err)\n\t}\n\n\tpemBlock, err := pemutil.Serialize(cert)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed serializing certificate: %v\", err)\n\t}\n\tpemData := pem.EncodeToMemory(pemBlock)\n\n\tt.Run(\"ok/single\", func(t *testing.T) {\n\t\tcerts, err := pemDecodeCertificateChain(pemData)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed decoding PEM data: %v\", err)\n\t\t}\n\t\tif len(certs) != 1 {\n\t\t\tt.Errorf(\"Expected 1 certificate in PEM data; got %d\", len(certs))\n\t\t}\n\t})\n\n\tt.Run(\"ok/multiple\", func(t *testing.T) {\n\t\tmultiplePEMData := append(pemData, pemData...)\n\t\tcerts, err := pemDecodeCertificateChain(multiplePEMData)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed decoding PEM data: %v\", err)\n\t\t}\n\t\tif len(certs) != 2 {\n\t\t\tt.Errorf(\"Expected 2 certificates in PEM data; got %d\", len(certs))\n\t\t}\n\t})\n\n\tt.Run(\"fail/no-pem-certificate\", func(t *testing.T) {\n\t\tpkData := pem.EncodeToMemory(&pem.Block{\n\t\t\tType:  \"PRIVATE KEY\",\n\t\t\tBytes: []byte(\"some-bogus-private-key\"),\n\t\t})\n\t\tcerts, err := pemDecodeCertificateChain(pkData)\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"Expected pemDecodeCertificateChain to return an error\")\n\t\t}\n\t\tif len(certs) != 0 {\n\t\t\tt.Errorf(\"Expected 0 certificates in PEM data; got %d\", len(certs))\n\t\t}\n\t})\n\n\tt.Run(\"fail/no-der-certificate\", func(t *testing.T) {\n\t\tcerts, err := pemDecodeCertificateChain([]byte(\"invalid-der-data\"))\n\t\tif err == nil {\n\t\t\tt.Fatalf(\"Expected pemDecodeCertificateChain to return an error\")\n\t\t}\n\t\tif len(certs) != 0 {\n\t\t\tt.Errorf(\"Expected 0 certificates in PEM data; got 
%d\", len(certs))\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "modules/caddypki/maintain.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"log\"\n\t\"runtime/debug\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n)\n\nfunc (p *PKI) maintenanceForCA(ca *CA) {\n\tdefer func() {\n\t\tif err := recover(); err != nil {\n\t\t\tlog.Printf(\"[PANIC] PKI maintenance for CA %s: %v\\n%s\", ca.ID, err, debug.Stack())\n\t\t}\n\t}()\n\n\tinterval := time.Duration(ca.MaintenanceInterval)\n\tif interval <= 0 {\n\t\tinterval = defaultMaintenanceInterval\n\t}\n\tticker := time.NewTicker(interval)\n\tdefer ticker.Stop()\n\n\tfor {\n\t\tselect {\n\t\tcase <-ticker.C:\n\t\t\t_ = p.renewCertsForCA(ca)\n\t\tcase <-p.ctx.Done():\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc (p *PKI) renewCerts() {\n\tfor _, ca := range p.CAs {\n\t\terr := p.renewCertsForCA(ca)\n\t\tif err != nil {\n\t\t\tp.log.Error(\"renewing intermediate certificates\",\n\t\t\t\tzap.Error(err),\n\t\t\t\tzap.String(\"ca\", ca.ID))\n\t\t}\n\t}\n}\n\nfunc (p *PKI) renewCertsForCA(ca *CA) error {\n\tca.mu.Lock()\n\tdefer ca.mu.Unlock()\n\n\tlog := p.log.With(zap.String(\"ca\", ca.ID))\n\n\t// only maintain the root if it's not manually provided in the config\n\tif ca.Root == nil {\n\t\tif ca.needsRenewal(ca.root) {\n\t\t\t// TODO: implement root renewal (use same key)\n\t\t\tlog.Warn(\"root certificate expiring soon (FIXME: ROOT RENEWAL NOT YET 
IMPLEMENTED)\",\n\t\t\t\tzap.Duration(\"time_remaining\", time.Until(ca.root.NotAfter)),\n\t\t\t)\n\t\t}\n\t}\n\n\t// only maintain the intermediate if it's not manually provided in the config\n\tif ca.Intermediate == nil {\n\t\tif ca.needsRenewal(ca.interChain[0]) {\n\t\t\tlog.Info(\"intermediate expires soon; renewing\",\n\t\t\t\tzap.Duration(\"time_remaining\", time.Until(ca.interChain[0].NotAfter)),\n\t\t\t)\n\n\t\t\trootCert, rootKey, err := ca.loadOrGenRoot()\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading root key: %v\", err)\n\t\t\t}\n\t\t\tinterCert, interKey, err := ca.genIntermediate(rootCert, rootKey)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"generating new certificate: %v\", err)\n\t\t\t}\n\t\t\tca.interChain, ca.interKey = []*x509.Certificate{interCert}, interKey\n\n\t\t\tlog.Info(\"renewed intermediate\",\n\t\t\t\tzap.Time(\"new_expiration\", ca.interChain[0].NotAfter),\n\t\t\t)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// needsRenewal reports whether the certificate is within its renewal window\n// (i.e. the fraction of lifetime remaining is less than or equal to RenewalWindowRatio).\n// Ratios outside the interval (0, 1) fall back to the default ratio.\nfunc (ca *CA) needsRenewal(cert *x509.Certificate) bool {\n\tratio := ca.RenewalWindowRatio\n\tif ratio <= 0 || ratio >= 1 {\n\t\tratio = defaultRenewalWindowRatio\n\t}\n\tlifetime := cert.NotAfter.Sub(cert.NotBefore)\n\trenewalWindow := time.Duration(float64(lifetime) * ratio)\n\trenewalWindowStart := cert.NotAfter.Add(-renewalWindow)\n\treturn time.Now().After(renewalWindowStart)\n}\n"
  },
  {
    "path": "modules/caddypki/maintain_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"crypto/x509\"\n\t\"testing\"\n\t\"time\"\n)\n\nfunc TestCA_needsRenewal(t *testing.T) {\n\tnow := time.Now()\n\n\t// cert with 100 days lifetime; last 20% = 20 days before expiry\n\t// So renewal window starts at (NotAfter - 20 days)\n\tmakeCert := func(daysUntilExpiry int, lifetimeDays int) *x509.Certificate {\n\t\tnotAfter := now.AddDate(0, 0, daysUntilExpiry)\n\t\tnotBefore := notAfter.AddDate(0, 0, -lifetimeDays)\n\t\treturn &x509.Certificate{NotBefore: notBefore, NotAfter: notAfter}\n\t}\n\n\ttests := []struct {\n\t\tname   string\n\t\tca     *CA\n\t\tcert   *x509.Certificate\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\tname:   \"inside renewal window with ratio 0.2\",\n\t\t\tca:     &CA{RenewalWindowRatio: 0.2},\n\t\t\tcert:   makeCert(10, 100),\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tname:   \"outside renewal window with ratio 0.2\",\n\t\t\tca:     &CA{RenewalWindowRatio: 0.2},\n\t\t\tcert:   makeCert(50, 100),\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tname:   \"outside renewal window with 21 days left\",\n\t\t\tca:     &CA{RenewalWindowRatio: 0.2},\n\t\t\tcert:   makeCert(21, 100),\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tname:   \"just inside renewal window with ratio 0.5\",\n\t\t\tca:     &CA{RenewalWindowRatio: 0.5},\n\t\t\tcert:   makeCert(30, 100),\n\t\t\texpect: 
true,\n\t\t},\n\t\t{\n\t\t\tname:   \"zero ratio uses default\",\n\t\t\tca:     &CA{RenewalWindowRatio: 0},\n\t\t\tcert:   makeCert(10, 100),\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tname:   \"invalid ratio uses default\",\n\t\t\tca:     &CA{RenewalWindowRatio: 1.5},\n\t\t\tcert:   makeCert(10, 100),\n\t\t\texpect: true,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := tt.ca.needsRenewal(tt.cert)\n\t\t\tif got != tt.expect {\n\t\t\t\tt.Errorf(\"needsRenewal() = %v, want %v\", got, tt.expect)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddypki/pki.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddypki\n\nimport (\n\t\"fmt\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(PKI{})\n}\n\n// PKI provides Public Key Infrastructure facilities for Caddy.\n//\n// This app can define certificate authorities (CAs) which are capable\n// of signing certificates. Other modules can be configured to use\n// the CAs defined by this app for issuing certificates or getting\n// key information needed for establishing trust.\ntype PKI struct {\n\t// The certificate authorities to manage. 
Each CA is keyed by an\n\t// ID that is used to uniquely identify it from other CAs.\n\t// At runtime, the GetCA() method should be used instead to ensure\n\t// the default CA is provisioned if it hadn't already been.\n\t// The default CA ID is \"local\".\n\tCAs map[string]*CA `json:\"certificate_authorities,omitempty\"`\n\n\tctx caddy.Context\n\tlog *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (PKI) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"pki\",\n\t\tNew: func() caddy.Module { return new(PKI) },\n\t}\n}\n\n// Provision sets up the configuration for the PKI app.\nfunc (p *PKI) Provision(ctx caddy.Context) error {\n\tp.ctx = ctx\n\tp.log = ctx.Logger()\n\n\tfor caID, ca := range p.CAs {\n\t\terr := ca.Provision(ctx, caID, p.log)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning CA '%s': %v\", caID, err)\n\t\t}\n\t}\n\n\t// if this app is initialized at all, ensure there's at\n\t// least a default CA that can be used: the standard CA\n\t// which is used implicitly for signing local-use certs\n\tif len(p.CAs) == 0 {\n\t\terr := p.ProvisionDefaultCA(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning CA '%s': %v\", DefaultCAID, err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// ProvisionDefaultCA sets up the default CA.\nfunc (p *PKI) ProvisionDefaultCA(ctx caddy.Context) error {\n\tif p.CAs == nil {\n\t\tp.CAs = make(map[string]*CA)\n\t}\n\n\tp.CAs[DefaultCAID] = new(CA)\n\treturn p.CAs[DefaultCAID].Provision(ctx, DefaultCAID, p.log)\n}\n\n// Start starts the PKI app.\nfunc (p *PKI) Start() error {\n\t// install roots to trust store, if not disabled\n\tfor _, ca := range p.CAs {\n\t\tif ca.InstallTrust != nil && !*ca.InstallTrust {\n\t\t\tca.log.Info(\"root certificate trust store installation disabled; unconfigured clients may show warnings\",\n\t\t\t\tzap.String(\"path\", ca.rootCertPath))\n\t\t\tcontinue\n\t\t}\n\n\t\tif err := ca.installRoot(); err != nil {\n\t\t\t// could be some 
system dependencies that are missing;\n\t\t\t// shouldn't totally prevent startup, but we should log it\n\t\t\tca.log.Error(\"failed to install root certificate\",\n\t\t\t\tzap.Error(err),\n\t\t\t\tzap.String(\"certificate_file\", ca.rootCertPath))\n\t\t}\n\t}\n\n\t// see if root/intermediates need renewal...\n\tp.renewCerts()\n\n\t// ...and keep them renewed (one goroutine per CA with its own interval)\n\tfor _, ca := range p.CAs {\n\t\tgo p.maintenanceForCA(ca)\n\t}\n\n\treturn nil\n}\n\n// Stop stops the PKI app.\nfunc (p *PKI) Stop() error {\n\treturn nil\n}\n\n// GetCA retrieves a CA by ID. If the ID is the default\n// CA ID, and it hasn't been provisioned yet, it will\n// be provisioned.\nfunc (p *PKI) GetCA(ctx caddy.Context, id string) (*CA, error) {\n\tca, ok := p.CAs[id]\n\tif !ok {\n\t\t// for anything other than the default CA ID, error out if it wasn't configured\n\t\tif id != DefaultCAID {\n\t\t\treturn nil, fmt.Errorf(\"no certificate authority configured with id: %s\", id)\n\t\t}\n\n\t\t// for the default CA ID, provision it, because we want it to \"just work\"\n\t\terr := p.ProvisionDefaultCA(ctx)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to provision default CA: %s\", err)\n\t\t}\n\t\tca = p.CAs[id]\n\t}\n\n\treturn ca, nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner = (*PKI)(nil)\n\t_ caddy.App         = (*PKI)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/acmeissuer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/caddyserver/zerossl\"\n\t\"github.com/mholt/acmez/v3/acme\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(ACMEIssuer{})\n}\n\n// ACMEIssuer manages certificates using the ACME protocol (RFC 8555).\ntype ACMEIssuer struct {\n\t// The URL to the CA's ACME directory endpoint. Default:\n\t// https://acme-v02.api.letsencrypt.org/directory\n\tCA string `json:\"ca,omitempty\"`\n\n\t// The URL to the test CA's ACME directory endpoint.\n\t// This endpoint is only used during retries if there\n\t// is a failure using the primary CA. Default:\n\t// https://acme-staging-v02.api.letsencrypt.org/directory\n\tTestCA string `json:\"test_ca,omitempty\"`\n\n\t// Your email address, so the CA can contact you if necessary.\n\t// Not required, but strongly recommended to provide one so\n\t// you can be reached if there is a problem. 
Your email is\n\t// not sent to any Caddy mothership or used for any purpose\n\t// other than ACME transactions.\n\tEmail string `json:\"email,omitempty\"`\n\n\t// Optionally select an ACME profile to use for certificate\n\t// orders. Must be a profile name offered by the ACME server,\n\t// which are listed at its directory endpoint.\n\t//\n\t// EXPERIMENTAL: Subject to change.\n\t// See https://datatracker.ietf.org/doc/draft-aaron-acme-profiles/\n\tProfile string `json:\"profile,omitempty\"`\n\n\t// If you have an existing account with the ACME server, put\n\t// the private key here in PEM format. The ACME client will\n\t// look up your account information with this key first before\n\t// trying to create a new one. You can use placeholders here,\n\t// for example if you have it in an environment variable.\n\tAccountKey string `json:\"account_key,omitempty\"`\n\n\t// If using an ACME CA that requires an external account\n\t// binding, specify the CA-provided credentials here.\n\tExternalAccount *acme.EAB `json:\"external_account,omitempty\"`\n\n\t// Time to wait before timing out an ACME operation.\n\t// Default: 0 (no timeout)\n\tACMETimeout caddy.Duration `json:\"acme_timeout,omitempty\"`\n\n\t// Configures the various ACME challenge types.\n\tChallenges *ChallengesConfig `json:\"challenges,omitempty\"`\n\n\t// An array of files of CA certificates to accept when connecting to the\n\t// ACME CA. Generally, you should only use this if the ACME CA endpoint\n\t// is internal or for development/testing purposes.\n\tTrustedRootsPEMFiles []string `json:\"trusted_roots_pem_files,omitempty\"`\n\n\t// Preferences for selecting alternate certificate chains, if offered\n\t// by the CA. 
By default, the first offered chain will be selected.\n\t// If configured, the chains may be sorted and the first matching chain\n\t// will be selected.\n\tPreferredChains *ChainPreference `json:\"preferred_chains,omitempty\"`\n\n\t// The validity period to ask the CA to issue a certificate for.\n\t// Default: 0 (CA chooses lifetime).\n\t// This value is used to compute the \"notAfter\" field of the ACME order;\n\t// therefore the system must have a reasonably synchronized clock.\n\t// NOTE: Not all CAs support this. Check with your CA's ACME\n\t// documentation to see if this is allowed and what values may\n\t// be used. EXPERIMENTAL: Subject to change.\n\tCertificateLifetime caddy.Duration `json:\"certificate_lifetime,omitempty\"`\n\n\t// Forward proxy module\n\tNetworkProxyRaw json.RawMessage `json:\"network_proxy,omitempty\" caddy:\"namespace=caddy.network_proxy inline_key=from\"`\n\n\trootPool *x509.CertPool\n\tlogger   *zap.Logger\n\n\ttemplate certmagic.ACMEIssuer  // set at Provision\n\tmagic    *certmagic.Config     // set at PreCheck\n\tissuer   *certmagic.ACMEIssuer // set at PreCheck; result of template + magic\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (ACMEIssuer) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.issuance.acme\",\n\t\tNew: func() caddy.Module { return new(ACMEIssuer) },\n\t}\n}\n\n// Provision sets up iss.\nfunc (iss *ACMEIssuer) Provision(ctx caddy.Context) error {\n\tiss.logger = ctx.Logger()\n\n\trepl := caddy.NewReplacer()\n\n\t// expand email address, if non-empty\n\tif iss.Email != \"\" {\n\t\temail, err := repl.ReplaceOrErr(iss.Email, true, true)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"expanding email address '%s': %v\", iss.Email, err)\n\t\t}\n\t\tiss.Email = email\n\t}\n\n\t// expand account key, if non-empty\n\tif iss.AccountKey != \"\" {\n\t\taccountKey, err := repl.ReplaceOrErr(iss.AccountKey, true, true)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"expanding 
account key PEM '%s': %v\", iss.AccountKey, err)\n\t\t}\n\t\tiss.AccountKey = accountKey\n\t}\n\n\t// DNS challenge provider, if not already established\n\tif iss.Challenges != nil && iss.Challenges.DNS != nil && iss.Challenges.DNS.solver == nil {\n\t\tvar prov certmagic.DNSProvider\n\t\tif iss.Challenges.DNS.ProviderRaw != nil {\n\t\t\t// a challenge provider has been locally configured - use it\n\t\t\tval, err := ctx.LoadModule(iss.Challenges.DNS, \"ProviderRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading DNS provider module: %v\", err)\n\t\t\t}\n\t\t\tprov = val.(certmagic.DNSProvider)\n\t\t} else if tlsAppIface, err := ctx.AppIfConfigured(\"tls\"); err == nil {\n\t\t\t// no locally configured DNS challenge provider, but if there is\n\t\t\t// a global DNS module configured with the TLS app, use that\n\t\t\ttlsApp := tlsAppIface.(*TLS)\n\t\t\tif tlsApp.dns != nil {\n\t\t\t\tprov = tlsApp.dns.(certmagic.DNSProvider)\n\t\t\t}\n\t\t}\n\t\tif prov == nil {\n\t\t\treturn fmt.Errorf(\"DNS challenge enabled, but no DNS provider configured\")\n\t\t}\n\t\tiss.Challenges.DNS.solver = &certmagic.DNS01Solver{\n\t\t\tDNSManager: certmagic.DNSManager{\n\t\t\t\tDNSProvider:        prov,\n\t\t\t\tTTL:                time.Duration(iss.Challenges.DNS.TTL),\n\t\t\t\tPropagationDelay:   time.Duration(iss.Challenges.DNS.PropagationDelay),\n\t\t\t\tPropagationTimeout: time.Duration(iss.Challenges.DNS.PropagationTimeout),\n\t\t\t\tResolvers:          iss.Challenges.DNS.Resolvers,\n\t\t\t\tOverrideDomain:     iss.Challenges.DNS.OverrideDomain,\n\t\t\t\tLogger:             iss.logger.Named(\"dns_manager\"),\n\t\t\t},\n\t\t}\n\t}\n\n\t// add any custom CAs to trust store\n\tif len(iss.TrustedRootsPEMFiles) > 0 {\n\t\tiss.rootPool = x509.NewCertPool()\n\t\tfor _, pemFile := range iss.TrustedRootsPEMFiles {\n\t\t\tpemData, err := os.ReadFile(pemFile)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading trusted root CA's PEM file: %s: %v\", pemFile, 
err)\n\t\t\t}\n\t\t\tif !iss.rootPool.AppendCertsFromPEM(pemData) {\n\t\t\t\treturn fmt.Errorf(\"unable to add %s to trust pool\", pemFile)\n\t\t\t}\n\t\t}\n\t}\n\n\tvar err error\n\tiss.template, err = iss.makeIssuerTemplate(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (iss *ACMEIssuer) makeIssuerTemplate(ctx caddy.Context) (certmagic.ACMEIssuer, error) {\n\ttemplate := certmagic.ACMEIssuer{\n\t\tCA:                iss.CA,\n\t\tTestCA:            iss.TestCA,\n\t\tEmail:             iss.Email,\n\t\tProfile:           iss.Profile,\n\t\tAccountKeyPEM:     iss.AccountKey,\n\t\tCertObtainTimeout: time.Duration(iss.ACMETimeout),\n\t\tTrustedRoots:      iss.rootPool,\n\t\tExternalAccount:   iss.ExternalAccount,\n\t\tNotAfter:          time.Duration(iss.CertificateLifetime),\n\t\tLogger:            iss.logger,\n\t}\n\n\tif len(iss.NetworkProxyRaw) != 0 {\n\t\tproxyMod, err := ctx.LoadModule(iss, \"NetworkProxyRaw\")\n\t\tif err != nil {\n\t\t\treturn template, fmt.Errorf(\"failed to load network_proxy module: %v\", err)\n\t\t}\n\t\tif m, ok := proxyMod.(caddy.ProxyFuncProducer); ok {\n\t\t\ttemplate.HTTPProxy = m.ProxyFunc()\n\t\t} else {\n\t\t\treturn template, fmt.Errorf(\"network_proxy module is not `(func(*http.Request) (*url.URL, error))`\")\n\t\t}\n\t}\n\n\tif iss.Challenges != nil {\n\t\tif iss.Challenges.HTTP != nil {\n\t\t\ttemplate.DisableHTTPChallenge = iss.Challenges.HTTP.Disabled\n\t\t\ttemplate.AltHTTPPort = iss.Challenges.HTTP.AlternatePort\n\t\t}\n\t\tif iss.Challenges.TLSALPN != nil {\n\t\t\ttemplate.DisableTLSALPNChallenge = iss.Challenges.TLSALPN.Disabled\n\t\t\ttemplate.AltTLSALPNPort = iss.Challenges.TLSALPN.AlternatePort\n\t\t}\n\t\tif iss.Challenges.DNS != nil {\n\t\t\ttemplate.DNS01Solver = iss.Challenges.DNS.solver\n\t\t}\n\t\ttemplate.ListenHost = iss.Challenges.BindHost\n\t\tif iss.Challenges.Distributed != nil {\n\t\t\ttemplate.DisableDistributedSolvers = !*iss.Challenges.Distributed\n\t\t}\n\t}\n\n\tif 
iss.PreferredChains != nil {\n\t\ttemplate.PreferredChains = certmagic.ChainPreference{\n\t\t\tSmallest:       iss.PreferredChains.Smallest,\n\t\t\tAnyCommonName:  iss.PreferredChains.AnyCommonName,\n\t\t\tRootCommonName: iss.PreferredChains.RootCommonName,\n\t\t}\n\t}\n\n\t// ZeroSSL requires EAB, but we can generate that automatically (requires an email address be configured)\n\tif strings.HasPrefix(iss.CA, \"https://acme.zerossl.com/\") {\n\t\ttemplate.NewAccountFunc = func(ctx context.Context, acmeIss *certmagic.ACMEIssuer, acct acme.Account) (acme.Account, error) {\n\t\t\tif acmeIss.ExternalAccount != nil {\n\t\t\t\treturn acct, nil\n\t\t\t}\n\t\t\tvar err error\n\t\t\tacmeIss.ExternalAccount, acct, err = iss.generateZeroSSLEABCredentials(ctx, acct)\n\t\t\treturn acct, err\n\t\t}\n\t}\n\n\treturn template, nil\n}\n\n// SetConfig sets the associated certmagic config for this issuer.\n// This is required because ACME needs values from the config in\n// order to solve the challenges during issuance. 
This implements\n// the ConfigSetter interface.\nfunc (iss *ACMEIssuer) SetConfig(cfg *certmagic.Config) {\n\tiss.magic = cfg\n\tiss.issuer = certmagic.NewACMEIssuer(cfg, iss.template)\n}\n\n// PreCheck implements the certmagic.PreChecker interface.\nfunc (iss *ACMEIssuer) PreCheck(ctx context.Context, names []string, interactive bool) error {\n\treturn iss.issuer.PreCheck(ctx, names, interactive)\n}\n\n// Issue obtains a certificate for the given csr.\nfunc (iss *ACMEIssuer) Issue(ctx context.Context, csr *x509.CertificateRequest) (*certmagic.IssuedCertificate, error) {\n\treturn iss.issuer.Issue(ctx, csr)\n}\n\n// IssuerKey returns the unique issuer key for the configured CA endpoint.\nfunc (iss *ACMEIssuer) IssuerKey() string {\n\treturn iss.issuer.IssuerKey()\n}\n\n// Revoke revokes the given certificate.\nfunc (iss *ACMEIssuer) Revoke(ctx context.Context, cert certmagic.CertificateResource, reason int) error {\n\treturn iss.issuer.Revoke(ctx, cert, reason)\n}\n\n// GetACMEIssuer returns iss. This is useful when other types embed ACMEIssuer, because\n// type-asserting them to *ACMEIssuer will fail, but type-asserting them to an interface\n// with only this method will succeed, and will still allow the embedded ACMEIssuer\n// to be accessed and manipulated.\nfunc (iss *ACMEIssuer) GetACMEIssuer() *ACMEIssuer { return iss }\n\n// GetRenewalInfo wraps the underlying GetRenewalInfo method and satisfies\n// the CertMagic interface for ARI support.\nfunc (iss *ACMEIssuer) GetRenewalInfo(ctx context.Context, cert certmagic.Certificate) (acme.RenewalInfo, error) {\n\treturn iss.issuer.GetRenewalInfo(ctx, cert)\n}\n\n// generateZeroSSLEABCredentials generates ZeroSSL EAB credentials for the primary contact email\n// on the issuer. It should only be used if the CA endpoint is ZeroSSL. 
An email address is required.\nfunc (iss *ACMEIssuer) generateZeroSSLEABCredentials(ctx context.Context, acct acme.Account) (*acme.EAB, acme.Account, error) {\n\tif strings.TrimSpace(iss.Email) == \"\" {\n\t\treturn nil, acme.Account{}, fmt.Errorf(\"your email address is required to use ZeroSSL's ACME endpoint\")\n\t}\n\n\tif len(acct.Contact) == 0 {\n\t\t// we borrow the email from config or the default email, so ensure it's saved with the account\n\t\tacct.Contact = []string{\"mailto:\" + iss.Email}\n\t}\n\n\tendpoint := zerossl.BaseURL + \"/acme/eab-credentials-email\"\n\tform := url.Values{\"email\": []string{iss.Email}}\n\tbody := strings.NewReader(form.Encode())\n\n\treq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, body)\n\tif err != nil {\n\t\treturn nil, acct, fmt.Errorf(\"forming request: %v\", err)\n\t}\n\treq.Header.Set(\"Content-Type\", \"application/x-www-form-urlencoded\")\n\treq.Header.Set(\"User-Agent\", certmagic.UserAgent)\n\n\tresp, err := http.DefaultClient.Do(req) //nolint:gosec // no SSRF since URL is from trusted config\n\tif err != nil {\n\t\treturn nil, acct, fmt.Errorf(\"performing EAB credentials request: %v\", err)\n\t}\n\tdefer resp.Body.Close()\n\n\tvar result struct {\n\t\tSuccess bool `json:\"success\"`\n\t\tError   struct {\n\t\t\tCode int    `json:\"code\"`\n\t\t\tType string `json:\"type\"`\n\t\t} `json:\"error\"`\n\t\tEABKID     string `json:\"eab_kid\"`\n\t\tEABHMACKey string `json:\"eab_hmac_key\"`\n\t}\n\terr = json.NewDecoder(resp.Body).Decode(&result)\n\tif err != nil {\n\t\treturn nil, acct, fmt.Errorf(\"decoding API response: %v\", err)\n\t}\n\tif result.Error.Code != 0 {\n\t\t// do this check first because ZeroSSL's API returns 200 on errors\n\t\treturn nil, acct, fmt.Errorf(\"failed getting EAB credentials: HTTP %d: %s (code %d)\",\n\t\t\tresp.StatusCode, result.Error.Type, result.Error.Code)\n\t}\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn nil, acct, fmt.Errorf(\"failed getting EAB 
credentials: HTTP %d\", resp.StatusCode)\n\t}\n\n\tif c := iss.logger.Check(zapcore.InfoLevel, \"generated EAB credentials\"); c != nil {\n\t\tc.Write(zap.String(\"key_id\", result.EABKID))\n\t}\n\n\treturn &acme.EAB{\n\t\tKeyID:  result.EABKID,\n\t\tMACKey: result.EABHMACKey,\n\t}, acct, nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into iss.\n//\n//\t... acme [<directory_url>] {\n//\t    dir <directory_url>\n//\t    test_dir <test_directory_url>\n//\t    email <email>\n//\t    profile <profile_name>\n//\t    lifetime <duration>\n//\t    timeout <duration>\n//\t    disable_http_challenge\n//\t    disable_tlsalpn_challenge\n//\t    alt_http_port    <port>\n//\t    alt_tlsalpn_port <port>\n//\t    distributed false\n//\t    eab <key_id> <mac_key>\n//\t    trusted_roots <pem_files...>\n//\t    dns <provider_name> [<options>]\n//\t    propagation_delay <duration>\n//\t    propagation_timeout <duration>\n//\t    resolvers <dns_servers...>\n//\t    dns_ttl <duration>\n//\t    dns_challenge_override_domain <domain>\n//\t    preferred_chains [smallest] {\n//\t        root_common_name <common_names...>\n//\t        any_common_name  <common_names...>\n//\t    }\n//\t}\nfunc (iss *ACMEIssuer) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume issuer name\n\n\tif d.NextArg() {\n\t\tiss.CA = d.Val()\n\t\tif d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"lifetime\":\n\t\t\tvar lifetimeStr string\n\t\t\tif !d.AllArgs(&lifetimeStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tlifetime, err := caddy.ParseDuration(lifetimeStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid lifetime %s: %v\", lifetimeStr, err)\n\t\t\t}\n\t\t\tif lifetime < 0 {\n\t\t\t\treturn d.Errf(\"lifetime must be >= 0: %s\", lifetime)\n\t\t\t}\n\t\t\tiss.CertificateLifetime = caddy.Duration(lifetime)\n\n\t\tcase \"dir\":\n\t\t\tif iss.CA != \"\" {\n\t\t\t\treturn d.Errf(\"directory is already specified: %s\", iss.CA)\n\t\t\t}\n\t\t\tif !d.AllArgs(&iss.CA) 
{\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"test_dir\":\n\t\t\tif !d.AllArgs(&iss.TestCA) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"email\":\n\t\t\tif !d.AllArgs(&iss.Email) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"profile\":\n\t\t\tif !d.AllArgs(&iss.Profile) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"timeout\":\n\t\t\tvar timeoutStr string\n\t\t\tif !d.AllArgs(&timeoutStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttimeout, err := caddy.ParseDuration(timeoutStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid timeout duration %s: %v\", timeoutStr, err)\n\t\t\t}\n\t\t\tiss.ACMETimeout = caddy.Duration(timeout)\n\n\t\tcase \"disable_http_challenge\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.HTTP == nil {\n\t\t\t\tiss.Challenges.HTTP = new(HTTPChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.HTTP.Disabled = true\n\n\t\tcase \"disable_tlsalpn_challenge\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.TLSALPN == nil {\n\t\t\t\tiss.Challenges.TLSALPN = new(TLSALPNChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.TLSALPN.Disabled = true\n\n\t\tcase \"distributed\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif d.Val() != \"false\" {\n\t\t\t\treturn d.Errf(\"only accepted value is 'false'\")\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.Distributed == nil {\n\t\t\t\tiss.Challenges.Distributed = new(bool)\n\t\t\t}\n\n\t\tcase \"alt_http_port\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tport, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid port %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif iss.Challenges == 
nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.HTTP == nil {\n\t\t\t\tiss.Challenges.HTTP = new(HTTPChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.HTTP.AlternatePort = port\n\n\t\tcase \"alt_tlsalpn_port\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tport, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid port %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.TLSALPN == nil {\n\t\t\t\tiss.Challenges.TLSALPN = new(TLSALPNChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.TLSALPN.AlternatePort = port\n\n\t\tcase \"eab\":\n\t\t\tiss.ExternalAccount = new(acme.EAB)\n\t\t\tif !d.AllArgs(&iss.ExternalAccount.KeyID, &iss.ExternalAccount.MACKey) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"trusted_roots\":\n\t\t\tiss.TrustedRootsPEMFiles = d.RemainingArgs()\n\n\t\tcase \"dns\":\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.DNS == nil {\n\t\t\t\tiss.Challenges.DNS = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tif d.NextArg() {\n\t\t\t\tprovName := d.Val()\n\t\t\t\tunm, err := caddyfile.UnmarshalModule(d, \"dns.providers.\"+provName)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tiss.Challenges.DNS.ProviderRaw = caddyconfig.JSONModuleObject(unm, \"name\", provName, nil)\n\t\t\t}\n\n\t\tcase \"propagation_delay\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdelayStr := d.Val()\n\t\t\tdelay, err := caddy.ParseDuration(delayStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid propagation_delay duration %s: %v\", delayStr, err)\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.DNS == nil {\n\t\t\t\tiss.Challenges.DNS = 
new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.DNS.PropagationDelay = caddy.Duration(delay)\n\n\t\tcase \"propagation_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttimeoutStr := d.Val()\n\t\t\tvar timeout time.Duration\n\t\t\tif timeoutStr == \"-1\" {\n\t\t\t\ttimeout = time.Duration(-1)\n\t\t\t} else {\n\t\t\t\tvar err error\n\t\t\t\ttimeout, err = caddy.ParseDuration(timeoutStr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"invalid propagation_timeout duration %s: %v\", timeoutStr, err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.DNS == nil {\n\t\t\t\tiss.Challenges.DNS = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.DNS.PropagationTimeout = caddy.Duration(timeout)\n\n\t\tcase \"resolvers\":\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.DNS == nil {\n\t\t\t\tiss.Challenges.DNS = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.DNS.Resolvers = d.RemainingArgs()\n\t\t\tif len(iss.Challenges.DNS.Resolvers) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"dns_ttl\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tttlStr := d.Val()\n\t\t\tttl, err := caddy.ParseDuration(ttlStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid dns_ttl duration %s: %v\", ttlStr, err)\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.DNS == nil {\n\t\t\t\tiss.Challenges.DNS = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.DNS.TTL = caddy.Duration(ttl)\n\n\t\tcase \"dns_challenge_override_domain\":\n\t\t\targ := d.RemainingArgs()\n\t\t\tif len(arg) != 1 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif iss.Challenges == nil {\n\t\t\t\tiss.Challenges = new(ChallengesConfig)\n\t\t\t}\n\t\t\tif iss.Challenges.DNS == nil {\n\t\t\t\tiss.Challenges.DNS = 
new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.Challenges.DNS.OverrideDomain = arg[0]\n\n\t\tcase \"preferred_chains\":\n\t\t\tchainPref, err := ParseCaddyfilePreferredChainsOptions(d)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiss.PreferredChains = chainPref\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized ACME issuer property: %s\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc ParseCaddyfilePreferredChainsOptions(d *caddyfile.Dispenser) (*ChainPreference, error) {\n\tchainPref := new(ChainPreference)\n\tif d.NextArg() {\n\t\tsmallestOpt := d.Val()\n\t\tif smallestOpt == \"smallest\" {\n\t\t\ttrueBool := true\n\t\t\tchainPref.Smallest = &trueBool\n\t\t\tif d.NextArg() { // Only one argument allowed\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tif d.NextBlock(d.Nesting()) { // Don't allow other options when smallest == true\n\t\t\t\treturn nil, d.Err(\"No more options are accepted when using the 'smallest' option\")\n\t\t\t}\n\t\t} else { // Smallest option should always be 'smallest' or unset\n\t\t\treturn nil, d.Errf(\"Invalid argument '%s'\", smallestOpt)\n\t\t}\n\t}\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"root_common_name\":\n\t\t\trootCommonNameOpt := d.RemainingArgs()\n\t\t\tchainPref.RootCommonName = append(chainPref.RootCommonName, rootCommonNameOpt...)\n\t\t\tif rootCommonNameOpt == nil {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tif chainPref.AnyCommonName != nil {\n\t\t\t\treturn nil, d.Err(\"Can't set root_common_name when any_common_name is already set\")\n\t\t\t}\n\n\t\tcase \"any_common_name\":\n\t\t\tanyCommonNameOpt := d.RemainingArgs()\n\t\t\tchainPref.AnyCommonName = append(chainPref.AnyCommonName, anyCommonNameOpt...)\n\t\t\tif anyCommonNameOpt == nil {\n\t\t\t\treturn nil, d.ArgErr()\n\t\t\t}\n\t\t\tif chainPref.RootCommonName != nil {\n\t\t\t\treturn nil, d.Err(\"Can't set any_common_name when root_common_name is already set\")\n\t\t\t}\n\n\t\tdefault:\n\t\t\treturn nil, 
d.Errf(\"Received unrecognized parameter '%s'\", d.Val())\n\t\t}\n\t}\n\n\tif chainPref.Smallest == nil && chainPref.RootCommonName == nil && chainPref.AnyCommonName == nil {\n\t\treturn nil, d.Err(\"No options for preferred_chains received\")\n\t}\n\n\treturn chainPref, nil\n}\n\n// ChainPreference describes the client's preferred certificate chain,\n// useful if the CA offers alternate chains. The first matching chain\n// will be selected.\ntype ChainPreference struct {\n\t// Prefer chains with the fewest number of bytes.\n\tSmallest *bool `json:\"smallest,omitempty\"`\n\n\t// Select first chain having a root with one of\n\t// these common names.\n\tRootCommonName []string `json:\"root_common_name,omitempty\"`\n\n\t// Select first chain that has any issuer with one\n\t// of these common names.\n\tAnyCommonName []string `json:\"any_common_name,omitempty\"`\n}\n\n// Interface guards\nvar (\n\t_ certmagic.PreChecker        = (*ACMEIssuer)(nil)\n\t_ certmagic.Issuer            = (*ACMEIssuer)(nil)\n\t_ certmagic.Revoker           = (*ACMEIssuer)(nil)\n\t_ certmagic.RenewalInfoGetter = (*ACMEIssuer)(nil)\n\t_ caddy.Provisioner           = (*ACMEIssuer)(nil)\n\t_ ConfigSetter                = (*ACMEIssuer)(nil)\n\t_ caddyfile.Unmarshaler       = (*ACMEIssuer)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/automation.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/mholt/acmez/v3\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/net/idna\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// AutomationConfig governs the automated management of TLS certificates.\ntype AutomationConfig struct {\n\t// The list of automation policies. The first policy matching\n\t// a certificate or subject name will be applied.\n\tPolicies []*AutomationPolicy `json:\"policies,omitempty\"`\n\n\t// On-Demand TLS defers certificate operations to the\n\t// moment they are needed, e.g. during a TLS handshake.\n\t// Useful when you don't know all the hostnames at\n\t// config-time, or when you are not in control of the\n\t// domain names you are managing certificates for.\n\t// In 2015, Caddy became the first web server to\n\t// implement this experimental technology.\n\t//\n\t// Note that this field does not enable on-demand TLS;\n\t// it only configures it for when it is used. 
To enable\n\t// it, create an automation policy with `on_demand`.\n\tOnDemand *OnDemandConfig `json:\"on_demand,omitempty\"`\n\n\t// Caddy staples OCSP (and caches the response) for all\n\t// qualifying certificates by default. This setting\n\t// changes how often it scans responses for freshness,\n\t// and updates them if they are getting stale. Default: 1h\n\tOCSPCheckInterval caddy.Duration `json:\"ocsp_interval,omitempty\"`\n\n\t// Every so often, Caddy will scan all loaded, managed\n\t// certificates for expiration. This setting changes how\n\t// frequently the scan for expiring certificates is\n\t// performed. Default: 10m\n\tRenewCheckInterval caddy.Duration `json:\"renew_interval,omitempty\"`\n\n\t// How often to scan storage units for old or expired\n\t// assets and remove them. These scans exert lots of\n\t// reads (and list operations) on the storage module, so\n\t// choose a longer interval for large deployments.\n\t// Default: 24h\n\t//\n\t// Storage will always be cleaned when the process first\n\t// starts. Then, a new cleaning will be started this\n\t// duration after the previous cleaning started if the\n\t// previous cleaning finished in less than half the time\n\t// of this interval (otherwise next start will be skipped).\n\tStorageCleanInterval caddy.Duration `json:\"storage_clean_interval,omitempty\"`\n\n\tdefaultPublicAutomationPolicy   *AutomationPolicy\n\tdefaultInternalAutomationPolicy *AutomationPolicy // only initialized if necessary\n}\n\n// AutomationPolicy designates the policy for automating the\n// management (obtaining, renewal, and revocation) of managed\n// TLS certificates.\n//\n// An AutomationPolicy value is not valid until it has been\n// provisioned; use the `AddAutomationPolicy()` method on the\n// TLS app to properly provision a new policy.\ntype AutomationPolicy struct {\n\t// Which subjects (hostnames or IP addresses) this policy applies to.\n\t//\n\t// This list is a filter, not a command. 
In other words, it is used\n\t// only to filter whether this policy should apply to a subject that\n\t// needs a certificate; it does NOT command the TLS app to manage a\n\t// certificate for that subject. To have Caddy automate a certificate\n\t// or specific subjects, use the \"automate\" certificate loader module\n\t// of the TLS app.\n\tSubjectsRaw []string `json:\"subjects,omitempty\"`\n\n\t// The modules that may issue certificates. Default: internal if all\n\t// subjects do not qualify for public certificates; otherwise acme and\n\t// zerossl.\n\tIssuersRaw []json.RawMessage `json:\"issuers,omitempty\" caddy:\"namespace=tls.issuance inline_key=module\"`\n\n\t// Modules that can get a custom certificate to use for any\n\t// given TLS handshake at handshake-time. Custom certificates\n\t// can be useful if another entity is managing certificates\n\t// and Caddy need only get it and serve it. Specifying a Manager\n\t// enables on-demand TLS, i.e. it has the side-effect of setting\n\t// the on_demand parameter to `true`.\n\t//\n\t// TODO: This is an EXPERIMENTAL feature. Subject to change or removal.\n\tManagersRaw []json.RawMessage `json:\"get_certificate,omitempty\" caddy:\"namespace=tls.get_certificate inline_key=via\"`\n\n\t// If true, certificates will be requested with MustStaple. Not all\n\t// CAs support this, and there are potentially serious consequences\n\t// of enabling this feature without proper threat modeling.\n\tMustStaple bool `json:\"must_staple,omitempty\"`\n\n\t// How long before a certificate's expiration to try renewing it,\n\t// as a function of its total lifetime. As a general and conservative\n\t// rule, it is a good idea to renew a certificate when it has about\n\t// 1/3 of its total lifetime remaining. This utilizes the majority\n\t// of the certificate's lifetime while still saving time to\n\t// troubleshoot problems. 
However, for extremely short-lived certs,\n\t// you may wish to increase the ratio to ~1/2.\n\tRenewalWindowRatio float64 `json:\"renewal_window_ratio,omitempty\"`\n\n\t// The type of key to generate for certificates.\n\t// Supported values: `ed25519`, `p256`, `p384`, `rsa2048`, `rsa4096`.\n\tKeyType string `json:\"key_type,omitempty\"`\n\n\t// Optionally configure a separate storage module associated with this\n\t// manager, instead of using Caddy's global/default-configured storage.\n\tStorageRaw json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n\n\t// If true, certificates will be managed \"on demand\"; that is, during\n\t// TLS handshakes or when needed, as opposed to at startup or config\n\t// load. This enables On-Demand TLS for this policy.\n\tOnDemand bool `json:\"on_demand,omitempty\"`\n\n\t// If true, private keys already existing in storage\n\t// will be reused. Otherwise, a new key will be\n\t// created for every new certificate to mitigate\n\t// pinning and reduce the scope of key compromise.\n\t// TEMPORARY: Key pinning is against industry best practices.\n\t// This property will likely be removed in the future.\n\t// Do not rely on it forever; watch the release notes.\n\tReusePrivateKeys bool `json:\"reuse_private_keys,omitempty\"`\n\n\t// Disables OCSP stapling. Disabling OCSP stapling puts clients at\n\t// greater risk, reduces their privacy, and usually lowers client\n\t// performance. It is NOT recommended to disable this unless you\n\t// are able to justify the costs.\n\t// EXPERIMENTAL. Subject to change.\n\tDisableOCSPStapling bool `json:\"disable_ocsp_stapling,omitempty\"`\n\n\t// Overrides the URLs of OCSP responders embedded in certificates.\n\t// Each key is an OCSP server URL to override, and its value is the\n\t// replacement. An empty value will disable querying of that server.\n\t// EXPERIMENTAL. 
Subject to change.\n\tOCSPOverrides map[string]string `json:\"ocsp_overrides,omitempty\"`\n\n\t// Issuers and Managers store the decoded issuer and manager modules;\n\t// they are only used to populate an underlying certmagic.Config's\n\t// fields during provisioning so that the modules can survive a\n\t// re-provisioning.\n\tIssuers  []certmagic.Issuer  `json:\"-\"`\n\tManagers []certmagic.Manager `json:\"-\"`\n\n\tsubjects []string\n\tmagic    *certmagic.Config\n\tstorage  certmagic.Storage\n\n\t// Whether this policy had explicit managers configured directly on it.\n\thadExplicitManagers bool\n}\n\n// Provision sets up ap and builds its underlying CertMagic config.\nfunc (ap *AutomationPolicy) Provision(tlsApp *TLS) error {\n\t// replace placeholders in subjects to allow environment variables\n\trepl := caddy.NewReplacer()\n\tsubjects := make([]string, len(ap.SubjectsRaw))\n\tfor i, sub := range ap.SubjectsRaw {\n\t\tsub = repl.ReplaceAll(sub, \"\")\n\t\tsubASCII, err := idna.ToASCII(sub)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"could not convert automation policy subject '%s' to punycode: %v\", sub, err)\n\t\t}\n\t\tsubjects[i] = subASCII\n\t}\n\tap.subjects = subjects\n\n\t// policy-specific storage implementation\n\tif ap.StorageRaw != nil {\n\t\tval, err := tlsApp.ctx.LoadModule(ap, \"StorageRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading TLS storage module: %v\", err)\n\t\t}\n\t\tcmStorage, err := val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating TLS storage configuration: %v\", err)\n\t\t}\n\t\tap.storage = cmStorage\n\t}\n\n\t// we don't store loaded modules directly in the certmagic config since\n\t// policy provisioning may happen more than once (during auto-HTTPS) and\n\t// loading a module clears its config bytes; thus, load the module and\n\t// store them on the policy before putting it on the config\n\n\t// load and provision any cert manager modules\n\tif ap.ManagersRaw != 
nil {\n\t\tap.hadExplicitManagers = true\n\t\tvals, err := tlsApp.ctx.LoadModule(ap, \"ManagersRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading external certificate manager modules: %v\", err)\n\t\t}\n\t\tfor _, getCertVal := range vals.([]any) {\n\t\t\tap.Managers = append(ap.Managers, getCertVal.(certmagic.Manager))\n\t\t}\n\t}\n\n\t// load and provision any explicitly-configured issuer modules\n\tif ap.IssuersRaw != nil {\n\t\tval, err := tlsApp.ctx.LoadModule(ap, \"IssuersRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading TLS automation management module: %s\", err)\n\t\t}\n\t\tfor _, issVal := range val.([]any) {\n\t\t\tap.Issuers = append(ap.Issuers, issVal.(certmagic.Issuer))\n\t\t}\n\t}\n\n\tissuers := ap.Issuers\n\tif len(issuers) == 0 {\n\t\tvar err error\n\t\tissuers, err = DefaultIssuersProvisioned(tlsApp.ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// build certmagic.Config and attach it to the policy\n\tstorage := ap.storage\n\tif storage == nil {\n\t\tstorage = tlsApp.ctx.Storage()\n\t}\n\tcfg, err := ap.makeCertMagicConfig(tlsApp, issuers, storage)\n\tif err != nil {\n\t\treturn err\n\t}\n\tcertCacheMu.RLock()\n\tap.magic = certmagic.New(certCache, cfg)\n\tcertCacheMu.RUnlock()\n\n\t// give issuers a chance to see the config pointer\n\tfor _, issuer := range ap.magic.Issuers {\n\t\tif annoying, ok := issuer.(ConfigSetter); ok {\n\t\t\tannoying.SetConfig(ap.magic)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// makeCertMagicConfig constructs a certmagic.Config for this policy using the\n// provided issuers and storage. 
It encapsulates common logic shared between\n// Provision and RebuildCertMagic so we don't duplicate code.\nfunc (ap *AutomationPolicy) makeCertMagicConfig(tlsApp *TLS, issuers []certmagic.Issuer, storage certmagic.Storage) (certmagic.Config, error) {\n\t// key source\n\tkeyType := ap.KeyType\n\tif keyType != \"\" {\n\t\tvar err error\n\t\tkeyType, err = caddy.NewReplacer().ReplaceOrErr(ap.KeyType, true, true)\n\t\tif err != nil {\n\t\t\treturn certmagic.Config{}, fmt.Errorf(\"invalid key type %s: %s\", ap.KeyType, err)\n\t\t}\n\t\tif _, ok := supportedCertKeyTypes[keyType]; !ok {\n\t\t\treturn certmagic.Config{}, fmt.Errorf(\"unrecognized key type: %s\", keyType)\n\t\t}\n\t}\n\tkeySource := certmagic.StandardKeyGenerator{\n\t\tKeyType: supportedCertKeyTypes[keyType],\n\t}\n\n\tif storage == nil {\n\t\tstorage = tlsApp.ctx.Storage()\n\t}\n\n\t// on-demand TLS\n\tvar ond *certmagic.OnDemandConfig\n\tif ap.OnDemand || len(ap.Managers) > 0 {\n\t\t// permission module is now required after a number of negligence cases that allowed abuse;\n\t\t// but it may still be optional for explicit subjects (bounded, non-wildcard), for the\n\t\t// internal issuer since it doesn't cause public PKI pressure on ACME servers; subtly, it\n\t\t// is useful to allow on-demand TLS to be enabled so Managers can be used, but to still\n\t\t// prevent issuance from Issuers (when Managers don't provide a certificate) if there's no\n\t\t// permission module configured\n\t\tnoProtections := ap.isWildcardOrDefault() && !ap.onlyInternalIssuer() && (tlsApp.Automation == nil || tlsApp.Automation.OnDemand == nil || tlsApp.Automation.OnDemand.permission == nil)\n\t\tfailClosed := noProtections && !ap.hadExplicitManagers // don't allow on-demand issuance (other than implicit managers) if no managers have been explicitly configured\n\t\tif noProtections {\n\t\t\tif !ap.hadExplicitManagers {\n\t\t\t\t// no managers, no explicitly-configured permission module, this is a config error\n\t\t\t\treturn 
certmagic.Config{}, fmt.Errorf(\"on-demand TLS cannot be enabled without a permission module to prevent abuse; please refer to documentation for details\")\n\t\t\t}\n\t\t\t// allow on-demand to be enabled but only for the purpose of the Managers; issuance won't be allowed from Issuers\n\t\t\ttlsApp.logger.Warn(\"on-demand TLS can only get certificates from the configured external manager(s) because no ask endpoint / permission module is specified\")\n\t\t}\n\t\tond = &certmagic.OnDemandConfig{\n\t\t\tDecisionFunc: func(ctx context.Context, name string) error {\n\t\t\t\tif failClosed {\n\t\t\t\t\treturn fmt.Errorf(\"no permission module configured; certificates not allowed except from external Managers\")\n\t\t\t\t}\n\t\t\t\tif tlsApp.Automation == nil || tlsApp.Automation.OnDemand == nil {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\n\t\t\t\t// logging the remote IP can be useful for servers that want to count\n\t\t\t\t// attempts from clients to detect patterns of abuse -- it should NOT be\n\t\t\t\t// used solely for decision making, however\n\t\t\t\tvar remoteIP string\n\t\t\t\tif hello, ok := ctx.Value(certmagic.ClientHelloInfoCtxKey).(*tls.ClientHelloInfo); ok && hello != nil {\n\t\t\t\t\tif remote := hello.Conn.RemoteAddr(); remote != nil {\n\t\t\t\t\t\tremoteIP, _, _ = net.SplitHostPort(remote.String())\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif c := tlsApp.logger.Check(zapcore.DebugLevel, \"asking for permission for on-demand certificate\"); c != nil {\n\t\t\t\t\tc.Write(\n\t\t\t\t\t\tzap.String(\"remote_ip\", remoteIP),\n\t\t\t\t\t\tzap.String(\"domain\", name),\n\t\t\t\t\t)\n\t\t\t\t}\n\n\t\t\t\t// ask the permission module if this cert is allowed\n\t\t\t\tif err := tlsApp.Automation.OnDemand.permission.CertificateAllowed(ctx, name); err != nil {\n\t\t\t\t\t// distinguish true errors from denials, because it's important to elevate actual errors\n\t\t\t\t\tif errors.Is(err, ErrPermissionDenied) {\n\t\t\t\t\t\tif c := tlsApp.logger.Check(zapcore.DebugLevel, \"on-demand 
certificate issuance denied\"); c != nil {\n\t\t\t\t\t\t\tc.Write(\n\t\t\t\t\t\t\t\tzap.String(\"domain\", name),\n\t\t\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t\t\t)\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif c := tlsApp.logger.Check(zapcore.ErrorLevel, \"failed to get permission for on-demand certificate\"); c != nil {\n\t\t\t\t\t\t\tc.Write(\n\t\t\t\t\t\t\t\tzap.String(\"domain\", name),\n\t\t\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t\t\t)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\treturn err\n\t\t\t\t}\n\n\t\t\t\treturn nil\n\t\t\t},\n\t\t\tManagers: ap.Managers,\n\t\t}\n\t}\n\n\tcfg := certmagic.Config{\n\t\tMustStaple:         ap.MustStaple,\n\t\tRenewalWindowRatio: ap.RenewalWindowRatio,\n\t\tKeySource:          keySource,\n\t\tOnEvent:            tlsApp.onEvent,\n\t\tOnDemand:           ond,\n\t\tReusePrivateKeys:   ap.ReusePrivateKeys,\n\t\tOCSP: certmagic.OCSPConfig{\n\t\t\tDisableStapling:    ap.DisableOCSPStapling,\n\t\t\tResponderOverrides: ap.OCSPOverrides,\n\t\t},\n\t\tStorage: storage,\n\t\tIssuers: issuers,\n\t\tLogger:  tlsApp.logger,\n\t}\n\n\treturn cfg, nil\n}\n\n// IsProvisioned reports whether the automation policy has been\n// provisioned. A provisioned policy has an initialized CertMagic\n// instance (i.e. ap.magic != nil).\nfunc (ap *AutomationPolicy) IsProvisioned() bool { return ap.magic != nil }\n\n// RebuildCertMagic rebuilds the policy's CertMagic configuration from the\n// policy's already-populated fields (Issuers, Managers, storage, etc.) and\n// replaces the internal CertMagic instance. This is a lightweight\n// alternative to calling Provision because it does not re-provision\n// modules or re-run module Provision; instead, it constructs a new\n// certmagic.Config and calls SetConfig on issuers so they receive updated\n// templates (for example, alternate HTTP/TLS ports supplied by the HTTP\n// app). 
RebuildCertMagic should only be called when the policy's required\n// fields are already populated.\nfunc (ap *AutomationPolicy) RebuildCertMagic(tlsApp *TLS) error {\n\tcfg, err := ap.makeCertMagicConfig(tlsApp, ap.Issuers, ap.storage)\n\tif err != nil {\n\t\treturn err\n\t}\n\tcertCacheMu.RLock()\n\tap.magic = certmagic.New(certCache, cfg)\n\tcertCacheMu.RUnlock()\n\n\t// sometimes issuers may need the parent certmagic.Config in\n\t// order to function properly (for example, ACMEIssuer needs\n\t// access to the correct storage and cache so it can solve\n\t// ACME challenges -- it's an annoying, inelegant circular\n\t// dependency that I don't know how to resolve nicely!)\n\tfor _, issuer := range ap.magic.Issuers {\n\t\tif annoying, ok := issuer.(ConfigSetter); ok {\n\t\t\tannoying.SetConfig(ap.magic)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Subjects returns the list of subjects with all placeholders replaced.\nfunc (ap *AutomationPolicy) Subjects() []string {\n\treturn ap.subjects\n}\n\n// AllInternalSubjects returns true if all the subjects on this policy are internal.\nfunc (ap *AutomationPolicy) AllInternalSubjects() bool {\n\treturn !slices.ContainsFunc(ap.subjects, func(s string) bool {\n\t\treturn !certmagic.SubjectIsInternal(s)\n\t})\n}\n\nfunc (ap *AutomationPolicy) onlyInternalIssuer() bool {\n\tif len(ap.Issuers) != 1 {\n\t\treturn false\n\t}\n\t_, ok := ap.Issuers[0].(*InternalIssuer)\n\treturn ok\n}\n\n// isWildcardOrDefault determines if the subjects include any wildcard domains,\n// or is the \"default\" policy (i.e. 
no subjects) which is unbounded.\nfunc (ap *AutomationPolicy) isWildcardOrDefault() bool {\n\tisWildcardOrDefault := len(ap.subjects) == 0\n\n\tfor _, sub := range ap.subjects {\n\t\tif strings.HasPrefix(sub, \"*\") {\n\t\t\tisWildcardOrDefault = true\n\t\t\tbreak\n\t\t}\n\t}\n\treturn isWildcardOrDefault\n}\n\n// DefaultIssuers returns empty Issuers (not provisioned) to be used as defaults.\n// This function is experimental and has no compatibility promises.\nfunc DefaultIssuers(userEmail string) []certmagic.Issuer {\n\tissuers := []certmagic.Issuer{new(ACMEIssuer)}\n\tif strings.TrimSpace(userEmail) != \"\" {\n\t\tissuers = append(issuers, &ACMEIssuer{\n\t\t\tCA:    certmagic.ZeroSSLProductionCA,\n\t\t\tEmail: userEmail,\n\t\t})\n\t}\n\treturn issuers\n}\n\n// DefaultIssuersProvisioned returns empty but provisioned default Issuers from\n// DefaultIssuers(). This function is experimental and has no compatibility promises.\nfunc DefaultIssuersProvisioned(ctx caddy.Context) ([]certmagic.Issuer, error) {\n\tissuers := DefaultIssuers(\"\")\n\tfor i, iss := range issuers {\n\t\tif prov, ok := iss.(caddy.Provisioner); ok {\n\t\t\terr := prov.Provision(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"provisioning default issuer %d: %T: %v\", i, iss, err)\n\t\t\t}\n\t\t}\n\t}\n\treturn issuers, nil\n}\n\n// ChallengesConfig configures the ACME challenges.\ntype ChallengesConfig struct {\n\t// HTTP configures the ACME HTTP challenge. This\n\t// challenge is enabled and used automatically\n\t// and by default.\n\tHTTP *HTTPChallengeConfig `json:\"http,omitempty\"`\n\n\t// TLSALPN configures the ACME TLS-ALPN challenge.\n\t// This challenge is enabled and used automatically\n\t// and by default.\n\tTLSALPN *TLSALPNChallengeConfig `json:\"tls-alpn,omitempty\"`\n\n\t// Configures the ACME DNS challenge. Because this\n\t// challenge typically requires credentials for\n\t// interfacing with a DNS provider, this challenge is\n\t// not enabled by default. 
This is the only challenge\n\t// type which does not require a direct connection\n\t// to Caddy from an external server.\n\t//\n\t// NOTE: DNS providers are currently being upgraded,\n\t// and this API is subject to change, but should be\n\t// stabilized soon.\n\tDNS *DNSChallengeConfig `json:\"dns,omitempty\"`\n\n\t// Optionally customize the host to which a listener\n\t// is bound if required for solving a challenge.\n\tBindHost string `json:\"bind_host,omitempty\"`\n\n\t// Whether distributed solving is enabled. This is\n\t// enabled by default, so this is only used to\n\t// disable it, which should only need to be done if\n\t// you cannot reliably or affordably use a storage\n\t// backend for writing/distributing challenge info.\n\t// (Applies to HTTP and TLS-ALPN challenges.)\n\t// If set to false, challenges can only be solved\n\t// from the Caddy instance that initiated the\n\t// challenge, with the exception of HTTP challenges\n\t// initiated with the same ACME account that this\n\t// config uses. (Caddy can still solve those challenges\n\t// without explicitly writing the info to storage.)\n\t//\n\t// Default: true\n\tDistributed *bool `json:\"distributed,omitempty\"`\n}\n\n// HTTPChallengeConfig configures the ACME HTTP challenge.\ntype HTTPChallengeConfig struct {\n\t// If true, the HTTP challenge will be disabled.\n\tDisabled bool `json:\"disabled,omitempty\"`\n\n\t// An alternate port on which to service this\n\t// challenge. Note that the HTTP challenge port is\n\t// hard-coded into the spec and cannot be changed,\n\t// so you would have to forward packets from the\n\t// standard HTTP challenge port to this one.\n\tAlternatePort int `json:\"alternate_port,omitempty\"`\n}\n\n// TLSALPNChallengeConfig configures the ACME TLS-ALPN challenge.\ntype TLSALPNChallengeConfig struct {\n\t// If true, the TLS-ALPN challenge will be disabled.\n\tDisabled bool `json:\"disabled,omitempty\"`\n\n\t// An alternate port on which to service this\n\t// challenge. 
Note that the TLS-ALPN challenge port\n\t// is hard-coded into the spec and cannot be changed,\n\t// so you would have to forward packets from the\n\t// standard TLS-ALPN challenge port to this one.\n\tAlternatePort int `json:\"alternate_port,omitempty\"`\n}\n\n// DNSChallengeConfig configures the ACME DNS challenge.\n//\n// NOTE: This API is still experimental and is subject to change.\ntype DNSChallengeConfig struct {\n\t// The DNS provider module to use which will manage\n\t// the DNS records relevant to the ACME challenge.\n\t// Required.\n\tProviderRaw json.RawMessage `json:\"provider,omitempty\" caddy:\"namespace=dns.providers inline_key=name\"`\n\n\t// The TTL of the TXT record used for the DNS challenge.\n\tTTL caddy.Duration `json:\"ttl,omitempty\"`\n\n\t// How long to wait before starting propagation checks.\n\t// Default: 0 (no wait).\n\tPropagationDelay caddy.Duration `json:\"propagation_delay,omitempty\"`\n\n\t// Maximum time to wait for temporary DNS record to appear.\n\t// Set to -1 to disable propagation checks.\n\t// Default: 2 minutes.\n\tPropagationTimeout caddy.Duration `json:\"propagation_timeout,omitempty\"`\n\n\t// Custom DNS resolvers to prefer over system/built-in defaults.\n\t// Often necessary to configure when using split-horizon DNS.\n\tResolvers []string `json:\"resolvers,omitempty\"`\n\n\t// Override the domain to use for the DNS challenge. This\n\t// is to delegate the challenge to a different domain,\n\t// e.g. one that updates faster or one with a provider API.\n\tOverrideDomain string `json:\"override_domain,omitempty\"`\n\n\tsolver acmez.Solver\n}\n\n// ConfigSetter is implemented by certmagic.Issuers that\n// need access to a parent certmagic.Config as part of\n// their provisioning phase. For example, the ACMEIssuer\n// requires a config so it can access storage and the\n// cache to solve ACME challenges.\ntype ConfigSetter interface {\n\tSetConfig(cfg *certmagic.Config)\n}\n"
  },
  {
    "path": "modules/caddytls/capools.go",
    "content": "package caddytls\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"os\"\n\t\"reflect\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(InlineCAPool{})\n\tcaddy.RegisterModule(FileCAPool{})\n\tcaddy.RegisterModule(PKIRootCAPool{})\n\tcaddy.RegisterModule(PKIIntermediateCAPool{})\n\tcaddy.RegisterModule(StoragePool{})\n\tcaddy.RegisterModule(HTTPCertPool{})\n}\n\n// The interface to be implemented by all guest modules part of\n// the namespace 'tls.ca_pool.source.'\ntype CA interface {\n\tCertPool() *x509.CertPool\n}\n\n// InlineCAPool is a certificate authority pool provider coming from\n// a DER-encoded certificates in the config\ntype InlineCAPool struct {\n\t// A list of base64 DER-encoded CA certificates\n\t// against which to validate client certificates.\n\t// Client certs which are not signed by any of\n\t// these CAs will be rejected.\n\tTrustedCACerts []string `json:\"trusted_ca_certs,omitempty\"`\n\n\tpool *x509.CertPool\n}\n\n// CaddyModule implements caddy.Module.\nfunc (icp InlineCAPool) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.ca_pool.source.inline\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(InlineCAPool)\n\t\t},\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (icp *InlineCAPool) Provision(ctx caddy.Context) error {\n\tcaPool := x509.NewCertPool()\n\tfor i, clientCAString := range icp.TrustedCACerts {\n\t\tclientCA, err := decodeBase64DERCert(clientCAString)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"parsing certificate at index %d: %v\", i, err)\n\t\t}\n\t\tcaPool.AddCert(clientCA)\n\t}\n\ticp.pool = caPool\n\n\treturn nil\n}\n\n// 
Syntax:\n//\n//\ttrust_pool inline {\n//\t\ttrust_der <base64_der_cert>...\n//\t}\n//\n// The 'trust_der' directive can be specified multiple times.\nfunc (icp *InlineCAPool) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume module name\n\tif d.CountRemainingArgs() > 0 {\n\t\treturn d.ArgErr()\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"trust_der\":\n\t\t\ticp.TrustedCACerts = append(icp.TrustedCACerts, d.RemainingArgs()...)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\tif len(icp.TrustedCACerts) == 0 {\n\t\treturn d.Err(\"no certificates specified\")\n\t}\n\treturn nil\n}\n\n// CertPool implements CA.\nfunc (icp InlineCAPool) CertPool() *x509.CertPool {\n\treturn icp.pool\n}\n\n// FileCAPool generates trusted root certificates pool from the designated DER and PEM file\ntype FileCAPool struct {\n\t// TrustedCACertPEMFiles is a list of PEM file names\n\t// from which to load certificates of trusted CAs.\n\t// Client certificates which are not signed by any of\n\t// these CA certificates will be rejected.\n\tTrustedCACertPEMFiles []string `json:\"pem_files,omitempty\"`\n\n\tpool *x509.CertPool\n}\n\n// CaddyModule implements caddy.Module.\nfunc (FileCAPool) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.ca_pool.source.file\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(FileCAPool)\n\t\t},\n\t}\n}\n\n// Loads and decodes the DER and pem files to generate the certificate pool\nfunc (f *FileCAPool) Provision(ctx caddy.Context) error {\n\tcaPool := x509.NewCertPool()\n\tfor _, pemFile := range f.TrustedCACertPEMFiles {\n\t\tpemContents, err := os.ReadFile(pemFile)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"reading %s: %v\", pemFile, err)\n\t\t}\n\t\tcaPool.AppendCertsFromPEM(pemContents)\n\t}\n\tf.pool = caPool\n\treturn nil\n}\n\n// Syntax:\n//\n//\ttrust_pool file [<pem_file>...] 
{\n//\t\tpem_file <pem_file>...\n//\t}\n//\n// The 'pem_file' directive can be specified multiple times.\nfunc (fcap *FileCAPool) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume module name\n\tfcap.TrustedCACertPEMFiles = append(fcap.TrustedCACertPEMFiles, d.RemainingArgs()...)\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"pem_file\":\n\t\t\tfcap.TrustedCACertPEMFiles = append(fcap.TrustedCACertPEMFiles, d.RemainingArgs()...)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\tif len(fcap.TrustedCACertPEMFiles) == 0 {\n\t\treturn d.Err(\"no certificates specified\")\n\t}\n\treturn nil\n}\n\nfunc (f FileCAPool) CertPool() *x509.CertPool {\n\treturn f.pool\n}\n\n// PKIRootCAPool extracts the trusted root certificates from Caddy's native 'pki' app\ntype PKIRootCAPool struct {\n\t// List of the Authority names that are configured in the `pki` app whose root certificates are trusted\n\tAuthority []string `json:\"authority,omitempty\"`\n\n\tca   []*caddypki.CA\n\tpool *x509.CertPool\n}\n\n// CaddyModule implements caddy.Module.\nfunc (PKIRootCAPool) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.ca_pool.source.pki_root\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(PKIRootCAPool)\n\t\t},\n\t}\n}\n\n// Provision loads the PKI app and loads the root certificates into the certificate pool\nfunc (p *PKIRootCAPool) Provision(ctx caddy.Context) error {\n\tpkiApp, err := ctx.AppIfConfigured(\"pki\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"pki_root CA pool requires that a PKI app is configured: %v\", err)\n\t}\n\tpki := pkiApp.(*caddypki.PKI)\n\tfor _, caID := range p.Authority {\n\t\tc, err := pki.GetCA(ctx, caID)\n\t\tif err != nil || c == nil {\n\t\t\treturn fmt.Errorf(\"getting CA %s: %v\", caID, err)\n\t\t}\n\t\tp.ca = append(p.ca, c)\n\t}\n\n\tcaPool := x509.NewCertPool()\n\tfor _, ca := range p.ca {\n\t\tcaPool.AddCert(ca.RootCertificate())\n\t}\n\tp.pool = 
caPool\n\n\treturn nil\n}\n\n// Syntax:\n//\n//\ttrust_pool pki_root [<ca_name>...] {\n//\t\tauthority <ca_name>...\n//\t}\n//\n// The 'authority' directive can be specified multiple times.\nfunc (pkir *PKIRootCAPool) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume module name\n\tpkir.Authority = append(pkir.Authority, d.RemainingArgs()...)\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"authority\":\n\t\t\tpkir.Authority = append(pkir.Authority, d.RemainingArgs()...)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\tif len(pkir.Authority) == 0 {\n\t\treturn d.Err(\"no authorities specified\")\n\t}\n\treturn nil\n}\n\n// CertPool returns the certificate pool generated with root certificates from the PKI app\nfunc (p PKIRootCAPool) CertPool() *x509.CertPool {\n\treturn p.pool\n}\n\n// PKIIntermediateCAPool extracts the trusted intermediate certificates from Caddy's native 'pki' app\ntype PKIIntermediateCAPool struct {\n\t// List of the Authority names that are configured in the `pki` app whose intermediate certificates are trusted\n\tAuthority []string `json:\"authority,omitempty\"`\n\n\tca   []*caddypki.CA\n\tpool *x509.CertPool\n}\n\n// CaddyModule implements caddy.Module.\nfunc (PKIIntermediateCAPool) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.ca_pool.source.pki_intermediate\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(PKIIntermediateCAPool)\n\t\t},\n\t}\n}\n\n// Provision loads the PKI app and loads the intermediate certificates into the certificate pool\nfunc (p *PKIIntermediateCAPool) Provision(ctx caddy.Context) error {\n\tpkiApp, err := ctx.AppIfConfigured(\"pki\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"pki_intermediate CA pool requires that a PKI app is configured: %v\", err)\n\t}\n\tpki := pkiApp.(*caddypki.PKI)\n\tfor _, caID := range p.Authority {\n\t\tc, err := pki.GetCA(ctx, caID)\n\t\tif err != nil || c == nil 
{\n\t\t\treturn fmt.Errorf(\"getting CA %s: %v\", caID, err)\n\t\t}\n\t\tp.ca = append(p.ca, c)\n\t}\n\n\tcaPool := x509.NewCertPool()\n\tfor _, ca := range p.ca {\n\t\tfor _, c := range ca.IntermediateCertificateChain() {\n\t\t\tcaPool.AddCert(c)\n\t\t}\n\t}\n\tp.pool = caPool\n\treturn nil\n}\n\n// Syntax:\n//\n//\ttrust_pool pki_intermediate [<ca_name>...] {\n//\t\tauthority <ca_name>...\n//\t}\n//\n// The 'authority' directive can be specified multiple times.\nfunc (pic *PKIIntermediateCAPool) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume module name\n\tpic.Authority = append(pic.Authority, d.RemainingArgs()...)\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"authority\":\n\t\t\tpic.Authority = append(pic.Authority, d.RemainingArgs()...)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\tif len(pic.Authority) == 0 {\n\t\treturn d.Err(\"no authorities specified\")\n\t}\n\treturn nil\n}\n\n// CertPool returns the certificate pool generated with intermediate certificates from the PKI app\nfunc (p PKIIntermediateCAPool) CertPool() *x509.CertPool {\n\treturn p.pool\n}\n\n// StoragePool extracts the trusted root certificates from Caddy storage\ntype StoragePool struct {\n\t// The storage module where the trusted root certificates are stored. 
If absent,\n\t// Caddy's default storage is used.\n\tStorageRaw json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n\n\t// The storage key/index to the location of the certificates\n\tPEMKeys []string `json:\"pem_keys,omitempty\"`\n\n\tstorage certmagic.Storage\n\tpool    *x509.CertPool\n}\n\n// CaddyModule implements caddy.Module.\nfunc (StoragePool) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.ca_pool.source.storage\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(StoragePool)\n\t\t},\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (ca *StoragePool) Provision(ctx caddy.Context) error {\n\tif ca.StorageRaw != nil {\n\t\tval, err := ctx.LoadModule(ca, \"StorageRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading storage module: %v\", err)\n\t\t}\n\t\tcmStorage, err := val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating storage configuration: %v\", err)\n\t\t}\n\t\tca.storage = cmStorage\n\t}\n\tif ca.storage == nil {\n\t\tca.storage = ctx.Storage()\n\t}\n\tif len(ca.PEMKeys) == 0 {\n\t\treturn fmt.Errorf(\"no PEM keys specified\")\n\t}\n\tcaPool := x509.NewCertPool()\n\tfor _, caID := range ca.PEMKeys {\n\t\tbs, err := ca.storage.Load(ctx, caID)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"error loading cert '%s' from storage: %s\", caID, err)\n\t\t}\n\t\tif !caPool.AppendCertsFromPEM(bs) {\n\t\t\treturn fmt.Errorf(\"failed to add certificate '%s' to pool\", caID)\n\t\t}\n\t}\n\tca.pool = caPool\n\n\treturn nil\n}\n\n// Syntax:\n//\n//\ttrust_pool storage [<storage_keys>...] 
{\n//\t\tstorage <storage_module>\n//\t\tkeys\t<storage_keys>...\n//\t}\n//\n// The 'keys' directive can be specified multiple times.\n// The 'storage' directive is optional and defaults to the default storage module.\nfunc (sp *StoragePool) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume module name\n\tsp.PEMKeys = append(sp.PEMKeys, d.RemainingArgs()...)\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"storage\":\n\t\t\tif sp.StorageRaw != nil {\n\t\t\t\treturn d.Err(\"storage module already set\")\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmodStem := d.Val()\n\t\t\tmodID := \"caddy.storage.\" + modStem\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tstorage, ok := unm.(caddy.StorageConverter)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s is not a caddy.StorageConverter\", modID)\n\t\t\t}\n\t\t\tsp.StorageRaw = caddyconfig.JSONModuleObject(storage, \"module\", modStem, nil)\n\t\tcase \"keys\":\n\t\t\tsp.PEMKeys = append(sp.PEMKeys, d.RemainingArgs()...)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (p StoragePool) CertPool() *x509.CertPool {\n\treturn p.pool\n}\n\n// TLSConfig holds the TLS configuration for the\n// transport/client.\n// copied, with minor modifications, from modules/caddyhttp/reverseproxy/httptransport.go\ntype TLSConfig struct {\n\t// The guest module that provides the trusted certificate authority (CA) certificates\n\tCARaw json.RawMessage `json:\"ca,omitempty\" caddy:\"namespace=tls.ca_pool.source inline_key=provider\"`\n\n\t// If true, TLS verification of server certificates will be disabled.\n\t// This is insecure and may be removed in the future. 
Do not use this\n\t// option except in testing or local development environments.\n\tInsecureSkipVerify bool `json:\"insecure_skip_verify,omitempty\"`\n\n\t// The duration to allow a TLS handshake to a server. Default: No timeout.\n\tHandshakeTimeout caddy.Duration `json:\"handshake_timeout,omitempty\"`\n\n\t// The server name used when verifying the certificate received in the TLS\n\t// handshake. By default, this will use the upstream address' host part.\n\t// You only need to override this if your upstream address does not match the\n\t// certificate the upstream is likely to use. For example if the upstream\n\t// address is an IP address, then you would need to configure this to the\n\t// hostname being served by the upstream server. Currently, this does not\n\t// support placeholders because the TLS config is not provisioned on each\n\t// connection, so a static value must be used.\n\tServerName string `json:\"server_name,omitempty\"`\n\n\t// TLS renegotiation level. TLS renegotiation is the act of performing\n\t// subsequent handshakes on a connection after the first.\n\t// The level can be:\n\t//  - \"never\": (the default) disables renegotiation.\n\t//  - \"once\": allows a remote server to request renegotiation once per connection.\n\t//  - \"freely\": allows a remote server to repeatedly request renegotiation.\n\tRenegotiation string `json:\"renegotiation,omitempty\"`\n}\n\nfunc (t *TLSConfig) unmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"ca\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmodStem := d.Val()\n\t\t\tmodID := \"tls.ca_pool.source.\" + modStem\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tca, ok := unm.(CA)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s is not a caddytls.CA\", modID)\n\t\t\t}\n\t\t\tt.CARaw = caddyconfig.JSONModuleObject(ca, \"provider\", 
modStem, nil)\n\t\tcase \"insecure_skip_verify\":\n\t\t\tt.InsecureSkipVerify = true\n\t\tcase \"handshake_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"bad timeout value '%s': %v\", d.Val(), err)\n\t\t\t}\n\t\t\tt.HandshakeTimeout = caddy.Duration(dur)\n\t\tcase \"server_name\":\n\t\t\tif !d.Args(&t.ServerName) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\tcase \"renegotiation\":\n\t\t\tif !d.Args(&t.Renegotiation) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tswitch t.Renegotiation {\n\t\t\tcase \"never\", \"once\", \"freely\":\n\t\t\t\tcontinue\n\t\t\tdefault:\n\t\t\t\t// capture the invalid value before clearing it so the error is useful\n\t\t\t\tlevel := t.Renegotiation\n\t\t\t\tt.Renegotiation = \"\"\n\t\t\t\treturn d.Errf(\"unrecognized renegotiation level: %s\", level)\n\t\t\t}\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// makeTLSClientConfig returns a tls.Config usable by a client to a backend.\n// If there is no custom TLS configuration, a nil config may be returned.\n// copied, with minor modifications, from modules/caddyhttp/reverseproxy/httptransport.go\nfunc (t *TLSConfig) makeTLSClientConfig(ctx caddy.Context) (*tls.Config, error) {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok || repl == nil {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tcfg := new(tls.Config)\n\n\tif t.CARaw != nil {\n\t\tcaRaw, err := ctx.LoadModule(t, \"CARaw\")\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tca := caRaw.(CA)\n\t\tcfg.RootCAs = ca.CertPool()\n\t}\n\n\t// Renegotiation\n\tswitch t.Renegotiation {\n\tcase \"never\", \"\":\n\t\tcfg.Renegotiation = tls.RenegotiateNever\n\tcase \"once\":\n\t\tcfg.Renegotiation = tls.RenegotiateOnceAsClient\n\tcase \"freely\":\n\t\tcfg.Renegotiation = tls.RenegotiateFreelyAsClient\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"invalid TLS renegotiation level: %v\", t.Renegotiation)\n\t}\n\n\t// override for the server name used 
to verify the TLS handshake\n\tcfg.ServerName = repl.ReplaceKnown(t.ServerName, \"\")\n\n\t// throw all security out the window\n\tcfg.InsecureSkipVerify = t.InsecureSkipVerify\n\n\t// only return a config if it's not empty\n\tif reflect.DeepEqual(cfg, new(tls.Config)) {\n\t\treturn nil, nil\n\t}\n\n\treturn cfg, nil\n}\n\n// The HTTPCertPool fetches the trusted root certificates from HTTP(S)\n// endpoints. The TLS connection properties can be customized, including custom\n// trusted root certificates. One example usage of this module is to get the trusted\n// certificates from another Caddy instance that is running the PKI app and ACME server.\ntype HTTPCertPool struct {\n\t// The list of URLs that respond with PEM-encoded certificates to trust.\n\tEndpoints []string `json:\"endpoints,omitempty\"`\n\n\t// Customize the TLS connection knobs used during the HTTP call\n\tTLS *TLSConfig `json:\"tls,omitempty\"`\n\n\tpool *x509.CertPool\n}\n\n// CaddyModule implements caddy.Module.\nfunc (HTTPCertPool) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"tls.ca_pool.source.http\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn new(HTTPCertPool)\n\t\t},\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (hcp *HTTPCertPool) Provision(ctx caddy.Context) error {\n\tcaPool := x509.NewCertPool()\n\n\tcustomTransport := http.DefaultTransport.(*http.Transport).Clone()\n\tif hcp.TLS != nil {\n\t\ttlsConfig, err := hcp.TLS.makeTLSClientConfig(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcustomTransport.TLSClientConfig = tlsConfig\n\t}\n\n\thttpClient := *http.DefaultClient\n\thttpClient.Transport = customTransport\n\n\tfor _, uri := range hcp.Endpoints {\n\t\treq, err := http.NewRequestWithContext(ctx, http.MethodGet, uri, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tres, err := httpClient.Do(req) //nolint:gosec // SSRF false positive... 
uri comes from config\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tpembs, err := io.ReadAll(res.Body)\n\t\tres.Body.Close()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif !caPool.AppendCertsFromPEM(pembs) {\n\t\t\treturn fmt.Errorf(\"failed to add certs from URL: %s\", uri)\n\t\t}\n\t}\n\thcp.pool = caPool\n\treturn nil\n}\n\n// Syntax:\n//\n//\ttrust_pool http [<endpoints...>] {\n//\t\t\tendpoints \t<endpoints...>\n//\t\t\ttls \t\t<tls_config>\n//\t}\n//\n// tls_config:\n//\n//\t\tca <ca_module>\n//\t\tinsecure_skip_verify\n//\t\thandshake_timeout <duration>\n//\t\tserver_name <name>\n//\t\trenegotiation <never|once|freely>\n//\n//\t<ca_module> is the name of the CA module to source the trust\n//\tcertificate pool and follows the syntax of the named CA module.\nfunc (hcp *HTTPCertPool) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume module name\n\thcp.Endpoints = append(hcp.Endpoints, d.RemainingArgs()...)\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"endpoints\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\thcp.Endpoints = append(hcp.Endpoints, d.RemainingArgs()...)\n\t\tcase \"tls\":\n\t\t\tif hcp.TLS != nil {\n\t\t\t\treturn d.Err(\"tls block already defined\")\n\t\t\t}\n\t\t\thcp.TLS = new(TLSConfig)\n\t\t\tif err := hcp.TLS.unmarshalCaddyfile(d); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized directive: %s\", d.Val())\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Validate reports an error if the endpoints are not valid URLs\nfunc (hcp HTTPCertPool) Validate() (err error) {\n\tfor _, u := range hcp.Endpoints {\n\t\t_, e := url.Parse(u)\n\t\tif e != nil {\n\t\t\terr = errors.Join(err, e)\n\t\t}\n\t}\n\treturn err\n}\n\n// CertPool returns the certificate pool generated from the HTTP responses\nfunc (hcp HTTPCertPool) CertPool() *x509.CertPool {\n\treturn hcp.pool\n}\n\nvar (\n\t_ caddy.Module          = 
(*InlineCAPool)(nil)\n\t_ caddy.Provisioner     = (*InlineCAPool)(nil)\n\t_ CA                    = (*InlineCAPool)(nil)\n\t_ caddyfile.Unmarshaler = (*InlineCAPool)(nil)\n\n\t_ caddy.Module          = (*FileCAPool)(nil)\n\t_ caddy.Provisioner     = (*FileCAPool)(nil)\n\t_ CA                    = (*FileCAPool)(nil)\n\t_ caddyfile.Unmarshaler = (*FileCAPool)(nil)\n\n\t_ caddy.Module          = (*PKIRootCAPool)(nil)\n\t_ caddy.Provisioner     = (*PKIRootCAPool)(nil)\n\t_ CA                    = (*PKIRootCAPool)(nil)\n\t_ caddyfile.Unmarshaler = (*PKIRootCAPool)(nil)\n\n\t_ caddy.Module          = (*PKIIntermediateCAPool)(nil)\n\t_ caddy.Provisioner     = (*PKIIntermediateCAPool)(nil)\n\t_ CA                    = (*PKIIntermediateCAPool)(nil)\n\t_ caddyfile.Unmarshaler = (*PKIIntermediateCAPool)(nil)\n\n\t_ caddy.Module          = (*StoragePool)(nil)\n\t_ caddy.Provisioner     = (*StoragePool)(nil)\n\t_ CA                    = (*StoragePool)(nil)\n\t_ caddyfile.Unmarshaler = (*StoragePool)(nil)\n\n\t_ caddy.Module          = (*HTTPCertPool)(nil)\n\t_ caddy.Provisioner     = (*HTTPCertPool)(nil)\n\t_ caddy.Validator       = (*HTTPCertPool)(nil)\n\t_ CA                    = (*HTTPCertPool)(nil)\n\t_ caddyfile.Unmarshaler = (*HTTPCertPool)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/capools_test.go",
    "content": "package caddytls\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/filestorage\"\n)\n\nconst (\n\ttest_der_1       = `MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==`\n\ttest_cert_file_1 = \"../../caddytest/caddy.ca.cer\"\n)\n\nfunc TestInlineCAPoolUnmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texpected InlineCAPool\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"configuring no certificatest produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tinline {\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"configuring 
certificates as arguments in-line produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\t\tinline %s\n\t\t\t\t`, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"single cert\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tinline {\n\t\t\t\t\ttrust_der %s\n\t\t\t\t}\n\t\t\t\t`, test_der_1)),\n\t\t\t},\n\t\t\texpected: InlineCAPool{\n\t\t\t\tTrustedCACerts: []string{test_der_1},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple certs in one line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tinline {\n\t\t\t\t\ttrust_der %s %s\n\t\t\t\t}\n\t\t\t\t`, test_der_1, test_der_1),\n\t\t\t\t),\n\t\t\t},\n\t\t\texpected: InlineCAPool{\n\t\t\t\tTrustedCACerts: []string{test_der_1, test_der_1},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple certs in multiple lines\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\t\tinline {\n\t\t\t\t\t\ttrust_der %s\n\t\t\t\t\t\ttrust_der %s\n\t\t\t\t\t}\n\t\t\t\t`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\texpected: InlineCAPool{\n\t\t\t\tTrustedCACerts: []string{test_der_1, test_der_1},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\ticp := &InlineCAPool{}\n\t\t\tif err := icp.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"InlineCAPool.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, icp) {\n\t\t\t\tt.Errorf(\"InlineCAPool.UnmarshalCaddyfile() = %v, want %v\", icp, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestFileCAPoolUnmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\texpected FileCAPool\n\t\targs     args\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"configuring no certificates 
produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tfile {\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"configuring certificates as arguments in-line is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tfile %s\n\t\t\t\t`, test_cert_file_1)),\n\t\t\t},\n\t\t\texpected: FileCAPool{\n\t\t\t\tTrustedCACertPEMFiles: []string{test_cert_file_1},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"single cert\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tfile {\n\t\t\t\t\tpem_file %s\n\t\t\t\t}\n\t\t\t\t`, test_cert_file_1)),\n\t\t\t},\n\t\t\texpected: FileCAPool{\n\t\t\t\tTrustedCACertPEMFiles: []string{test_cert_file_1},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple certs inline and in-block are merged\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tfile %s {\n\t\t\t\t\tpem_file %s\n\t\t\t\t}\n\t\t\t\t`, test_cert_file_1, test_cert_file_1)),\n\t\t\t},\n\t\t\texpected: FileCAPool{\n\t\t\t\tTrustedCACertPEMFiles: []string{test_cert_file_1, test_cert_file_1},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple certs in one line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tfile {\n\t\t\t\t\tpem_file %s %s\n\t\t\t\t}\n\t\t\t\t`, test_der_1, test_der_1),\n\t\t\t\t),\n\t\t\t},\n\t\t\texpected: FileCAPool{\n\t\t\t\tTrustedCACertPEMFiles: []string{test_der_1, test_der_1},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple certs in multiple lines\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\t\tfile {\n\t\t\t\t\t\tpem_file %s\n\t\t\t\t\t\tpem_file %s\n\t\t\t\t\t}\n\t\t\t\t`, test_cert_file_1, test_cert_file_1)),\n\t\t\t},\n\t\t\texpected: FileCAPool{\n\t\t\t\tTrustedCACertPEMFiles: []string{test_cert_file_1, test_cert_file_1},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range 
tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tfcap := &FileCAPool{}\n\t\t\tif err := fcap.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"FileCAPool.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, fcap) {\n\t\t\t\tt.Errorf(\"FileCAPool.UnmarshalCaddyfile() = %v, want %v\", fcap, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPKIRootCAPoolUnmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\texpected PKIRootCAPool\n\t\targs     args\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"configuring no certificates produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tpki_root {\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"single authority as arguments in-line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_root ca_1\n\t\t\t\t`),\n\t\t\t},\n\t\t\texpected: PKIRootCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple authorities as arguments in-line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_root ca_1 ca_2\n\t\t\t\t`),\n\t\t\t},\n\t\t\texpected: PKIRootCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\", \"ca_2\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"single authority in block\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_root {\n\t\t\t\t\tauthority ca_1\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: PKIRootCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple authorities in one line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_root {\n\t\t\t\t\tauthority ca_1 ca_2\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: PKIRootCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\", 
\"ca_2\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple authorities in multiple lines\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tpki_root {\n\t\t\t\t\t\tauthority ca_1\n\t\t\t\t\t\tauthority ca_2\n\t\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: PKIRootCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\", \"ca_2\"},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tpkir := &PKIRootCAPool{}\n\t\t\tif err := pkir.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"PKIRootCAPool.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, pkir) {\n\t\t\t\tt.Errorf(\"PKIRootCAPool.UnmarshalCaddyfile() = %v, want %v\", pkir, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPKIIntermediateCAPoolUnmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\texpected PKIIntermediateCAPool\n\t\targs     args\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"configuring no certificatest produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_intermediate {\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"single authority as arguments in-line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`pki_intermediate ca_1`),\n\t\t\t},\n\t\t\texpected: PKIIntermediateCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple authorities as arguments in-line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`pki_intermediate ca_1 ca_2`),\n\t\t\t},\n\t\t\texpected: PKIIntermediateCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\", \"ca_2\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"single authority in block\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_intermediate {\n\t\t\t\t\tauthority 
ca_1\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: PKIIntermediateCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple authorities in one line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tpki_intermediate {\n\t\t\t\t\tauthority ca_1 ca_2\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: PKIIntermediateCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\", \"ca_2\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple authorities in multiple lines\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tpki_intermediate {\n\t\t\t\t\t\tauthority ca_1\n\t\t\t\t\t\tauthority ca_2\n\t\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: PKIIntermediateCAPool{\n\t\t\t\tAuthority: []string{\"ca_1\", \"ca_2\"},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tpic := &PKIIntermediateCAPool{}\n\t\t\tif err := pic.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"PKIIntermediateCAPool.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, pic) {\n\t\t\t\tt.Errorf(\"PKIIntermediateCAPool.UnmarshalCaddyfile() = %v, want %v\", pic, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestStoragePoolUnmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texpected StoragePool\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"empty block\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`storage {\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: StoragePool{},\n\t\t\twantErr:  false,\n\t\t},\n\t\t{\n\t\t\tname: \"providing single storage key inline\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`storage key-1`),\n\t\t\t},\n\t\t\texpected: StoragePool{\n\t\t\t\tPEMKeys: []string{\"key-1\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: 
\"providing multiple storage keys inline\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`storage key-1 key-2`),\n\t\t\t},\n\t\t\texpected: StoragePool{\n\t\t\t\tPEMKeys: []string{\"key-1\", \"key-2\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"providing keys inside block without specifying storage type\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tstorage {\n\t\t\t\t\t\tkeys key-1 key-2\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\texpected: StoragePool{\n\t\t\t\tPEMKeys: []string{\"key-1\", \"key-2\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"providing keys in-line and inside block merges them\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`storage key-1 key-2 key-3 {\n\t\t\t\t\tkeys key-4 key-5\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: StoragePool{\n\t\t\t\tPEMKeys: []string{\"key-1\", \"key-2\", \"key-3\", \"key-4\", \"key-5\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"specifying storage type in block\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`storage {\n\t\t\t\t\tstorage file_system /var/caddy/storage\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: StoragePool{\n\t\t\t\tStorageRaw: json.RawMessage(`{\"module\":\"file_system\",\"root\":\"/var/caddy/storage\"}`),\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tsp := &StoragePool{}\n\t\t\tif err := sp.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"StoragePool.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, sp) {\n\t\t\t\tt.Errorf(\"StoragePool.UnmarshalCaddyfile() = %s, want %s\", sp.StorageRaw, tt.expected.StorageRaw)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestTLSConfig_unmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     
string\n\t\targs     args\n\t\texpected TLSConfig\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"no arguments is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(` {\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'renegotiation' to 'never' is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(` {\n\t\t\t\t\trenegotiation never\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tRenegotiation: \"never\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'renegotiation' to 'once' is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(` {\n\t\t\t\t\trenegotiation once\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tRenegotiation: \"once\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'renegotiation' to 'freely' is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(` {\n\t\t\t\t\trenegotiation freely\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tRenegotiation: \"freely\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'renegotiation' to other than 'never', 'once', or 'freely' is invalid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(` {\n\t\t\t\t\trenegotiation foo\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'renegotiation' without argument is invalid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(` {\n\t\t\t\t\trenegotiation\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'ca' without argument is an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\tca\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'ca' to 'file' with in-line cert is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\tca file /var/caddy/ca.pem\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tCARaw: 
[]byte(`{\"pem_files\":[\"/var/caddy/ca.pem\"],\"provider\":\"file\"}`),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'ca' to 'file' with appropriate block is valid\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\tca file /var/caddy/ca.pem {\n\t\t\t\t\t\tpem_file /var/caddy/ca.pem\n\t\t\t\t\t}\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tCARaw: []byte(`{\"pem_files\":[\"/var/caddy/ca.pem\",\"/var/caddy/ca.pem\"],\"provider\":\"file\"}`),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'ca' multiple times is an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`{\n\t\t\t\t\tca file /var/caddy/ca.pem {\n\t\t\t\t\t\tpem_file /var/caddy/ca.pem\n\t\t\t\t\t}\n\t\t\t\t\tca inline %s\n\t\t\t\t}`, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'handshake_timeout' without value is an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\thandshake_timeout\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'handshake_timeout' properly is successful\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\thandshake_timeout 42m\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tHandshakeTimeout: caddy.Duration(42 * time.Minute),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'server_name' without value is an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\tserver_name\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'server_name' properly is successful\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`{\n\t\t\t\t\tserver_name example.com\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: TLSConfig{\n\t\t\t\tServerName: \"example.com\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"unsupported directives are errors\",\n\t\t\targs: args{\n\t\t\t\td: 
caddyfile.NewTestDispenser(`{\n\t\t\t\t\tfoo\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\ttr := &TLSConfig{}\n\t\t\tif err := tr.unmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"TLSConfig.unmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, tr) {\n\t\t\t\tt.Errorf(\"TLSConfig.unmarshalCaddyfile() = %v, want %v\", tr, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestHTTPCertPoolUnmarshalCaddyfile(t *testing.T) {\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texpected HTTPCertPool\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"no block, inline http endpoint\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`http http://localhost/ca-certs`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"http://localhost/ca-certs\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"no block, inline https endpoint\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`http https://localhost/ca-certs`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"https://localhost/ca-certs\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"no block, mixed http and https endpoints inline\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`http http://localhost/ca-certs https://localhost/ca-certs`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"http://localhost/ca-certs\", \"https://localhost/ca-certs\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple endpoints in separate lines in block\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\thttp {\n\t\t\t\t\t\tendpoints http://localhost/ca-certs\n\t\t\t\t\t\tendpoints 
http://remotehost/ca-certs\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"http://localhost/ca-certs\", \"http://remotehost/ca-certs\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"endpoints defined inline and in block are merged\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`http http://localhost/ca-certs {\n\t\t\t\t\tendpoints http://remotehost/ca-certs\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"http://localhost/ca-certs\", \"http://remotehost/ca-certs\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple endpoints defined in block on the same line\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`http {\n\t\t\t\t\tendpoints http://remotehost/ca-certs http://localhost/ca-certs\n\t\t\t\t}`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"http://remotehost/ca-certs\", \"http://localhost/ca-certs\"},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"declaring 'endpoints' in block without argument is an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`http {\n\t\t\t\t\tendpoints\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"multiple endpoints in separate lines in block with tls config\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\thttp {\n\t\t\t\t\t\tendpoints http://localhost/ca-certs\n\t\t\t\t\t\tendpoints http://remotehost/ca-certs\n\t\t\t\t\t\ttls {\n\t\t\t\t\t\t\trenegotiation freely\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\texpected: HTTPCertPool{\n\t\t\t\tEndpoints: []string{\"http://localhost/ca-certs\", \"http://remotehost/ca-certs\"},\n\t\t\t\tTLS: &TLSConfig{\n\t\t\t\t\tRenegotiation: \"freely\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\thcp := &HTTPCertPool{}\n\t\t\tif err := 
hcp.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"HTTPCertPool.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, hcp) {\n\t\t\t\tt.Errorf(\"HTTPCertPool.UnmarshalCaddyfile() = %v, want %v\", hcp, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddytls/certmanagers.go",
    "content": "package caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/tailscale/tscert\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Tailscale{})\n\tcaddy.RegisterModule(HTTPCertGetter{})\n}\n\n// Tailscale is a module that can get certificates from the local Tailscale process.\ntype Tailscale struct {\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Tailscale) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.get_certificate.tailscale\",\n\t\tNew: func() caddy.Module { return new(Tailscale) },\n\t}\n}\n\nfunc (ts *Tailscale) Provision(ctx caddy.Context) error {\n\tts.logger = ctx.Logger()\n\treturn nil\n}\n\nfunc (ts Tailscale) GetCertificate(ctx context.Context, hello *tls.ClientHelloInfo) (*tls.Certificate, error) {\n\tcanGetCert, err := ts.canHazCertificate(ctx, hello)\n\tif err == nil && !canGetCert {\n\t\treturn nil, nil // pass-thru: Tailscale can't offer a cert for this name\n\t}\n\tif err != nil {\n\t\tif c := ts.logger.Check(zapcore.WarnLevel, \"could not get status; will try to get certificate anyway\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t}\n\treturn tscert.GetCertificateWithContext(ctx, hello)\n}\n\n// canHazCertificate returns true if Tailscale reports it can get a certificate for the given ClientHello.\nfunc (ts Tailscale) canHazCertificate(ctx context.Context, hello *tls.ClientHelloInfo) (bool, error) {\n\tif !strings.HasSuffix(strings.ToLower(hello.ServerName), tailscaleDomainAliasEnding) {\n\t\treturn false, nil\n\t}\n\tstatus, err := tscert.GetStatus(ctx)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tfor _, domain := range status.CertDomains {\n\t\tif 
certmagic.MatchWildcard(hello.ServerName, domain) {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\treturn false, nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into ts.\n//\n//\t... tailscale\nfunc (Tailscale) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume cert manager name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\n// tailscaleDomainAliasEnding is the ending for all Tailscale custom domains.\nconst tailscaleDomainAliasEnding = \".ts.net\"\n\n// HTTPCertGetter can get a certificate via HTTP(S) request.\ntype HTTPCertGetter struct {\n\t// The URL from which to download the certificate. Required.\n\t//\n\t// The URL will be augmented with query string parameters taken\n\t// from the TLS handshake:\n\t//\n\t// - server_name: The SNI value\n\t// - signature_schemes: Comma-separated list of hex IDs of signatures\n\t// - cipher_suites: Comma-separated list of hex IDs of cipher suites\n\t//\n\t// To be valid, the response must be HTTP 200 with a PEM body\n\t// consisting of blocks for the certificate chain and the private\n\t// key.\n\t//\n\t// To indicate that this manager is not managing a certificate for\n\t// the described handshake, the endpoint should return HTTP 204\n\t// (No Content). 
Error statuses will indicate that the manager is\n\t// capable of providing a certificate but was unable to.\n\tURL string `json:\"url,omitempty\"`\n\n\tctx context.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (hcg HTTPCertGetter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.get_certificate.http\",\n\t\tNew: func() caddy.Module { return new(HTTPCertGetter) },\n\t}\n}\n\nfunc (hcg *HTTPCertGetter) Provision(ctx caddy.Context) error {\n\thcg.ctx = ctx\n\tif hcg.URL == \"\" {\n\t\treturn fmt.Errorf(\"URL is required\")\n\t}\n\treturn nil\n}\n\nfunc (hcg HTTPCertGetter) GetCertificate(ctx context.Context, hello *tls.ClientHelloInfo) (*tls.Certificate, error) {\n\tsigs := make([]string, len(hello.SignatureSchemes))\n\tfor i, sig := range hello.SignatureSchemes {\n\t\tsigs[i] = fmt.Sprintf(\"%x\", uint16(sig)) // you won't believe what %x uses if the val is a Stringer\n\t}\n\tsuites := make([]string, len(hello.CipherSuites))\n\tfor i, cs := range hello.CipherSuites {\n\t\tsuites[i] = fmt.Sprintf(\"%x\", cs)\n\t}\n\n\tparsed, err := url.Parse(hcg.URL)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tqs := parsed.Query()\n\tqs.Set(\"server_name\", hello.ServerName)\n\tqs.Set(\"signature_schemes\", strings.Join(sigs, \",\"))\n\tqs.Set(\"cipher_suites\", strings.Join(suites, \",\"))\n\tlocalIP, _, err := net.SplitHostPort(hello.Conn.LocalAddr().String())\n\tif err == nil && localIP != \"\" {\n\t\tqs.Set(\"local_ip\", localIP)\n\t}\n\tparsed.RawQuery = qs.Encode()\n\n\treq, err := http.NewRequestWithContext(hcg.ctx, http.MethodGet, parsed.String(), nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tresp, err := http.DefaultClient.Do(req) //nolint:gosec // SSRF false positive... 
request URI comes from config\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer resp.Body.Close()\n\tif resp.StatusCode == http.StatusNoContent {\n\t\t// endpoint is not managing certs for this handshake\n\t\treturn nil, nil\n\t}\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn nil, fmt.Errorf(\"got HTTP %d\", resp.StatusCode)\n\t}\n\n\tbodyBytes, err := io.ReadAll(resp.Body)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error reading response body: %v\", err)\n\t}\n\n\tcert, err := tlsCertFromCertAndKeyPEMBundle(bodyBytes)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &cert, nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into hcg.\n//\n//\t... http <url>\nfunc (hcg *HTTPCertGetter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume cert manager name\n\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\thcg.URL = d.Val()\n\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tif d.NextBlock(0) {\n\t\treturn d.Err(\"block not allowed here\")\n\t}\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ certmagic.Manager     = (*Tailscale)(nil)\n\t_ caddy.Provisioner     = (*Tailscale)(nil)\n\t_ caddyfile.Unmarshaler = (*Tailscale)(nil)\n\n\t_ certmagic.Manager     = (*HTTPCertGetter)(nil)\n\t_ caddy.Provisioner     = (*HTTPCertGetter)(nil)\n\t_ caddyfile.Unmarshaler = (*HTTPCertGetter)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/certselection.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"math/big\"\n\t\"slices\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\n// CustomCertSelectionPolicy represents a policy for selecting the certificate\n// used to complete a handshake when there may be multiple options. All fields\n// specified must match the candidate certificate for it to be chosen.\n// This was needed to solve https://github.com/caddyserver/caddy/issues/2588.\ntype CustomCertSelectionPolicy struct {\n\t// The certificate must have one of these serial numbers.\n\tSerialNumber []bigInt `json:\"serial_number,omitempty\"`\n\n\t// The certificate must have one of these organization names.\n\tSubjectOrganization []string `json:\"subject_organization,omitempty\"`\n\n\t// The certificate must use this public key algorithm.\n\tPublicKeyAlgorithm PublicKeyAlgorithm `json:\"public_key_algorithm,omitempty\"`\n\n\t// The certificate must have at least one of the tags in the list.\n\tAnyTag []string `json:\"any_tag,omitempty\"`\n\n\t// The certificate must have all of the tags in the list.\n\tAllTags []string `json:\"all_tags,omitempty\"`\n}\n\n// SelectCertificate implements certmagic.CertificateSelector. 
It\n// only chooses a certificate that at least meets the criteria in\n// p. It then chooses the first non-expired certificate that is\n// compatible with the client. If none are valid, it chooses the\n// first viable candidate anyway.\nfunc (p CustomCertSelectionPolicy) SelectCertificate(hello *tls.ClientHelloInfo, choices []certmagic.Certificate) (certmagic.Certificate, error) {\n\tviable := make([]certmagic.Certificate, 0, len(choices))\n\nnextChoice:\n\tfor _, cert := range choices {\n\t\tif len(p.SerialNumber) > 0 {\n\t\t\tvar found bool\n\t\t\tfor _, sn := range p.SerialNumber {\n\t\t\t\tsnInt := sn.Int // avoid taking address of iteration variable (gosec warning)\n\t\t\t\tif cert.Leaf.SerialNumber.Cmp(&snInt) == 0 {\n\t\t\t\t\tfound = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !found {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif len(p.SubjectOrganization) > 0 {\n\t\t\tfound := slices.ContainsFunc(p.SubjectOrganization, func(s string) bool {\n\t\t\t\treturn slices.Contains(cert.Leaf.Subject.Organization, s)\n\t\t\t})\n\t\t\tif !found {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif p.PublicKeyAlgorithm != PublicKeyAlgorithm(x509.UnknownPublicKeyAlgorithm) &&\n\t\t\tPublicKeyAlgorithm(cert.Leaf.PublicKeyAlgorithm) != p.PublicKeyAlgorithm {\n\t\t\tcontinue\n\t\t}\n\n\t\tif len(p.AnyTag) > 0 {\n\t\t\tfound := slices.ContainsFunc(p.AnyTag, cert.HasTag)\n\t\t\tif !found {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif len(p.AllTags) > 0 {\n\t\t\tfor _, tag := range p.AllTags {\n\t\t\t\tif !cert.HasTag(tag) {\n\t\t\t\t\tcontinue nextChoice\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// this certificate at least meets the policy's requirements,\n\t\t// but we still have to check expiration and compatibility\n\t\tviable = append(viable, cert)\n\t}\n\n\tif len(viable) == 0 {\n\t\treturn certmagic.Certificate{}, fmt.Errorf(\"no certificates matched custom selection policy\")\n\t}\n\n\treturn certmagic.DefaultCertificateSelector(hello, viable)\n}\n\n// UnmarshalCaddyfile sets 
up the CustomCertSelectionPolicy from Caddyfile tokens. Syntax:\n//\n//\tcert_selection {\n//\t\tall_tags             <values...>\n//\t\tany_tag              <values...>\n//\t\tpublic_key_algorithm <dsa|ecdsa|rsa>\n//\t\tserial_number        <big_integers...>\n//\t\tsubject_organization <values...>\n//\t}\nfunc (p *CustomCertSelectionPolicy) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t_, wrapper := d.Next(), d.Val() // consume wrapper name\n\n\t// No same-line options are supported\n\tif d.CountRemainingArgs() > 0 {\n\t\treturn d.ArgErr()\n\t}\n\n\tvar hasPublicKeyAlgorithm bool\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\toptionName := d.Val()\n\t\tswitch optionName {\n\t\tcase \"all_tags\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tp.AllTags = append(p.AllTags, d.RemainingArgs()...)\n\t\tcase \"any_tag\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tp.AnyTag = append(p.AnyTag, d.RemainingArgs()...)\n\t\tcase \"public_key_algorithm\":\n\t\t\tif hasPublicKeyAlgorithm {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tif d.CountRemainingArgs() != 1 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\td.NextArg()\n\t\t\tif err := p.PublicKeyAlgorithm.UnmarshalJSON([]byte(d.Val())); err != nil {\n\t\t\t\treturn d.Errf(\"parsing %s option '%s': %v\", wrapper, optionName, err)\n\t\t\t}\n\t\t\thasPublicKeyAlgorithm = true\n\t\tcase \"serial_number\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfor d.NextArg() {\n\t\t\t\tval, bi := d.Val(), bigInt{}\n\t\t\t\t_, ok := bi.SetString(val, 10)\n\t\t\t\tif !ok {\n\t\t\t\t\treturn d.Errf(\"parsing %s option '%s': invalid big.int value %s\", wrapper, optionName, val)\n\t\t\t\t}\n\t\t\t\tp.SerialNumber = append(p.SerialNumber, bi)\n\t\t\t}\n\t\tcase \"subject_organization\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn 
d.ArgErr()\n\t\t\t}\n\t\t\tp.SubjectOrganization = append(p.SubjectOrganization, d.RemainingArgs()...)\n\t\tdefault:\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\t// No nested blocks are supported\n\t\tif d.NextBlock(nesting + 1) {\n\t\t\treturn d.Errf(\"malformed %s option '%s': blocks are not supported\", wrapper, optionName)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// bigInt is a big.Int type that interops with JSON encodings as a string.\ntype bigInt struct{ big.Int }\n\nfunc (bi bigInt) MarshalJSON() ([]byte, error) {\n\treturn json.Marshal(bi.String())\n}\n\nfunc (bi *bigInt) UnmarshalJSON(p []byte) error {\n\tif string(p) == \"null\" {\n\t\treturn nil\n\t}\n\tvar stringRep string\n\terr := json.Unmarshal(p, &stringRep)\n\tif err != nil {\n\t\treturn err\n\t}\n\t_, ok := bi.SetString(stringRep, 10)\n\tif !ok {\n\t\treturn fmt.Errorf(\"not a valid big integer: %s\", p)\n\t}\n\treturn nil\n}\n\n// Interface guard\nvar _ caddyfile.Unmarshaler = (*CustomCertSelectionPolicy)(nil)\n"
  },
  {
    "path": "modules/caddytls/connpolicy.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"reflect\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/mholt/acmez/v3\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(LeafCertClientAuth{})\n}\n\n// ConnectionPolicies govern the establishment of TLS connections. It is\n// an ordered group of connection policies; the first matching policy will\n// be used to configure TLS connections at handshake-time.\ntype ConnectionPolicies []*ConnectionPolicy\n\n// Provision sets up each connection policy. 
It should be called\n// during the Validate() phase, after the TLS app (if any) is\n// already set up.\nfunc (cp ConnectionPolicies) Provision(ctx caddy.Context) error {\n\tfor i, pol := range cp {\n\t\t// matchers\n\t\tmods, err := ctx.LoadModule(pol, \"MatchersRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading handshake matchers: %v\", err)\n\t\t}\n\t\tfor _, modIface := range mods.(map[string]any) {\n\t\t\tcp[i].matchers = append(cp[i].matchers, modIface.(ConnectionMatcher))\n\t\t}\n\n\t\t// enable HTTP/2 by default\n\t\tif pol.ALPN == nil {\n\t\t\tpol.ALPN = append(pol.ALPN, defaultALPN...)\n\t\t}\n\n\t\t// pre-build standard TLS config so we don't have to at handshake-time\n\t\terr = pol.buildStandardTLSConfig(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"connection policy %d: building standard TLS config: %s\", i, err)\n\t\t}\n\n\t\tif pol.ClientAuthentication != nil && len(pol.ClientAuthentication.VerifiersRaw) > 0 {\n\t\t\tclientCertValidations, err := ctx.LoadModule(pol.ClientAuthentication, \"VerifiersRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading client cert verifiers: %v\", err)\n\t\t\t}\n\t\t\tfor _, validator := range clientCertValidations.([]any) {\n\t\t\t\tcp[i].ClientAuthentication.verifiers = append(cp[i].ClientAuthentication.verifiers, validator.(ClientCertificateVerifier))\n\t\t\t}\n\t\t}\n\n\t\tif len(pol.HandshakeContextRaw) > 0 {\n\t\t\tmodIface, err := ctx.LoadModule(pol, \"HandshakeContextRaw\")\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"loading handshake context module: %v\", err)\n\t\t\t}\n\t\t\tcp[i].handshakeContext = modIface.(HandshakeContext)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// TLSConfig returns a standard-lib-compatible TLS configuration which\n// selects the first matching policy based on the ClientHello.\nfunc (cp ConnectionPolicies) TLSConfig(ctx caddy.Context) *tls.Config {\n\t// using ServerName to match policies is extremely common, especially in configs\n\t// with lots and lots 
of different policies; we can fast-track those by indexing\n\t// them by SNI, so we don't have to iterate potentially thousands of policies\n\t// (TODO: this map does not account for wildcards, see if this is a problem in practice? look for reports of high connection latency with wildcard certs but low latency for non-wildcards in multi-thousand-cert deployments)\n\tindexedBySNI := make(map[string]ConnectionPolicies)\n\tif len(cp) > 30 {\n\t\tfor _, p := range cp {\n\t\t\tfor _, m := range p.matchers {\n\t\t\t\tif sni, ok := m.(MatchServerName); ok {\n\t\t\t\t\tfor _, sniName := range sni {\n\t\t\t\t\t\t// index for fast lookups during handshakes\n\t\t\t\t\t\tindexedBySNI[sniName] = append(indexedBySNI[sniName], p)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tgetConfigForClient := func(hello *tls.ClientHelloInfo) (*tls.Config, error) {\n\t\t// filter policies by SNI first, if possible, to speed things up\n\t\t// when there may be lots of policies\n\t\tpossiblePolicies := cp\n\t\tif indexedPolicies, ok := indexedBySNI[hello.ServerName]; ok {\n\t\t\tpossiblePolicies = indexedPolicies\n\t\t}\n\n\tpolicyLoop:\n\t\tfor _, pol := range possiblePolicies {\n\t\t\tfor _, matcher := range pol.matchers {\n\t\t\t\tif !matcher.Match(hello) {\n\t\t\t\t\tcontinue policyLoop\n\t\t\t\t}\n\t\t\t}\n\t\t\tif pol.Drop {\n\t\t\t\treturn nil, fmt.Errorf(\"dropping connection\")\n\t\t\t}\n\t\t\treturn pol.TLSConfig, nil\n\t\t}\n\n\t\treturn nil, fmt.Errorf(\"no server TLS configuration available for ClientHello: %+v\", hello)\n\t}\n\n\ttlsCfg := &tls.Config{\n\t\tMinVersion:         tls.VersionTLS12,\n\t\tGetConfigForClient: getConfigForClient,\n\t}\n\n\t// enable ECH, if configured\n\tif tlsAppIface, err := ctx.AppIfConfigured(\"tls\"); err == nil {\n\t\ttlsApp := tlsAppIface.(*TLS)\n\n\t\tif tlsApp.EncryptedClientHello != nil && len(tlsApp.EncryptedClientHello.configs) > 0 {\n\t\t\t// if no publication was configured, we apply ECH to all server names by default,\n\t\t\t// but the 
TLS app needs to know what they are in this case, since they don't appear\n\t\t\t// in its config (remember, TLS connection policies are used by *other* apps to\n\t\t\t// run TLS servers) -- we skip names with placeholders\n\t\t\tif tlsApp.EncryptedClientHello.Publication == nil {\n\t\t\t\tvar echNames []string\n\t\t\t\trepl := caddy.NewReplacer()\n\t\t\t\tfor _, p := range cp {\n\t\t\t\t\tfor _, m := range p.matchers {\n\t\t\t\t\t\tif sni, ok := m.(MatchServerName); ok {\n\t\t\t\t\t\t\tfor _, name := range sni {\n\t\t\t\t\t\t\t\tfinalName := strings.ToLower(repl.ReplaceAll(name, \"\"))\n\t\t\t\t\t\t\t\techNames = append(echNames, finalName)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ttlsApp.RegisterServerNames(echNames)\n\t\t\t}\n\n\t\t\ttlsCfg.GetEncryptedClientHelloKeys = func(chi *tls.ClientHelloInfo) ([]tls.EncryptedClientHelloKey, error) {\n\t\t\t\ttlsApp.EncryptedClientHello.configsMu.RLock()\n\t\t\t\tdefer tlsApp.EncryptedClientHello.configsMu.RUnlock()\n\t\t\t\treturn tlsApp.EncryptedClientHello.stdlibReady, nil\n\t\t\t}\n\t\t}\n\t}\n\n\treturn tlsCfg\n}\n\n// ConnectionPolicy specifies the logic for handling a TLS handshake.\n// An empty policy is valid; safe and sensible defaults will be used.\ntype ConnectionPolicy struct {\n\t// How to match this policy with a TLS ClientHello. If\n\t// this policy is the first to match, it will be used.\n\tMatchersRaw caddy.ModuleMap `json:\"match,omitempty\" caddy:\"namespace=tls.handshake_match\"`\n\tmatchers    []ConnectionMatcher\n\n\t// How to choose a certificate if more than one matched\n\t// the given ServerName (SNI) value.\n\tCertSelection *CustomCertSelectionPolicy `json:\"certificate_selection,omitempty\"`\n\n\t// The list of cipher suites to support. Caddy's\n\t// defaults are modern and secure.\n\tCipherSuites []string `json:\"cipher_suites,omitempty\"`\n\n\t// The list of elliptic curves to support. 
Caddy's\n\t// defaults are modern and secure.\n\tCurves []string `json:\"curves,omitempty\"`\n\n\t// Protocols to use for Application-Layer Protocol\n\t// Negotiation (ALPN) during the handshake.\n\tALPN []string `json:\"alpn,omitempty\"`\n\n\t// Minimum TLS protocol version to allow. Default: `tls1.2`\n\tProtocolMin string `json:\"protocol_min,omitempty\"`\n\n\t// Maximum TLS protocol version to allow. Default: `tls1.3`\n\tProtocolMax string `json:\"protocol_max,omitempty\"`\n\n\t// Reject TLS connections. EXPERIMENTAL: May change.\n\tDrop bool `json:\"drop,omitempty\"`\n\n\t// Enables and configures TLS client authentication.\n\tClientAuthentication *ClientAuthentication `json:\"client_authentication,omitempty\"`\n\n\t// DefaultSNI becomes the ServerName in a ClientHello if there\n\t// is no policy configured for the empty SNI value.\n\tDefaultSNI string `json:\"default_sni,omitempty\"`\n\n\t// FallbackSNI becomes the ServerName in a ClientHello if\n\t// the original ServerName doesn't match any certificates\n\t// in the cache. The use cases for this are very niche;\n\t// typically if a client is a CDN and passes through the\n\t// ServerName of the downstream handshake but can accept\n\t// a certificate with the origin's hostname instead, then\n\t// you would set this to your origin's hostname. Note that\n\t// Caddy must be managing a certificate for this name.\n\t//\n\t// This feature is EXPERIMENTAL and subject to change or removal.\n\tFallbackSNI string `json:\"fallback_sni,omitempty\"`\n\n\t// Also known as \"SSLKEYLOGFILE\", TLS secrets will be written to\n\t// this file in NSS key log format which can then be parsed by\n\t// Wireshark and other tools. This is INSECURE as it allows other\n\t// programs or tools to decrypt TLS connections. 
However, this\n\t// capability can be useful for debugging and troubleshooting.\n\t// **ENABLING THIS LOG COMPROMISES SECURITY!**\n\t//\n\t// This feature is EXPERIMENTAL and subject to change or removal.\n\tInsecureSecretsLog string `json:\"insecure_secrets_log,omitempty\"`\n\n\t// A module that can manipulate the context passed into CertMagic's\n\t// certificate management functions during TLS handshakes.\n\t// EXPERIMENTAL - subject to change or removal.\n\tHandshakeContextRaw json.RawMessage `json:\"handshake_context,omitempty\" caddy:\"namespace=tls.context inline_key=module\"`\n\thandshakeContext    HandshakeContext\n\n\t// TLSConfig is the fully-formed, standard lib TLS config\n\t// used to serve TLS connections. Provision all\n\t// ConnectionPolicies to populate this. It is exported only\n\t// so it can be minimally adjusted after provisioning\n\t// if necessary (like to adjust NextProtos to disable HTTP/2),\n\t// and may be unexported in the future.\n\tTLSConfig *tls.Config `json:\"-\"`\n}\n\ntype HandshakeContext interface {\n\t// HandshakeContext returns a context to pass into CertMagic's\n\t// GetCertificate function used to serve, load, and manage certs\n\t// during TLS handshakes. Generally you'll start with the context\n\t// from the ClientHelloInfo, but you may use other information\n\t// from it as well. 
Return an error to abort the handshake.\n\tHandshakeContext(*tls.ClientHelloInfo) (context.Context, error)\n}\n\nfunc (p *ConnectionPolicy) buildStandardTLSConfig(ctx caddy.Context) error {\n\ttlsAppIface, err := ctx.App(\"tls\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting tls app: %v\", err)\n\t}\n\ttlsApp := tlsAppIface.(*TLS)\n\n\t// fill in some \"easy\" default values, but for other values\n\t// (such as slices), we should ensure that they start empty\n\t// so the user-provided config can fill them in; then we will\n\t// fill in a default config at the end if they are still unset\n\tcfg := &tls.Config{\n\t\tNextProtos: p.ALPN,\n\t\tGetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {\n\t\t\t// TODO: I don't love how this works: we pre-build certmagic configs\n\t\t\t// so that handshakes are faster. Unfortunately, certmagic configs are\n\t\t\t// comprised of settings from both a TLS connection policy and a TLS\n\t\t\t// automation policy. The only two fields (as of March 2020; v2 beta 17)\n\t\t\t// of a certmagic config that come from the TLS connection policy are\n\t\t\t// CertSelection and DefaultServerName, so an automation policy is what\n\t\t\t// builds the base certmagic config. Since the pre-built config is\n\t\t\t// shared, I don't think we can change any of its fields per-handshake,\n\t\t\t// hence the awkward shallow copy (dereference) here and the subsequent\n\t\t\t// changing of some of its fields. 
I'm worried this dereference allocates\n\t\t\t// more at handshake-time, but I don't know how to practically pre-build\n\t\t\t// a certmagic config for each combination of conn policy + automation policy...\n\t\t\tcfg := *tlsApp.getConfigForName(hello.ServerName)\n\t\t\tif p.CertSelection != nil {\n\t\t\t\t// you would think we could just set this whether or not\n\t\t\t\t// p.CertSelection is nil, but that leads to panics if\n\t\t\t\t// it is, because cfg.CertSelection is an interface,\n\t\t\t\t// so it will have a non-nil value even if the actual\n\t\t\t\t// value underlying it is nil (sigh)\n\t\t\t\tcfg.CertSelection = p.CertSelection\n\t\t\t}\n\t\t\tcfg.DefaultServerName = p.DefaultSNI\n\t\t\tcfg.FallbackServerName = p.FallbackSNI\n\n\t\t\t// TODO: experimental: if a handshake context module is configured, allow it\n\t\t\t// to modify the context before passing it into CertMagic's GetCertificate\n\t\t\tctx := hello.Context()\n\t\t\tif p.handshakeContext != nil {\n\t\t\t\tctx, err = p.handshakeContext.HandshakeContext(hello)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"handshake context: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn cfg.GetCertificateWithContext(ctx, hello)\n\t\t},\n\t\tMinVersion: tls.VersionTLS12,\n\t\tMaxVersion: tls.VersionTLS13,\n\t}\n\n\t// session tickets support\n\tif tlsApp.SessionTickets != nil {\n\t\tcfg.SessionTicketsDisabled = tlsApp.SessionTickets.Disabled\n\n\t\t// session ticket key rotation\n\t\ttlsApp.SessionTickets.register(cfg)\n\t\tctx.OnCancel(func() {\n\t\t\t// do cleanup when the context is canceled because,\n\t\t\t// though unlikely, it is possible that a context\n\t\t\t// needing a TLS server config could exist for less\n\t\t\t// than the lifetime of the whole app\n\t\t\ttlsApp.SessionTickets.unregister(cfg)\n\t\t})\n\t}\n\n\t// TODO: Clean up session ticket active locks in storage if app (or process) is being closed!\n\n\t// add all the cipher suites in order, without duplicates\n\tcipherSuitesAdded := 
make(map[uint16]struct{})\n\tfor _, csName := range p.CipherSuites {\n\t\tcsID := CipherSuiteID(csName)\n\t\tif csID == 0 {\n\t\t\treturn fmt.Errorf(\"unsupported cipher suite: %s\", csName)\n\t\t}\n\t\tif _, ok := cipherSuitesAdded[csID]; !ok {\n\t\t\tcipherSuitesAdded[csID] = struct{}{}\n\t\t\tcfg.CipherSuites = append(cfg.CipherSuites, csID)\n\t\t}\n\t}\n\n\t// add all the curve preferences in order, without duplicates\n\tcurvesAdded := make(map[tls.CurveID]struct{})\n\tfor _, curveName := range p.Curves {\n\t\tcurveID := SupportedCurves[curveName]\n\t\tif _, ok := curvesAdded[curveID]; !ok {\n\t\t\tcurvesAdded[curveID] = struct{}{}\n\t\t\tcfg.CurvePreferences = append(cfg.CurvePreferences, curveID)\n\t\t}\n\t}\n\n\t// ensure ALPN includes the ACME TLS-ALPN protocol\n\talpnFound := slices.Contains(p.ALPN, acmez.ACMETLS1Protocol)\n\tif !alpnFound && (cfg.NextProtos == nil || len(cfg.NextProtos) > 0) {\n\t\tcfg.NextProtos = append(cfg.NextProtos, acmez.ACMETLS1Protocol)\n\t}\n\n\t// min and max protocol versions\n\tif (p.ProtocolMin != \"\" && p.ProtocolMax != \"\") && p.ProtocolMin > p.ProtocolMax {\n\t\treturn fmt.Errorf(\"protocol min (%x) cannot be greater than protocol max (%x)\", p.ProtocolMin, p.ProtocolMax)\n\t}\n\tif p.ProtocolMin != \"\" {\n\t\tcfg.MinVersion = SupportedProtocols[p.ProtocolMin]\n\t}\n\tif p.ProtocolMax != \"\" {\n\t\tcfg.MaxVersion = SupportedProtocols[p.ProtocolMax]\n\t}\n\n\t// client authentication\n\tif p.ClientAuthentication != nil {\n\t\tif err := p.ClientAuthentication.provision(ctx); err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning client CA: %v\", err)\n\t\t}\n\t\tif err := p.ClientAuthentication.ConfigureTLSConfig(cfg); err != nil {\n\t\t\treturn fmt.Errorf(\"configuring TLS client authentication: %v\", err)\n\t\t}\n\n\t\t// Prevent privilege escalation in case multiple vhosts are configured for\n\t\t// this TLS server; we could potentially figure out if that's the case, but\n\t\t// that might be complex to get right every 
time. Actually, two proper\n\t\t// solutions could leave tickets enabled, but I am not sure how to do them\n\t\t// properly without significant time investment; there may be new Go\n\t\t// APIs that allow this (Wrap/UnwrapSession?) but I do not know how to use\n\t\t// them at this time. TODO: one of these is a possible future enhancement:\n\t\t// A) Prevent resumptions across server identities (certificates): binding the ticket to the\n\t\t// certificate we would serve in a full handshake, or even bind a ticket to the exact SNI\n\t\t// it was issued under (though there are proposals for session resumption across hostnames).\n\t\t// B) Prevent resumptions falsely authenticating a client: include the realm in the ticket,\n\t\t// so that it can be validated upon resumption.\n\t\tcfg.SessionTicketsDisabled = true\n\t}\n\n\tif p.InsecureSecretsLog != \"\" {\n\t\tfilename, err := caddy.NewReplacer().ReplaceOrErr(p.InsecureSecretsLog, true, true)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfilename, err = caddy.FastAbs(filename)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlogFile, _, err := secretsLogPool.LoadOrNew(filename, func() (caddy.Destructor, error) {\n\t\t\tw, err := os.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o600)\n\t\t\treturn destructableWriter{w}, err\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.OnCancel(func() { _, _ = secretsLogPool.Delete(filename) })\n\n\t\tcfg.KeyLogWriter = logFile.(io.Writer)\n\n\t\tif c := tlsApp.logger.Check(zapcore.WarnLevel, \"TLS SECURITY COMPROMISED: secrets logging is enabled!\"); c != nil {\n\t\t\tc.Write(zap.String(\"log_filename\", filename))\n\t\t}\n\t}\n\n\tsetDefaultTLSParams(cfg)\n\n\tp.TLSConfig = cfg\n\n\treturn nil\n}\n\n// SettingsEmpty returns true if p's settings (fields\n// except the matchers) are all empty/unset.\nfunc (p ConnectionPolicy) SettingsEmpty() bool {\n\treturn p.CertSelection == nil &&\n\t\tp.CipherSuites == nil &&\n\t\tp.Curves == nil &&\n\t\tp.ALPN 
== nil &&\n\t\tp.ProtocolMin == \"\" &&\n\t\tp.ProtocolMax == \"\" &&\n\t\tp.ClientAuthentication == nil &&\n\t\tp.DefaultSNI == \"\" &&\n\t\tp.FallbackSNI == \"\" &&\n\t\tp.InsecureSecretsLog == \"\"\n}\n\n// SettingsEqual returns true if p's settings (fields\n// except the matchers) are the same as q.\nfunc (p ConnectionPolicy) SettingsEqual(q ConnectionPolicy) bool {\n\tp.MatchersRaw = nil\n\tq.MatchersRaw = nil\n\treturn reflect.DeepEqual(p, q)\n}\n\n// UnmarshalCaddyfile sets up the ConnectionPolicy from Caddyfile tokens. Syntax:\n//\n//\tconnection_policy {\n//\t\talpn                  <values...>\n//\t\tcert_selection {\n//\t\t\t...\n//\t\t}\n//\t\tciphers               <cipher_suites...>\n//\t\tclient_auth {\n//\t\t\t...\n//\t\t}\n//\t\tcurves                <curves...>\n//\t\tdefault_sni           <server_name>\n//\t\tmatch {\n//\t\t\t...\n//\t\t}\n//\t\tprotocols             <min> [<max>]\n//\t\t# EXPERIMENTAL:\n//\t\tdrop\n//\t\tfallback_sni          <server_name>\n//\t\tinsecure_secrets_log  <log_file>\n//\t}\nfunc (cp *ConnectionPolicy) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t_, wrapper := d.Next(), d.Val()\n\n\t// No same-line options are supported\n\tif d.CountRemainingArgs() > 0 {\n\t\treturn d.ArgErr()\n\t}\n\n\tvar hasCertSelection, hasClientAuth, hasDefaultSNI, hasDrop,\n\t\thasFallbackSNI, hasInsecureSecretsLog, hasMatch, hasProtocols bool\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\toptionName := d.Val()\n\t\tswitch optionName {\n\t\tcase \"alpn\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tcp.ALPN = append(cp.ALPN, d.RemainingArgs()...)\n\t\tcase \"cert_selection\":\n\t\t\tif hasCertSelection {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tp := &CustomCertSelectionPolicy{}\n\t\t\tif err := p.UnmarshalCaddyfile(d.NewFromNextSegment()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcp.CertSelection, hasCertSelection = p, 
true\n\t\tcase \"client_auth\":\n\t\t\tif hasClientAuth {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tca := &ClientAuthentication{}\n\t\t\tif err := ca.UnmarshalCaddyfile(d.NewFromNextSegment()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcp.ClientAuthentication, hasClientAuth = ca, true\n\t\tcase \"ciphers\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tcp.CipherSuites = append(cp.CipherSuites, d.RemainingArgs()...)\n\t\tcase \"curves\":\n\t\t\tif d.CountRemainingArgs() == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tcp.Curves = append(cp.Curves, d.RemainingArgs()...)\n\t\tcase \"default_sni\":\n\t\t\tif hasDefaultSNI {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tif d.CountRemainingArgs() != 1 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\t_, cp.DefaultSNI, hasDefaultSNI = d.NextArg(), d.Val(), true\n\t\tcase \"drop\": // EXPERIMENTAL\n\t\t\tif hasDrop {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tcp.Drop, hasDrop = true, true\n\t\tcase \"fallback_sni\": // EXPERIMENTAL\n\t\t\tif hasFallbackSNI {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tif d.CountRemainingArgs() != 1 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\t_, cp.FallbackSNI, hasFallbackSNI = d.NextArg(), d.Val(), true\n\t\tcase \"insecure_secrets_log\": // EXPERIMENTAL\n\t\t\tif hasInsecureSecretsLog {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tif d.CountRemainingArgs() != 1 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\t_, cp.InsecureSecretsLog, hasInsecureSecretsLog = d.NextArg(), d.Val(), true\n\t\tcase \"match\":\n\t\t\tif hasMatch {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tmatcherSet, err := ParseCaddyfileNestedMatcherSet(d)\n\t\t\tif err != nil {\n\t\t\t\treturn 
err\n\t\t\t}\n\t\t\tcp.MatchersRaw, hasMatch = matcherSet, true\n\t\tcase \"protocols\":\n\t\t\tif hasProtocols {\n\t\t\t\treturn d.Errf(\"duplicate %s option '%s'\", wrapper, optionName)\n\t\t\t}\n\t\t\tif d.CountRemainingArgs() == 0 || d.CountRemainingArgs() > 2 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\t_, cp.ProtocolMin, hasProtocols = d.NextArg(), d.Val(), true\n\t\t\tif d.NextArg() {\n\t\t\t\tcp.ProtocolMax = d.Val()\n\t\t\t}\n\t\tdefault:\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\t// No nested blocks are supported\n\t\tif d.NextBlock(nesting + 1) {\n\t\t\treturn d.Errf(\"malformed %s option '%s': blocks are not supported\", wrapper, optionName)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// ClientAuthentication configures TLS client auth.\ntype ClientAuthentication struct {\n\t// Certificate authority module which provides the certificate pool of trusted certificates\n\tCARaw json.RawMessage `json:\"ca,omitempty\" caddy:\"namespace=tls.ca_pool.source inline_key=provider\"`\n\tca    CA\n\n\t// Deprecated: Use the `ca` field with the `tls.ca_pool.source.inline` module instead.\n\t// A list of base64 DER-encoded CA certificates\n\t// against which to validate client certificates.\n\t// Client certs which are not signed by any of\n\t// these CAs will be rejected.\n\tTrustedCACerts []string `json:\"trusted_ca_certs,omitempty\"`\n\n\t// Deprecated: Use the `ca` field with the `tls.ca_pool.source.file` module instead.\n\t// TrustedCACertPEMFiles is a list of PEM file names\n\t// from which to load certificates of trusted CAs.\n\t// Client certificates which are not signed by any of\n\t// these CA certificates will be rejected.\n\tTrustedCACertPEMFiles []string `json:\"trusted_ca_certs_pem_files,omitempty\"`\n\n\t// Deprecated: This field is deprecated and will be removed in\n\t// a future version. Please use the `verifiers` field\n\t// with the tls.client_auth.verifier.leaf module instead.\n\t//\n\t// A list of base64 DER-encoded client leaf certs\n\t// to accept. 
If this list is not empty, client certs\n\t// which are not in this list will be rejected.\n\tTrustedLeafCerts []string `json:\"trusted_leaf_certs,omitempty\"`\n\n\t// Client certificate verification modules. These can perform\n\t// custom client authentication checks, such as ensuring the\n\t// certificate is not revoked.\n\tVerifiersRaw []json.RawMessage `json:\"verifiers,omitempty\" caddy:\"namespace=tls.client_auth.verifier inline_key=verifier\"`\n\n\tverifiers []ClientCertificateVerifier\n\n\t// The mode for authenticating the client. Allowed values are:\n\t//\n\t// Mode | Description\n\t// -----|---------------\n\t// `request` | Ask clients for a certificate, but allow even if there isn't one; do not verify it\n\t// `require` | Require clients to present a certificate, but do not verify it\n\t// `verify_if_given` | Ask clients for a certificate; allow even if there isn't one, but verify it if there is\n\t// `require_and_verify` | Require clients to present a valid certificate that is verified\n\t//\n\t// The default mode is `require_and_verify` if any\n\t// TrustedCACerts or TrustedCACertPEMFiles or TrustedLeafCerts\n\t// are provided; otherwise, the default mode is `require`.\n\tMode string `json:\"mode,omitempty\"`\n\n\texistingVerifyPeerCert func([][]byte, [][]*x509.Certificate) error\n}\n\n// UnmarshalCaddyfile parses the Caddyfile segment to set up the client authentication. 
Syntax:\n//\n//\tclient_auth {\n//\t\tmode                   [request|require|verify_if_given|require_and_verify]\n//\t \ttrust_pool\t\t\t   <module> {\n//\t\t\t...\n//\t\t}\n//\t\tverifier               <module>\n//\t}\n//\n// If `mode` is not provided, it defaults to `require_and_verify` if `trust_pool` is provided.\n// Otherwise, it defaults to `require`.\nfunc (ca *ClientAuthentication) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor d.NextArg() {\n\t\t// consume any tokens on the same line, if any.\n\t}\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tsubdir := d.Val()\n\t\tswitch subdir {\n\t\tcase \"mode\":\n\t\t\tif d.CountRemainingArgs() > 1 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif !d.Args(&ca.Mode) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\tcase \"trusted_ca_cert\":\n\t\t\tcaddy.Log().Warn(\"The 'trusted_ca_cert' field is deprecated. Use the 'trust_pool' field instead.\")\n\t\t\tif len(ca.CARaw) != 0 {\n\t\t\t\treturn d.Err(\"cannot specify both 'trust_pool' and 'trusted_ca_cert' or 'trusted_ca_cert_file'\")\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tca.TrustedCACerts = append(ca.TrustedCACerts, d.Val())\n\t\tcase \"trusted_leaf_cert\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tca.TrustedLeafCerts = append(ca.TrustedLeafCerts, d.Val())\n\t\tcase \"trusted_ca_cert_file\":\n\t\t\tcaddy.Log().Warn(\"The 'trusted_ca_cert_file' field is deprecated. 
Use the 'trust_pool' field instead.\")\n\t\t\tif len(ca.CARaw) != 0 {\n\t\t\t\treturn d.Err(\"cannot specify both 'trust_pool' and 'trusted_ca_cert' or 'trusted_ca_cert_file'\")\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfilename := d.Val()\n\t\t\tders, err := convertPEMFilesToDER(filename)\n\t\t\tif err != nil {\n\t\t\t\treturn d.WrapErr(err)\n\t\t\t}\n\t\t\tca.TrustedCACerts = append(ca.TrustedCACerts, ders...)\n\t\tcase \"trusted_leaf_cert_file\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfilename := d.Val()\n\t\t\tders, err := convertPEMFilesToDER(filename)\n\t\t\tif err != nil {\n\t\t\t\treturn d.WrapErr(err)\n\t\t\t}\n\t\t\tca.TrustedLeafCerts = append(ca.TrustedLeafCerts, ders...)\n\t\tcase \"trust_pool\":\n\t\t\tif len(ca.TrustedCACerts) != 0 {\n\t\t\t\treturn d.Err(\"cannot specify both 'trust_pool' and 'trusted_ca_cert' or 'trusted_ca_cert_file'\")\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmodName := d.Val()\n\t\t\tmod, err := caddyfile.UnmarshalModule(d, \"tls.ca_pool.source.\"+modName)\n\t\t\tif err != nil {\n\t\t\t\treturn d.WrapErr(err)\n\t\t\t}\n\t\t\tcaMod, ok := mod.(CA)\n\t\t\tif !ok {\n\t\t\t\treturn fmt.Errorf(\"trust_pool module '%s' is not a certificate pool provider\", caMod)\n\t\t\t}\n\t\t\tca.CARaw = caddyconfig.JSONModuleObject(caMod, \"provider\", modName, nil)\n\t\tcase \"verifier\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tvType := d.Val()\n\t\t\tmodID := \"tls.client_auth.verifier.\" + vType\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, modID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\t_, ok := unm.(ClientCertificateVerifier)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module '%s' is not a caddytls.ClientCertificateVerifier\", modID)\n\t\t\t}\n\t\t\tca.VerifiersRaw = append(ca.VerifiersRaw, caddyconfig.JSONModuleObject(unm, \"verifier\", vType, nil))\n\t\tdefault:\n\t\t\treturn d.Errf(\"unknown 
subdirective for client_auth: %s\", subdir)\n\t\t}\n\t}\n\n\t// only trust_ca_cert or trust_ca_cert_file was specified\n\tif len(ca.TrustedCACerts) > 0 {\n\t\tfileMod := &InlineCAPool{}\n\t\tfileMod.TrustedCACerts = append(fileMod.TrustedCACerts, ca.TrustedCACerts...)\n\t\tca.CARaw = caddyconfig.JSONModuleObject(fileMod, \"provider\", \"inline\", nil)\n\t\tca.TrustedCACertPEMFiles, ca.TrustedCACerts = nil, nil\n\t}\n\treturn nil\n}\n\nfunc convertPEMFilesToDER(filename string) ([]string, error) {\n\tcertDataPEM, err := os.ReadFile(filename)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar ders []string\n\t// while block is not nil, we have more certificates in the file\n\tfor block, rest := pem.Decode(certDataPEM); block != nil; block, rest = pem.Decode(rest) {\n\t\tif block.Type != \"CERTIFICATE\" {\n\t\t\treturn nil, fmt.Errorf(\"no CERTIFICATE pem block found in %s\", filename)\n\t\t}\n\t\tders = append(\n\t\t\tders,\n\t\t\tbase64.StdEncoding.EncodeToString(block.Bytes),\n\t\t)\n\t}\n\t// if we decoded nothing, return an error\n\tif len(ders) == 0 {\n\t\treturn nil, fmt.Errorf(\"no CERTIFICATE pem block found in %s\", filename)\n\t}\n\treturn ders, nil\n}\n\nfunc (clientauth *ClientAuthentication) provision(ctx caddy.Context) error {\n\tif len(clientauth.CARaw) > 0 && (len(clientauth.TrustedCACerts) > 0 || len(clientauth.TrustedCACertPEMFiles) > 0) {\n\t\treturn fmt.Errorf(\"conflicting config for client authentication trust CA\")\n\t}\n\n\t// convert all named file paths to inline\n\tif len(clientauth.TrustedCACertPEMFiles) > 0 {\n\t\tfor _, fpath := range clientauth.TrustedCACertPEMFiles {\n\t\t\tders, err := convertPEMFilesToDER(fpath)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tclientauth.TrustedCACerts = append(clientauth.TrustedCACerts, ders...)\n\t\t}\n\t}\n\n\t// if we have TrustedCACerts explicitly set, create an 'inline' CA and return\n\tif len(clientauth.TrustedCACerts) > 0 {\n\t\tcaPool := InlineCAPool{\n\t\t\tTrustedCACerts: 
clientauth.TrustedCACerts,\n\t\t}\n\t\terr := caPool.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tclientauth.ca = caPool\n\t}\n\n\t// if we don't have any CARaw set, there's not much work to do\n\tif clientauth.CARaw == nil {\n\t\treturn nil\n\t}\n\tcaRaw, err := ctx.LoadModule(clientauth, \"CARaw\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tca, ok := caRaw.(CA)\n\tif !ok {\n\t\treturn fmt.Errorf(\"'ca' module '%s' is not a certificate pool provider\", ca)\n\t}\n\tclientauth.ca = ca\n\n\treturn nil\n}\n\n// Active returns true if clientauth has an actionable configuration.\nfunc (clientauth ClientAuthentication) Active() bool {\n\treturn len(clientauth.TrustedCACerts) > 0 ||\n\t\tlen(clientauth.TrustedCACertPEMFiles) > 0 ||\n\t\tlen(clientauth.TrustedLeafCerts) > 0 || // TODO: DEPRECATED\n\t\tlen(clientauth.VerifiersRaw) > 0 ||\n\t\tlen(clientauth.Mode) > 0 ||\n\t\tclientauth.CARaw != nil || clientauth.ca != nil\n}\n\n// ConfigureTLSConfig sets up cfg to enforce clientauth's configuration.\nfunc (clientauth *ClientAuthentication) ConfigureTLSConfig(cfg *tls.Config) error {\n\t// if there's no actionable client auth, simply disable it\n\tif !clientauth.Active() {\n\t\tcfg.ClientAuth = tls.NoClientCert\n\t\treturn nil\n\t}\n\n\t// enforce desired mode of client authentication\n\tif len(clientauth.Mode) > 0 {\n\t\tswitch clientauth.Mode {\n\t\tcase \"request\":\n\t\t\tcfg.ClientAuth = tls.RequestClientCert\n\t\tcase \"require\":\n\t\t\tcfg.ClientAuth = tls.RequireAnyClientCert\n\t\tcase \"verify_if_given\":\n\t\t\tcfg.ClientAuth = tls.VerifyClientCertIfGiven\n\t\tcase \"require_and_verify\":\n\t\t\tcfg.ClientAuth = tls.RequireAndVerifyClientCert\n\t\tdefault:\n\t\t\treturn fmt.Errorf(\"client auth mode not recognized: %s\", clientauth.Mode)\n\t\t}\n\t} else {\n\t\t// otherwise, set a safe default mode\n\t\tif len(clientauth.TrustedCACerts) > 0 ||\n\t\t\tlen(clientauth.TrustedCACertPEMFiles) > 0 ||\n\t\t\tlen(clientauth.TrustedLeafCerts) > 0 
||\n\t\t\tclientauth.CARaw != nil || clientauth.ca != nil {\n\t\t\tcfg.ClientAuth = tls.RequireAndVerifyClientCert\n\t\t} else {\n\t\t\tcfg.ClientAuth = tls.RequireAnyClientCert\n\t\t}\n\t}\n\n\t// enforce CA verification by adding CA certs to the ClientCAs pool\n\tif clientauth.ca != nil {\n\t\tcfg.ClientCAs = clientauth.ca.CertPool()\n\t}\n\n\t// TODO: DEPRECATED: Only here for backwards compatibility.\n\t// If leaf cert is specified, enforce by adding a client auth module\n\tif len(clientauth.TrustedLeafCerts) > 0 {\n\t\tcaddy.Log().Named(\"tls.connection_policy\").Warn(\"trusted_leaf_certs is deprecated; use leaf verifier module instead\")\n\t\tvar trustedLeafCerts []*x509.Certificate\n\t\tfor _, clientCertString := range clientauth.TrustedLeafCerts {\n\t\t\tclientCert, err := decodeBase64DERCert(clientCertString)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"parsing certificate: %v\", err)\n\t\t\t}\n\t\t\ttrustedLeafCerts = append(trustedLeafCerts, clientCert)\n\t\t}\n\t\tclientauth.verifiers = append(clientauth.verifiers, LeafCertClientAuth{trustedLeafCerts: trustedLeafCerts})\n\t}\n\n\t// if a custom verification function already exists, wrap it\n\tclientauth.existingVerifyPeerCert = cfg.VerifyPeerCertificate\n\tcfg.VerifyConnection = clientauth.verifyConnection\n\treturn nil\n}\n\n// verifyConnection is for use as a tls.Config.VerifyConnection callback\n// to do custom client certificate verification. 
It is intended for\n// installation only by clientauth.ConfigureTLSConfig().\n//\n// Unlike VerifyPeerCertificate, VerifyConnection is called on every\n// connection including resumed sessions, preventing session-resumption bypass.\nfunc (clientauth *ClientAuthentication) verifyConnection(cs tls.ConnectionState) error {\n\t// first use any pre-existing custom verification function\n\tif clientauth.existingVerifyPeerCert != nil {\n\t\trawCerts := make([][]byte, len(cs.PeerCertificates))\n\t\tfor i, cert := range cs.PeerCertificates {\n\t\t\trawCerts[i] = cert.Raw\n\t\t}\n\t\tif err := clientauth.existingVerifyPeerCert(rawCerts, cs.VerifiedChains); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tfor _, verifier := range clientauth.verifiers {\n\t\tif err := verifier.VerifyClientCertificate(nil, cs.VerifiedChains); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// decodeBase64DERCert base64-decodes, then DER-decodes, certStr.\nfunc decodeBase64DERCert(certStr string) (*x509.Certificate, error) {\n\tderBytes, err := base64.StdEncoding.DecodeString(certStr)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn x509.ParseCertificate(derBytes)\n}\n\n// setDefaultTLSParams sets the default TLS cipher suites, protocol versions,\n// and server preferences of cfg if they are not already set; it does not\n// overwrite values, only fills in missing values.\nfunc setDefaultTLSParams(cfg *tls.Config) {\n\tif len(cfg.CipherSuites) == 0 {\n\t\tcfg.CipherSuites = getOptimalDefaultCipherSuites()\n\t}\n\n\t// Not a cipher suite, but still important for mitigating protocol downgrade attacks\n\t// (prepend since having it at end breaks http2 due to non-h2-approved suites before it)\n\tcfg.CipherSuites = append([]uint16{tls.TLS_FALLBACK_SCSV}, cfg.CipherSuites...)\n\n\tif len(cfg.CurvePreferences) == 0 {\n\t\tcfg.CurvePreferences = defaultCurves\n\t}\n\n\t// crypto/tls docs:\n\t// \"If EncryptedClientHelloKeys is set, MinVersion, if set, must be VersionTLS13.\"\n\tif 
cfg.EncryptedClientHelloKeys != nil && cfg.MinVersion != 0 && cfg.MinVersion < tls.VersionTLS13 {\n\t\tcfg.MinVersion = tls.VersionTLS13\n\t}\n}\n\n// LeafCertClientAuth verifies the client's leaf certificate.\ntype LeafCertClientAuth struct {\n\tLeafCertificateLoadersRaw []json.RawMessage `json:\"leaf_certs_loaders,omitempty\" caddy:\"namespace=tls.leaf_cert_loader inline_key=loader\"`\n\ttrustedLeafCerts          []*x509.Certificate\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (LeafCertClientAuth) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.client_auth.verifier.leaf\",\n\t\tNew: func() caddy.Module { return new(LeafCertClientAuth) },\n\t}\n}\n\nfunc (l *LeafCertClientAuth) Provision(ctx caddy.Context) error {\n\tif l.LeafCertificateLoadersRaw == nil {\n\t\treturn nil\n\t}\n\tval, err := ctx.LoadModule(l, \"LeafCertificateLoadersRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"could not parse leaf certificates loaders: %s\", err.Error())\n\t}\n\ttrustedLeafCertloaders := []LeafCertificateLoader{}\n\tfor _, loader := range val.([]any) {\n\t\ttrustedLeafCertloaders = append(trustedLeafCertloaders, loader.(LeafCertificateLoader))\n\t}\n\ttrustedLeafCertificates := []*x509.Certificate{}\n\tfor _, loader := range trustedLeafCertloaders {\n\t\tcerts, err := loader.LoadLeafCertificates()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"could not load leaf certificates: %s\", err.Error())\n\t\t}\n\t\ttrustedLeafCertificates = append(trustedLeafCertificates, certs...)\n\t}\n\tl.trustedLeafCerts = trustedLeafCertificates\n\treturn nil\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (l *LeafCertClientAuth) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.NextArg()\n\n\t// accommodate the use of one-liners\n\tif d.CountRemainingArgs() > 1 {\n\t\td.NextArg()\n\t\tmodName := d.Val()\n\t\tmod, err := caddyfile.UnmarshalModule(d, \"tls.leaf_cert_loader.\"+modName)\n\t\tif err != nil {\n\t\t\treturn 
d.WrapErr(err)\n\t\t}\n\t\tvMod, ok := mod.(LeafCertificateLoader)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"leaf module '%s' is not a leaf certificate loader\", modName)\n\t\t}\n\t\tl.LeafCertificateLoadersRaw = append(\n\t\t\tl.LeafCertificateLoadersRaw,\n\t\t\tcaddyconfig.JSONModuleObject(vMod, \"loader\", modName, nil),\n\t\t)\n\t\treturn nil\n\t}\n\n\t// accommodate the use of nested blocks\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tmodName := d.Val()\n\t\tmod, err := caddyfile.UnmarshalModule(d, \"tls.leaf_cert_loader.\"+modName)\n\t\tif err != nil {\n\t\t\treturn d.WrapErr(err)\n\t\t}\n\t\tvMod, ok := mod.(LeafCertificateLoader)\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"leaf module '%s' is not a leaf certificate loader\", modName)\n\t\t}\n\t\tl.LeafCertificateLoadersRaw = append(\n\t\t\tl.LeafCertificateLoadersRaw,\n\t\t\tcaddyconfig.JSONModuleObject(vMod, \"loader\", modName, nil),\n\t\t)\n\t}\n\treturn nil\n}\n\nfunc (l LeafCertClientAuth) VerifyClientCertificate(rawCerts [][]byte, _ [][]*x509.Certificate) error {\n\tif len(rawCerts) == 0 {\n\t\treturn fmt.Errorf(\"no client certificate provided\")\n\t}\n\n\tremoteLeafCert, err := x509.ParseCertificate(rawCerts[0])\n\tif err != nil {\n\t\treturn fmt.Errorf(\"can't parse the given certificate: %s\", err.Error())\n\t}\n\n\tif slices.ContainsFunc(l.trustedLeafCerts, remoteLeafCert.Equal) {\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"client leaf certificate failed validation\")\n}\n\n// PublicKeyAlgorithm is a JSON-unmarshalable wrapper type.\ntype PublicKeyAlgorithm x509.PublicKeyAlgorithm\n\n// UnmarshalJSON satisfies json.Unmarshaler.\nfunc (a *PublicKeyAlgorithm) UnmarshalJSON(b []byte) error {\n\talgoStr := strings.ToLower(strings.Trim(string(b), `\"`))\n\talgo, ok := publicKeyAlgorithms[algoStr]\n\tif !ok {\n\t\treturn fmt.Errorf(\"unrecognized public key algorithm: %s (expected one of %v)\",\n\t\t\talgoStr, publicKeyAlgorithms)\n\t}\n\t*a = PublicKeyAlgorithm(algo)\n\treturn nil\n}\n\n// 
ConnectionMatcher is a type which matches TLS handshakes.\ntype ConnectionMatcher interface {\n\tMatch(*tls.ClientHelloInfo) bool\n}\n\n// LeafCertificateLoader is a type that loads the trusted leaf certificates\n// for the tls.leaf_cert_loader modules\ntype LeafCertificateLoader interface {\n\tLoadLeafCertificates() ([]*x509.Certificate, error)\n}\n\n// ClientCertificateVerifier is a type which verifies client certificates.\n// It is called during verifyPeerCertificate in the TLS handshake.\ntype ClientCertificateVerifier interface {\n\tVerifyClientCertificate(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error\n}\n\nvar defaultALPN = []string{\"h2\", \"http/1.1\"}\n\ntype destructableWriter struct{ *os.File }\n\nfunc (d destructableWriter) Destruct() error { return d.Close() }\n\nvar secretsLogPool = caddy.NewUsagePool()\n\n// Interface guards\nvar (\n\t_ caddyfile.Unmarshaler = (*ClientAuthentication)(nil)\n\t_ caddyfile.Unmarshaler = (*ConnectionPolicy)(nil)\n\t_ caddyfile.Unmarshaler = (*LeafCertClientAuth)(nil)\n)\n\n// ParseCaddyfileNestedMatcherSet parses the Caddyfile tokens for a nested\n// matcher set, and returns its raw module map value.\nfunc ParseCaddyfileNestedMatcherSet(d *caddyfile.Dispenser) (caddy.ModuleMap, error) {\n\tmatcherMap := make(map[string]ConnectionMatcher)\n\n\ttokensByMatcherName := make(map[string][]caddyfile.Token)\n\tfor nesting := d.Nesting(); d.NextArg() || d.NextBlock(nesting); {\n\t\tmatcherName := d.Val()\n\t\ttokensByMatcherName[matcherName] = append(tokensByMatcherName[matcherName], d.NextSegment()...)\n\t}\n\n\tfor matcherName, tokens := range tokensByMatcherName {\n\t\tdd := caddyfile.NewDispenser(tokens)\n\t\tdd.Next() // consume wrapper name\n\n\t\tunm, err := caddyfile.UnmarshalModule(dd, \"tls.handshake_match.\"+matcherName)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcm, ok := unm.(ConnectionMatcher)\n\t\tif !ok {\n\t\t\treturn nil, fmt.Errorf(\"matcher module '%s' is not a connection 
matcher\", matcherName)\n\t\t}\n\t\tmatcherMap[matcherName] = cm\n\t}\n\n\tmatcherSet := make(caddy.ModuleMap)\n\tfor name, matcher := range matcherMap {\n\t\tjsonBytes, err := json.Marshal(matcher)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"marshaling %T matcher: %v\", matcher, err)\n\t\t}\n\t\tmatcherSet[name] = jsonBytes\n\t}\n\n\treturn matcherSet, nil\n}\n"
  },
  {
    "path": "modules/caddytls/connpolicy_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc TestClientAuthenticationUnmarshalCaddyfileWithDirectiveName(t *testing.T) {\n\tconst test_der_1 = `MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/u
Q5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==`\n\tconst test_cert_file_1 = \"../../caddytest/caddy.ca.cer\"\n\ttype args struct {\n\t\td *caddyfile.Dispenser\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texpected ClientAuthentication\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"empty client_auth block does not error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(\n\t\t\t\t\t`client_auth {\n\t\t\t\t\t}`,\n\t\t\t\t),\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"providing both 'trust_pool' and 'trusted_ca_cert' returns an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(\n\t\t\t\t\t`client_auth {\n\t\t\t\t\ttrust_pool inline MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t\t\t\ttrusted_ca_cert 
MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkwODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl03WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45twOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNxtdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTUApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAdBgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5uNY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfKD66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEOfG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnkoNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdleIh6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"trust_pool without a module argument returns an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(\n\t\t\t\t\t`client_auth {\n\t\t\t\t\ttrust_pool\n\t\t\t\t}`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"providing more than 1 mode produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tclient_auth {\n\t\t\t\t\t\tmode require request\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"not providing 'mode' argument produces an error\",\n\t\t\targs: args{d: caddyfile.NewTestDispenser(`\n\t\t\t\tclient_auth {\n\t\t\t\t\tmode\n\t\t\t\t}\n\t\t\t`)},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"providing a single 'mode' argument sets the mode\",\n\t\t\targs: 
args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\t\tclient_auth {\n\t\t\t\t\t\tmode require\n\t\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\texpected: ClientAuthentication{\n\t\t\t\tMode: \"require\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"not providing an argument to 'trusted_ca_cert' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_ca_cert\n\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"not providing an argument to 'trusted_leaf_cert' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_leaf_cert\n\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"not providing an argument to 'trusted_ca_cert_file' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_ca_cert_file\n\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"not providing an argument to 'trusted_leaf_cert_file' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_leaf_cert_file\n\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"using 'trusted_ca_cert' adapts successfully\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_ca_cert %s\n\t\t\t\t}`, test_der_1)),\n\t\t\t},\n\t\t\texpected: ClientAuthentication{\n\t\t\t\tCARaw: json.RawMessage(fmt.Sprintf(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"%s\"]}`, test_der_1)),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"using 'inline' trust_pool loads the module successfully\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\t\tclient_auth {\n\t\t\t\t\t\ttrust_pool inline 
{\n\t\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t`, test_der_1)),\n\t\t\t},\n\t\t\texpected: ClientAuthentication{\n\t\t\t\tCARaw: json.RawMessage(fmt.Sprintf(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"%s\"]}`, test_der_1)),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'trusted_ca_cert' and 'trust_pool' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_ca_cert %s\n\t\t\t\t\ttrust_pool inline {\n\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t}\n\t\t\t\t}`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'trust_pool' and 'trusted_ca_cert' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrust_pool inline {\n\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t}\n\t\t\t\t\ttrusted_ca_cert %s\n\t\t\t\t}`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'trust_pool' and 'trusted_ca_cert' produces an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrust_pool inline {\n\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t}\n\t\t\t\t\ttrusted_ca_cert_file %s\n\t\t\t\t}`, test_der_1, test_cert_file_1)),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"configuring 'trusted_ca_cert_file' without an argument is an error\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_ca_cert_file\n\t\t\t\t}\n\t\t\t\t`),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"configuring 'trusted_ca_cert_file' produces config with 'inline' provider\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_ca_cert_file %s\n\t\t\t\t}`, test_cert_file_1),\n\t\t\t\t),\n\t\t\t},\n\t\t\texpected: ClientAuthentication{\n\t\t\t\tCARaw: 
json.RawMessage(fmt.Sprintf(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"%s\"]}`, test_der_1)),\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"configuring leaf certs does not conflict with 'trust_pool'\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrust_pool inline {\n\t\t\t\t\t\ttrust_der\t%s\n\t\t\t\t\t}\n\t\t\t\t\ttrusted_leaf_cert %s\n\t\t\t\t}`, test_der_1, test_der_1)),\n\t\t\t},\n\t\t\texpected: ClientAuthentication{\n\t\t\t\tCARaw:            json.RawMessage(fmt.Sprintf(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"%s\"]}`, test_der_1)),\n\t\t\t\tTrustedLeafCerts: []string{test_der_1},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"providing trusted leaf certificate file loads the cert successfully\",\n\t\t\targs: args{\n\t\t\t\td: caddyfile.NewTestDispenser(fmt.Sprintf(`\n\t\t\t\tclient_auth {\n\t\t\t\t\ttrusted_leaf_cert_file %s\n\t\t\t\t}`, test_cert_file_1)),\n\t\t\t},\n\t\t\texpected: ClientAuthentication{\n\t\t\t\tTrustedLeafCerts: []string{test_der_1},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tca := &ClientAuthentication{}\n\t\t\tif err := ca.UnmarshalCaddyfile(tt.args.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"ClientAuthentication.UnmarshalCaddyfile() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !tt.wantErr && !reflect.DeepEqual(&tt.expected, ca) {\n\t\t\t\tt.Errorf(\"ClientAuthentication.UnmarshalCaddyfile() = %v, want %v\", ca, tt.expected)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestClientAuthenticationProvision(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\tca      ClientAuthentication\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"specifying both 'CARaw' and 'TrustedCACerts' produces an error\",\n\t\t\tca: ClientAuthentication{\n\t\t\t\tCARaw:          
json.RawMessage(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"foo\"]}`),\n\t\t\t\tTrustedCACerts: []string{\"foo\"},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"specifying both 'CARaw' and 'TrustedCACertPEMFiles' produces an error\",\n\t\t\tca: ClientAuthentication{\n\t\t\t\tCARaw:                 json.RawMessage(`{\"provider\":\"inline\",\"trusted_ca_certs\":[\"foo\"]}`),\n\t\t\t\tTrustedCACertPEMFiles: []string{\"foo\"},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"setting 'TrustedCACerts' provisions the cert pool\",\n\t\t\tca: ClientAuthentication{\n\t\t\t\tTrustedCACerts: []string{test_der_1},\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\terr := tt.ca.provision(caddy.Context{})\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"ClientAuthentication.provision() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !tt.wantErr {\n\t\t\t\tif tt.ca.ca.CertPool() == nil {\n\t\t\t\t\tt.Error(\"CertPool is nil, expected non-nil value\")\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/caddytls/distributedstek/distributedstek.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package distributedstek provides TLS session ticket ephemeral\n// keys (STEKs) in a distributed fashion by utilizing configured\n// storage for locking and key sharing. This allows a cluster of\n// machines to optimally resume TLS sessions in a load-balanced\n// environment without any hassle. This is similar to what\n// Twitter does, but without needing to rely on SSH, as it is\n// built into the web server this way:\n// https://blog.twitter.com/engineering/en_us/a/2013/forward-secrecy-at-twitter.html\npackage distributedstek\n\nimport (\n\t\"bytes\"\n\t\"encoding/gob\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/fs\"\n\t\"log\"\n\t\"runtime/debug\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Provider{})\n}\n\n// Provider implements a distributed STEK provider. This\n// module will obtain STEKs from a storage module instead\n// of generating STEKs internally. This allows STEKs to be\n// coordinated, improving TLS session resumption in a cluster.\ntype Provider struct {\n\t// The storage module wherein to store and obtain session\n\t// ticket keys. 
If unset, Caddy's default/global-configured\n\t// storage module will be used.\n\tStorage json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n\n\tstorage    certmagic.Storage\n\tstekConfig *caddytls.SessionTicketService\n\ttimer      *time.Timer\n\tctx        caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Provider) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.stek.distributed\",\n\t\tNew: func() caddy.Module { return new(Provider) },\n\t}\n}\n\n// Provision provisions s.\nfunc (s *Provider) Provision(ctx caddy.Context) error {\n\ts.ctx = ctx\n\n\t// unpack the storage module to use, if different from the default\n\tif s.Storage != nil {\n\t\tval, err := ctx.LoadModule(s, \"Storage\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading TLS storage module: %s\", err)\n\t\t}\n\t\tcmStorage, err := val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating TLS storage configuration: %v\", err)\n\t\t}\n\t\ts.storage = cmStorage\n\t}\n\n\t// otherwise, use default storage\n\tif s.storage == nil {\n\t\ts.storage = ctx.Storage()\n\t}\n\n\treturn nil\n}\n\n// Initialize sets the configuration for s and returns the starting keys.\nfunc (s *Provider) Initialize(config *caddytls.SessionTicketService) ([][32]byte, error) {\n\t// keep a reference to the config; we'll need it when rotating keys\n\ts.stekConfig = config\n\n\tdstek, err := s.getSTEK()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// create timer for the remaining time on the interval;\n\t// this timer is cleaned up only when rotate() returns\n\ts.timer = time.NewTimer(time.Until(dstek.NextRotation))\n\n\treturn dstek.Keys, nil\n}\n\n// Next returns a channel which transmits the latest session ticket keys.\nfunc (s *Provider) Next(doneChan <-chan struct{}) <-chan [][32]byte {\n\tkeysChan := make(chan [][32]byte)\n\tgo s.rotate(doneChan, 
keysChan)\n\treturn keysChan\n}\n\nfunc (s *Provider) loadSTEK() (distributedSTEK, error) {\n\tvar sg distributedSTEK\n\tgobBytes, err := s.storage.Load(s.ctx, stekFileName)\n\tif err != nil {\n\t\treturn sg, err // don't wrap, in case error is certmagic.ErrNotExist\n\t}\n\tdec := gob.NewDecoder(bytes.NewReader(gobBytes))\n\terr = dec.Decode(&sg)\n\tif err != nil {\n\t\treturn sg, fmt.Errorf(\"STEK gob corrupted: %v\", err)\n\t}\n\treturn sg, nil\n}\n\nfunc (s *Provider) storeSTEK(dstek distributedSTEK) error {\n\tvar buf bytes.Buffer\n\terr := gob.NewEncoder(&buf).Encode(dstek)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"encoding STEK gob: %v\", err)\n\t}\n\terr = s.storage.Store(s.ctx, stekFileName, buf.Bytes())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"storing STEK gob: %v\", err)\n\t}\n\treturn nil\n}\n\n// getSTEK locks and loads the current STEK from storage. If none\n// currently exists, a new STEK is created and persisted. If the\n// current STEK is outdated (NextRotation time is in the past),\n// then it is rotated and persisted. 
The resulting STEK is returned.\nfunc (s *Provider) getSTEK() (distributedSTEK, error) {\n\terr := s.storage.Lock(s.ctx, stekLockName)\n\tif err != nil {\n\t\treturn distributedSTEK{}, fmt.Errorf(\"failed to acquire storage lock: %v\", err)\n\t}\n\n\t//nolint:errcheck\n\tdefer s.storage.Unlock(s.ctx, stekLockName)\n\n\t// load the current STEKs from storage\n\tdstek, err := s.loadSTEK()\n\tif errors.Is(err, fs.ErrNotExist) {\n\t\t// if there is none, then make some right away\n\t\tdstek, err = s.rotateKeys(dstek)\n\t\tif err != nil {\n\t\t\treturn dstek, fmt.Errorf(\"creating new STEK: %v\", err)\n\t\t}\n\t} else if err != nil {\n\t\t// some other error, that's a problem\n\t\treturn dstek, fmt.Errorf(\"loading STEK: %v\", err)\n\t} else if time.Now().After(dstek.NextRotation) {\n\t\t// if current STEKs are outdated, rotate them\n\t\tdstek, err = s.rotateKeys(dstek)\n\t\tif err != nil {\n\t\t\treturn dstek, fmt.Errorf(\"rotating keys: %v\", err)\n\t\t}\n\t}\n\n\treturn dstek, nil\n}\n\n// rotateKeys rotates the keys of oldSTEK and returns the new distributedSTEK\n// with updated keys and timestamps. 
It stores the returned STEK in storage,\n// so this function must only be called in a storage-provided lock.\nfunc (s *Provider) rotateKeys(oldSTEK distributedSTEK) (distributedSTEK, error) {\n\tvar newSTEK distributedSTEK\n\tvar err error\n\n\tnewSTEK.Keys, err = s.stekConfig.RotateSTEKs(oldSTEK.Keys)\n\tif err != nil {\n\t\treturn newSTEK, err\n\t}\n\n\tnow := time.Now()\n\tnewSTEK.LastRotation = now\n\tnewSTEK.NextRotation = now.Add(time.Duration(s.stekConfig.RotationInterval))\n\n\terr = s.storeSTEK(newSTEK)\n\tif err != nil {\n\t\treturn newSTEK, err\n\t}\n\n\treturn newSTEK, nil\n}\n\n// rotate rotates keys on a regular basis, sending each updated set of\n// keys down keysChan, until doneChan is closed.\nfunc (s *Provider) rotate(doneChan <-chan struct{}, keysChan chan<- [][32]byte) {\n\tdefer func() {\n\t\tif err := recover(); err != nil {\n\t\t\tlog.Printf(\"[PANIC] distributed STEK rotation: %v\\n%s\", err, debug.Stack())\n\t\t}\n\t}()\n\tfor {\n\t\tselect {\n\t\tcase <-s.timer.C:\n\t\t\tdstek, err := s.getSTEK()\n\t\t\tif err != nil {\n\t\t\t\t// TODO: improve this handling\n\t\t\t\tlog.Printf(\"[ERROR] Loading STEK: %v\", err)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// send the updated keys to the service\n\t\t\tkeysChan <- dstek.Keys\n\n\t\t\t// timer channel is already drained, so reset directly (see godoc)\n\t\t\ts.timer.Reset(time.Until(dstek.NextRotation))\n\n\t\tcase <-doneChan:\n\t\t\t// again, see godocs for why timer is stopped this way\n\t\t\tif !s.timer.Stop() {\n\t\t\t\t<-s.timer.C\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n}\n\ntype distributedSTEK struct {\n\tKeys                       [][32]byte\n\tLastRotation, NextRotation time.Time\n}\n\nconst (\n\tstekLockName = \"stek_check\"\n\tstekFileName = \"stek/stek.bin\"\n)\n\n// Interface guard\nvar _ caddytls.STEKProvider = (*Provider)(nil)\n"
  },
  {
    "path": "modules/caddytls/ech.go",
    "content": "package caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/fs\"\n\tweakrand \"math/rand/v2\"\n\t\"path\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/cloudflare/circl/hpke\"\n\t\"github.com/cloudflare/circl/kem\"\n\t\"github.com/libdns/libdns\"\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/crypto/cryptobyte\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(ECHDNSPublisher{})\n}\n\n// ECH enables Encrypted ClientHello (ECH) and configures its management.\n//\n// ECH helps protect site names (also called \"server names\" or \"domain names\"\n// or \"SNI\"), which are normally sent over plaintext when establishing a TLS\n// connection. With ECH, the true ClientHello is encrypted and wrapped by an\n// \"outer\" ClientHello that uses a more generic, shared server name that is\n// publicly known.\n//\n// Clients need to know which public name (and other parameters) to use when\n// connecting to a site with ECH, and the methods for this vary; however,\n// major browsers support reading ECH configurations from DNS records (which\n// is typically only secure when DNS-over-HTTPS or DNS-over-TLS is enabled in\n// the client). Caddy has the ability to automatically publish ECH configs to\n// DNS records if a DNS provider is configured either in the TLS app or with\n// each individual publication config object. (Requires a custom build with a\n// DNS provider module.)\n//\n// ECH requires at least TLS 1.3, so any TLS connection policies with ECH\n// applied will automatically upgrade the minimum TLS version to 1.3, even if\n// configured to a lower version.\n//\n// EXPERIMENTAL: Subject to change.\ntype ECH struct {\n\t// The list of ECH configurations for which to automatically generate\n\t// and rotate keys. 
At least one is required to enable ECH.\n\t//\n\t// It is strongly recommended to use as few ECH configs as possible\n\t// to maximize the size of your anonymity set (see the ECH specification\n\t// for a definition). Typically, each server should have only one public\n\t// name, i.e. one config in this list.\n\tConfigs []ECHConfiguration `json:\"configs,omitempty\"`\n\n\t// Publication describes ways to publish ECH configs for clients to\n\t// discover and use. Without publication, most clients will not use\n\t// ECH at all, and those that do will suffer degraded performance.\n\t//\n\t// Most major browsers support ECH by way of publication to HTTPS\n\t// DNS RRs. (This also typically requires that they use DoH or DoT.)\n\tPublication []*ECHPublication `json:\"publication,omitempty\"`\n\n\tconfigsMu   *sync.RWMutex                 // protects both configs and the list of configs/keys the standard library uses\n\tconfigs     map[string][]echConfig        // map of public_name to list of configs\n\tstdlibReady []tls.EncryptedClientHelloKey // ECH configs+keys in a format the standard library can use\n}\n\n// Provision loads or creates ECH configs and returns outer names (for certificate\n// management), but does not publish any ECH configs. 
The DNS module is used as\n// a default for later publishing if needed.\nfunc (ech *ECH) Provision(ctx caddy.Context) ([]string, error) {\n\tech.configsMu = new(sync.RWMutex)\n\n\tlogger := ctx.Logger().Named(\"ech\")\n\n\t// set up publication modules before we need to obtain a lock in storage,\n\t// since this is strictly internal and doesn't require synchronization\n\tfor i, pub := range ech.Publication {\n\t\tmods, err := ctx.LoadModule(pub, \"PublishersRaw\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"loading ECH publication modules: %v\", err)\n\t\t}\n\t\tfor _, modIface := range mods.(map[string]any) {\n\t\t\tech.Publication[i].publishers = append(ech.Publication[i].publishers, modIface.(ECHPublisher))\n\t\t}\n\t}\n\n\t// the rest of provisioning needs an exclusive lock so that instances aren't\n\t// stepping on each other when setting up ECH configs\n\tstorage := ctx.Storage()\n\tif err := storage.Lock(ctx, echStorageLockName); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\tif err := storage.Unlock(ctx, echStorageLockName); err != nil {\n\t\t\tlogger.Error(\"unable to unlock ECH provisioning in storage\", zap.Error(err))\n\t\t}\n\t}()\n\n\tech.configsMu.Lock()\n\tdefer ech.configsMu.Unlock()\n\n\touterNames, err := ech.setConfigsFromStorage(ctx, logger)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"loading configs from storage: %w\", err)\n\t}\n\n\t// see if we need to make any new ones based on the input configuration\n\tfor _, cfg := range ech.Configs {\n\t\tpublicName := strings.ToLower(strings.TrimSpace(cfg.PublicName))\n\n\t\tif list, ok := ech.configs[publicName]; !ok || len(list) == 0 {\n\t\t\t// no config with this public name was loaded, so create one\n\t\t\techCfg, err := generateAndStoreECHConfig(ctx, publicName)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tlogger.Debug(\"generated new ECH config\",\n\t\t\t\tzap.String(\"public_name\", echCfg.RawPublicName),\n\t\t\t\tzap.Uint8(\"id\", 
echCfg.ConfigID))\n\t\t\tech.configs[publicName] = append(ech.configs[publicName], echCfg)\n\t\t\touterNames = append(outerNames, publicName)\n\t\t}\n\t}\n\n\t// convert the configs into a structure ready for the std lib to use\n\tech.updateKeyList()\n\n\t// ensure any old keys are rotated out\n\tif err = ech.rotateECHKeys(ctx, logger, true); err != nil {\n\t\treturn nil, fmt.Errorf(\"rotating ECH configs: %w\", err)\n\t}\n\n\treturn outerNames, nil\n}\n\n// setConfigsFromStorage sets the ECH configs in memory to those in storage.\n// It must be called in a write lock on ech.configsMu.\nfunc (ech *ECH) setConfigsFromStorage(ctx caddy.Context, logger *zap.Logger) ([]string, error) {\n\tstorage := ctx.Storage()\n\n\tech.configs = make(map[string][]echConfig)\n\n\tvar outerNames []string\n\n\t// start by loading all the existing configs (even the older ones on the way out,\n\t// since some clients may still be using them if they haven't yet picked up on the\n\t// new configs)\n\tcfgKeys, err := storage.List(ctx, echConfigsKey, false)\n\tif err != nil && !errors.Is(err, fs.ErrNotExist) { // OK if dir doesn't exist; it will be created\n\t\treturn nil, err\n\t}\n\tfor _, cfgKey := range cfgKeys {\n\t\tcfg, err := loadECHConfig(ctx, path.Base(cfgKey))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// if any part of the config's folder was corrupted, the load function will\n\t\t// clean it up and not return an error, since configs are immutable and\n\t\t// fairly ephemeral... 
so just check that we actually got a populated config\n\t\tif cfg.configBin == nil || cfg.privKeyBin == nil {\n\t\t\tcontinue\n\t\t}\n\t\tlogger.Debug(\"loaded ECH config\",\n\t\t\tzap.String(\"public_name\", cfg.RawPublicName),\n\t\t\tzap.Uint8(\"id\", cfg.ConfigID))\n\t\tif _, seen := ech.configs[cfg.RawPublicName]; !seen {\n\t\t\touterNames = append(outerNames, cfg.RawPublicName)\n\t\t}\n\t\tech.configs[cfg.RawPublicName] = append(ech.configs[cfg.RawPublicName], cfg)\n\t}\n\n\treturn outerNames, nil\n}\n\n// rotateECHKeys updates the ECH keys/configs that are outdated if rotation is needed.\n// It should be called in a write lock on ech.configsMu. If a lock is already obtained\n// in storage, then pass true for storageSynced.\n//\n// This function sets/updates the stdlib-ready key list only if a rotation occurs.\nfunc (ech *ECH) rotateECHKeys(ctx caddy.Context, logger *zap.Logger, storageSynced bool) error {\n\tstorage := ctx.Storage()\n\n\t// all existing configs are now loaded; rotate keys \"regularly\" as recommended by the spec\n\t// (also: \"Rotating too frequently limits the client anonymity set.\" - but the more server\n\t// names, the more frequently rotation can be done safely)\n\tconst (\n\t\trotationInterval = 24 * time.Hour * 30\n\t\tdeleteAfter      = 24 * time.Hour * 90\n\t)\n\n\tif !ech.rotationNeeded(rotationInterval, deleteAfter) {\n\t\treturn nil\n\t}\n\n\t// sync this operation across cluster if not already\n\tif !storageSynced {\n\t\tif err := storage.Lock(ctx, echStorageLockName); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer func() {\n\t\t\tif err := storage.Unlock(ctx, echStorageLockName); err != nil {\n\t\t\t\tlogger.Error(\"unable to unlock ECH rotation in storage\", zap.Error(err))\n\t\t\t}\n\t\t}()\n\t}\n\n\t// update what storage has, in case another instance already updated things\n\tif _, err := ech.setConfigsFromStorage(ctx, logger); err != nil {\n\t\treturn fmt.Errorf(\"updating ECH keys from storage: %v\", err)\n\t}\n\n\t// 
iterate the updated list and do any updates as needed\n\tfor publicName := range ech.configs {\n\t\tfor i := 0; i < len(ech.configs[publicName]); i++ {\n\t\t\tcfg := ech.configs[publicName][i]\n\t\t\tif time.Since(cfg.meta.Created) >= rotationInterval && cfg.meta.Replaced.IsZero() {\n\t\t\t\t// key is due for rotation and it hasn't been replaced yet; do that now\n\t\t\t\tlogger.Debug(\"ECH config is due for rotation\",\n\t\t\t\t\tzap.String(\"public_name\", cfg.RawPublicName),\n\t\t\t\t\tzap.Uint8(\"id\", cfg.ConfigID),\n\t\t\t\t\tzap.Time(\"created\", cfg.meta.Created),\n\t\t\t\t\tzap.Duration(\"age\", time.Since(cfg.meta.Created)),\n\t\t\t\t\tzap.Duration(\"rotation_interval\", rotationInterval))\n\n\t\t\t\t// start by generating and storing the replacement ECH config\n\t\t\t\tnewCfg, err := generateAndStoreECHConfig(ctx, publicName)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"generating and storing new replacement ECH config: %w\", err)\n\t\t\t\t}\n\n\t\t\t\t// mark the key as replaced so we don't rotate it again, and instead delete it later\n\t\t\t\tech.configs[publicName][i].meta.Replaced = time.Now()\n\n\t\t\t\t// persist the updated metadata\n\t\t\t\tmetaBytes, err := json.Marshal(ech.configs[publicName][i].meta)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"marshaling updated ECH config metadata: %v\", err)\n\t\t\t\t}\n\t\t\t\tif err := storage.Store(ctx, echMetaKey(cfg.ConfigID), metaBytes); err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"storing updated ECH config metadata: %v\", err)\n\t\t\t\t}\n\n\t\t\t\tech.configs[publicName] = append(ech.configs[publicName], newCfg)\n\n\t\t\t\tlogger.Debug(\"rotated ECH key\",\n\t\t\t\t\tzap.String(\"public_name\", cfg.RawPublicName),\n\t\t\t\t\tzap.Uint8(\"old_id\", cfg.ConfigID),\n\t\t\t\t\tzap.Uint8(\"new_id\", newCfg.ConfigID))\n\t\t\t} else if time.Since(cfg.meta.Created) >= deleteAfter && !cfg.meta.Replaced.IsZero() {\n\t\t\t\t// key has expired and is no longer supported; delete it from 
storage and memory\n\t\t\t\tcfgIDKey := path.Join(echConfigsKey, strconv.Itoa(int(cfg.ConfigID)))\n\t\t\t\tif err := storage.Delete(ctx, cfgIDKey); err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"deleting expired ECH config: %v\", err)\n\t\t\t\t}\n\n\t\t\t\tech.configs[publicName] = append(ech.configs[publicName][:i], ech.configs[publicName][i+1:]...)\n\t\t\t\ti--\n\n\t\t\t\tlogger.Debug(\"deleted expired ECH key\",\n\t\t\t\t\tzap.String(\"public_name\", cfg.RawPublicName),\n\t\t\t\t\tzap.Uint8(\"id\", cfg.ConfigID),\n\t\t\t\t\tzap.Duration(\"age\", time.Since(cfg.meta.Created)))\n\t\t\t}\n\t\t}\n\t}\n\n\tech.updateKeyList()\n\n\treturn nil\n}\n\n// rotationNeeded returns true if any ECH key needs to be replaced or deleted.\n// It must be called inside a read or write lock of ech.configsMu (probably a\n// write lock, so that the rotation can occur correctly in the same lock).\nfunc (ech *ECH) rotationNeeded(rotationInterval, deleteAfter time.Duration) bool {\n\tfor publicName := range ech.configs {\n\t\tfor i := 0; i < len(ech.configs[publicName]); i++ {\n\t\t\tcfg := ech.configs[publicName][i]\n\t\t\tif (time.Since(cfg.meta.Created) >= rotationInterval && cfg.meta.Replaced.IsZero()) ||\n\t\t\t\t(time.Since(cfg.meta.Created) >= deleteAfter && !cfg.meta.Replaced.IsZero()) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\n// updateKeyList updates the list of ECH keys the std lib uses to serve ECH.\n// It must be called inside a write lock on ech.configsMu.\nfunc (ech *ECH) updateKeyList() {\n\tech.stdlibReady = []tls.EncryptedClientHelloKey{}\n\tfor _, cfgs := range ech.configs {\n\t\tfor _, cfg := range cfgs {\n\t\t\tech.stdlibReady = append(ech.stdlibReady, tls.EncryptedClientHelloKey{\n\t\t\t\tConfig:      cfg.configBin,\n\t\t\t\tPrivateKey:  cfg.privKeyBin,\n\t\t\t\tSendAsRetry: cfg.meta.Replaced.IsZero(), // only send during retries if key has not been rotated out\n\t\t\t})\n\t\t}\n\t}\n}\n\n// publishECHConfigs publishes any configs that are 
configured for publication and which haven't been published already.\nfunc (t *TLS) publishECHConfigs(logger *zap.Logger) error {\n\t// make publication exclusive, since we don't need to repeat this unnecessarily\n\tstorage := t.ctx.Storage()\n\tconst echLockName = \"ech_publish\"\n\tif err := storage.Lock(t.ctx, echLockName); err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\tif err := storage.Unlock(t.ctx, echLockName); err != nil {\n\t\t\tlogger.Error(\"unable to unlock ECH provisioning in storage\", zap.Error(err))\n\t\t}\n\t}()\n\n\t// get the publication config, or use a default if not specified\n\t// (the default publication config should be to publish all ECH\n\t// configs to the app-global DNS provider; if no DNS provider is\n\t// configured, then this whole function is basically a no-op)\n\tpublicationList := t.EncryptedClientHello.Publication\n\tif publicationList == nil {\n\t\tif dnsProv, ok := t.dns.(ECHDNSProvider); ok {\n\t\t\tpublicationList = []*ECHPublication{\n\t\t\t\t{\n\t\t\t\t\tpublishers: []ECHPublisher{\n\t\t\t\t\t\t&ECHDNSPublisher{\n\t\t\t\t\t\t\tprovider: dnsProv,\n\t\t\t\t\t\t\tlogger:   logger,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}\n\t\t}\n\t}\n\n\t// for each publication config, build the list of ECH configs to\n\t// publish with it, and figure out which inner names to publish\n\t// to/for, then publish\n\tfor _, publication := range publicationList {\n\t\tt.EncryptedClientHello.configsMu.RLock()\n\t\t// this publication is either configured for specific ECH configs,\n\t\t// or we just use an implied default of all ECH configs\n\t\tvar echCfgList echConfigList\n\t\tvar configIDs []uint8 // TODO: use IDs or the outer names?\n\t\tif publication.Configs == nil {\n\t\t\t// by default, publish all configs\n\t\t\tfor _, configs := range t.EncryptedClientHello.configs {\n\t\t\t\techCfgList = append(echCfgList, configs...)\n\t\t\t\tfor _, c := range configs {\n\t\t\t\t\tconfigIDs = append(configIDs, 
c.ConfigID)\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tfor _, cfgOuterName := range publication.Configs {\n\t\t\t\tif cfgList, ok := t.EncryptedClientHello.configs[cfgOuterName]; ok {\n\t\t\t\t\techCfgList = append(echCfgList, cfgList...)\n\t\t\t\t\tfor _, c := range cfgList {\n\t\t\t\t\t\tconfigIDs = append(configIDs, c.ConfigID)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tt.EncryptedClientHello.configsMu.RUnlock()\n\n\t\t// marshal the ECH config list as binary for publication\n\t\techCfgListBin, err := echCfgList.MarshalBinary()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"marshaling ECH config list: %v\", err)\n\t\t}\n\n\t\t// now we have our list of ECH configs to publish and the inner names\n\t\t// to publish for (i.e. the names being protected); iterate each publisher\n\t\t// and do the publish for any config+name that needs a publish\n\t\tfor _, publisher := range publication.publishers {\n\t\t\tpublisherKey := publisher.PublisherKey()\n\n\t\t\t// by default, publish for all (non-outer) server names, unless\n\t\t\t// a specific list of names is configured\n\t\t\tvar serverNamesSet map[string]struct{}\n\t\t\tif publication.Domains == nil {\n\t\t\t\tserverNamesSet = make(map[string]struct{}, len(t.serverNames))\n\t\t\t\tfor name := range t.serverNames {\n\t\t\t\t\t// skip Tailscale names, a special case we also handle differently in our auto-HTTPS\n\t\t\t\t\tif strings.HasSuffix(name, \".ts.net\") {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tserverNamesSet[name] = struct{}{}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tserverNamesSet = make(map[string]struct{}, len(publication.Domains))\n\t\t\t\tfor _, name := range publication.Domains {\n\t\t\t\t\tserverNamesSet[name] = struct{}{}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// remove any domains from the set which have already had all configs in the\n\t\t\t// list published by this publisher, to avoid always re-publishing unnecessarily\n\t\t\tfor configuredInnerName := range serverNamesSet {\n\t\t\t\tallConfigsPublished := 
true\n\t\t\t\tfor _, cfg := range echCfgList {\n\t\t\t\t\t// TODO: Potentially utilize the timestamp (map value) for recent-enough publication, instead of just checking for existence\n\t\t\t\t\tif _, ok := cfg.meta.Publications[publisherKey][configuredInnerName]; !ok {\n\t\t\t\t\t\tallConfigsPublished = false\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif allConfigsPublished {\n\t\t\t\t\tdelete(serverNamesSet, configuredInnerName)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// if all the (inner) domains have had this ECH config list published\n\t\t\t// by this publisher, then try the next publication config\n\t\t\tif len(serverNamesSet) == 0 {\n\t\t\t\tlogger.Debug(\"ECH config list already published by publisher for associated domains (or no domains to publish for)\",\n\t\t\t\t\tzap.Uint8s(\"config_ids\", configIDs),\n\t\t\t\t\tzap.String(\"publisher\", publisherKey))\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// convert the set of names to a slice\n\t\t\tdnsNamesToPublish := make([]string, 0, len(serverNamesSet))\n\t\t\tfor name := range serverNamesSet {\n\t\t\t\tdnsNamesToPublish = append(dnsNamesToPublish, name)\n\t\t\t}\n\n\t\t\tlogger.Debug(\"publishing ECH config list\",\n\t\t\t\tzap.String(\"publisher\", publisherKey),\n\t\t\t\tzap.Strings(\"domains\", dnsNamesToPublish),\n\t\t\t\tzap.Uint8s(\"config_ids\", configIDs))\n\n\t\t\t// publish this ECH config list with this publisher\n\t\t\tpubTime := time.Now()\n\t\t\terr := publisher.PublishECHConfigList(t.ctx, dnsNamesToPublish, echCfgListBin)\n\n\t\t\tvar publishErrs PublishECHConfigListErrors\n\t\t\tif errors.As(err, &publishErrs) {\n\t\t\t\t// at least a partial failure, maybe a complete failure, but we can\n\t\t\t\t// log each error by domain\n\t\t\t\tfor innerName, domainErr := range publishErrs {\n\t\t\t\t\tlogger.Error(\"failed to publish ECH configuration list\",\n\t\t\t\t\t\tzap.String(\"publisher\", publisherKey),\n\t\t\t\t\t\tzap.String(\"domain\", innerName),\n\t\t\t\t\t\tzap.Uint8s(\"config_ids\", 
configIDs),\n\t\t\t\t\t\tzap.Error(domainErr))\n\t\t\t\t}\n\t\t\t} else if err != nil {\n\t\t\t\t// generic error; assume the entire thing failed, I guess\n\t\t\t\tlogger.Error(\"failed publishing ECH configuration list\",\n\t\t\t\t\tzap.String(\"publisher\", publisherKey),\n\t\t\t\t\tzap.Strings(\"domains\", dnsNamesToPublish),\n\t\t\t\t\tzap.Uint8s(\"config_ids\", configIDs),\n\t\t\t\t\tzap.Error(err))\n\t\t\t}\n\n\t\t\tif err == nil || (len(publishErrs) > 0 && len(publishErrs) < len(dnsNamesToPublish)) {\n\t\t\t\t// if publication for at least some domains succeeded, we should update our publication\n\t\t\t\t// state for those domains to avoid unnecessarily republishing every time\n\t\t\t\tsomeAll := \"all\"\n\t\t\t\tif len(publishErrs) > 0 {\n\t\t\t\t\tsomeAll = \"some\"\n\t\t\t\t}\n\t\t\t\t// make a list of names that published successfully with this publisher\n\t\t\t\t// so that we update only their state in storage, not the failed ones\n\t\t\t\tvar successNames []string\n\t\t\t\tfor _, name := range dnsNamesToPublish {\n\t\t\t\t\tif _, ok := publishErrs[name]; !ok {\n\t\t\t\t\t\tsuccessNames = append(successNames, name)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tlogger.Info(\"successfully published ECH configuration list for \"+someAll+\" domains\",\n\t\t\t\t\tzap.String(\"publisher\", publisherKey),\n\t\t\t\t\tzap.Strings(\"domains\", successNames),\n\t\t\t\t\tzap.Uint8s(\"config_ids\", configIDs))\n\n\t\t\t\tfor _, cfg := range echCfgList {\n\t\t\t\t\tif cfg.meta.Publications == nil {\n\t\t\t\t\t\tcfg.meta.Publications = make(publicationHistory)\n\t\t\t\t\t}\n\t\t\t\t\tif _, ok := cfg.meta.Publications[publisherKey]; !ok {\n\t\t\t\t\t\tcfg.meta.Publications[publisherKey] = make(map[string]time.Time)\n\t\t\t\t\t}\n\t\t\t\t\tfor _, name := range successNames {\n\t\t\t\t\t\tcfg.meta.Publications[publisherKey][name] = pubTime\n\t\t\t\t\t}\n\t\t\t\t\tmetaBytes, err := json.Marshal(cfg.meta)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"marshaling ECH config 
metadata: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif err := t.ctx.Storage().Store(t.ctx, echMetaKey(cfg.ConfigID), metaBytes); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"storing updated ECH config metadata: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tlogger.Error(\"all domains failed to publish ECH configuration list (see earlier errors)\",\n\t\t\t\t\tzap.String(\"publisher\", publisherKey),\n\t\t\t\t\tzap.Strings(\"domains\", dnsNamesToPublish),\n\t\t\t\t\tzap.Uint8s(\"config_ids\", configIDs))\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// loadECHConfig loads the config from storage with the given configID.\n// An error is not necessarily returned when the config fails to load,\n// because often that just means the config ID folder has been cleaned\n// up in storage, maybe due to an incomplete set of keys or corrupted\n// contents; in any case, the only rectification is to delete it and\n// make new keys (an error IS returned if deleting the corrupted keys\n// fails, for example). 
Check the returned echConfig for\n// non-nil privKeyBin and configBin values before using.\nfunc loadECHConfig(ctx caddy.Context, configID string) (echConfig, error) {\n\tstorage := ctx.Storage()\n\tlogger := ctx.Logger()\n\n\tcfgIDKey := path.Join(echConfigsKey, configID)\n\tkeyKey := path.Join(cfgIDKey, \"key.bin\")\n\tconfigKey := path.Join(cfgIDKey, \"config.bin\")\n\tmetaKey := path.Join(cfgIDKey, \"meta.json\")\n\n\t// if loading anything fails, might as well delete this folder and free up\n\t// the config ID; spec is designed to rotate configs frequently anyway\n\t// (I consider it a more serious error if we can't clean up the folder,\n\t// since leaving stray storage keys is confusing)\n\tprivKeyBytes, err := storage.Load(ctx, keyKey)\n\tif err != nil {\n\t\tdelErr := storage.Delete(ctx, cfgIDKey)\n\t\tif delErr != nil {\n\t\t\treturn echConfig{}, fmt.Errorf(\"error loading private key (%v) and cleaning up parent storage key %s: %v\", err, cfgIDKey, delErr)\n\t\t}\n\t\tlogger.Warn(\"could not load ECH private key; deleting its config folder\",\n\t\t\tzap.String(\"config_id\", configID),\n\t\t\tzap.Error(err))\n\t\treturn echConfig{}, nil\n\t}\n\techConfigBytes, err := storage.Load(ctx, configKey)\n\tif err != nil {\n\t\tdelErr := storage.Delete(ctx, cfgIDKey)\n\t\tif delErr != nil {\n\t\t\treturn echConfig{}, fmt.Errorf(\"error loading ECH config (%v) and cleaning up parent storage key %s: %v\", err, cfgIDKey, delErr)\n\t\t}\n\t\tlogger.Warn(\"could not load ECH config; deleting its config folder\",\n\t\t\tzap.String(\"config_id\", configID),\n\t\t\tzap.Error(err))\n\t\treturn echConfig{}, nil\n\t}\n\tvar cfg echConfig\n\tif err := cfg.UnmarshalBinary(echConfigBytes); err != nil {\n\t\tdelErr := storage.Delete(ctx, cfgIDKey)\n\t\tif delErr != nil {\n\t\t\treturn echConfig{}, fmt.Errorf(\"error loading ECH config (%v) and cleaning up parent storage key %s: %v\", err, cfgIDKey, delErr)\n\t\t}\n\t\tlogger.Warn(\"could not load ECH config; deleted its config 
folder\",\n\t\t\tzap.String(\"config_id\", configID),\n\t\t\tzap.Error(err))\n\t\treturn echConfig{}, nil\n\t}\n\tmetaBytes, err := storage.Load(ctx, metaKey)\n\tif errors.Is(err, fs.ErrNotExist) {\n\t\tlogger.Warn(\"ECH config metadata file missing; will recreate at next publication\",\n\t\t\tzap.String(\"config_id\", configID),\n\t\t\tzap.Error(err))\n\t} else if err != nil {\n\t\tdelErr := storage.Delete(ctx, cfgIDKey)\n\t\tif delErr != nil {\n\t\t\treturn echConfig{}, fmt.Errorf(\"error loading ECH config metadata (%v) and cleaning up parent storage key %s: %v\", err, cfgIDKey, delErr)\n\t\t}\n\t\tlogger.Warn(\"could not load ECH config metadata; deleted its folder\",\n\t\t\tzap.String(\"config_id\", configID),\n\t\t\tzap.Error(err))\n\t\treturn echConfig{}, nil\n\t}\n\tvar meta echConfigMeta\n\tif len(metaBytes) > 0 {\n\t\tif err := json.Unmarshal(metaBytes, &meta); err != nil {\n\t\t\t// even though it's just metadata, reset the whole config since we can't reliably maintain it\n\t\t\tdelErr := storage.Delete(ctx, cfgIDKey)\n\t\t\tif delErr != nil {\n\t\t\t\treturn echConfig{}, fmt.Errorf(\"error decoding ECH metadata (%v) and cleaning up parent storage key %s: %v\", err, cfgIDKey, delErr)\n\t\t\t}\n\t\t\tlogger.Warn(\"could not JSON-decode ECH metadata; deleted its config folder\",\n\t\t\t\tzap.String(\"config_id\", configID),\n\t\t\t\tzap.Error(err))\n\t\t\treturn echConfig{}, nil\n\t\t}\n\t}\n\n\tcfg.privKeyBin = privKeyBytes\n\tcfg.configBin = echConfigBytes\n\tcfg.meta = meta\n\n\treturn cfg, nil\n}\n\nfunc generateAndStoreECHConfig(ctx caddy.Context, publicName string) (echConfig, error) {\n\t// Go currently has very strict requirements for server-side ECH configs,\n\t// to quote the Go 1.24 godoc (with typos of AEAD IDs corrected):\n\t//\n\t// \"Config should be a marshalled ECHConfig associated with PrivateKey. This\n\t// must match the config provided to clients byte-for-byte. 
The config\n\t// should only specify the DHKEM(X25519, HKDF-SHA256) KEM ID (0x0020), the\n\t// HKDF-SHA256 KDF ID (0x0001), and a subset of the following AEAD IDs:\n\t// AES-128-GCM (0x0001), AES-256-GCM (0x0002), ChaCha20Poly1305 (0x0003).\"\n\t//\n\t// So we need to be sure we generate a config within these parameters\n\t// so the Go TLS server can use it.\n\n\t// generate a key pair\n\tconst kemChoice = hpke.KEM_X25519_HKDF_SHA256\n\tpublicKey, privateKey, err := kemChoice.Scheme().GenerateKeyPair()\n\tif err != nil {\n\t\treturn echConfig{}, err\n\t}\n\n\t// find an available config ID\n\tconfigID, err := newECHConfigID(ctx)\n\tif err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"generating unique config ID: %v\", err)\n\t}\n\n\techCfg := echConfig{\n\t\tPublicKey:     publicKey,\n\t\tVersion:       draftTLSESNI25,\n\t\tConfigID:      configID,\n\t\tRawPublicName: publicName,\n\t\tKEMID:         kemChoice,\n\t\tCipherSuites: []hpkeSymmetricCipherSuite{\n\t\t\t{\n\t\t\t\tKDFID:  hpke.KDF_HKDF_SHA256,\n\t\t\t\tAEADID: hpke.AEAD_AES128GCM,\n\t\t\t},\n\t\t\t{\n\t\t\t\tKDFID:  hpke.KDF_HKDF_SHA256,\n\t\t\t\tAEADID: hpke.AEAD_AES256GCM,\n\t\t\t},\n\t\t\t{\n\t\t\t\tKDFID:  hpke.KDF_HKDF_SHA256,\n\t\t\t\tAEADID: hpke.AEAD_ChaCha20Poly1305,\n\t\t\t},\n\t\t},\n\t}\n\tmeta := echConfigMeta{\n\t\tCreated: time.Now(),\n\t}\n\n\tprivKeyBytes, err := privateKey.MarshalBinary()\n\tif err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"marshaling ECH private key: %v\", err)\n\t}\n\techConfigBytes, err := echCfg.MarshalBinary()\n\tif err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"marshaling ECH config: %v\", err)\n\t}\n\tmetaBytes, err := json.Marshal(meta)\n\tif err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"marshaling ECH config metadata: %v\", err)\n\t}\n\n\tparentKey := path.Join(echConfigsKey, strconv.Itoa(int(configID)))\n\tkeyKey := path.Join(parentKey, \"key.bin\")\n\tconfigKey := path.Join(parentKey, \"config.bin\")\n\tmetaKey := path.Join(parentKey, 
\"meta.json\")\n\n\tif err := ctx.Storage().Store(ctx, keyKey, privKeyBytes); err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"storing ECH private key: %v\", err)\n\t}\n\tif err := ctx.Storage().Store(ctx, configKey, echConfigBytes); err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"storing ECH config: %v\", err)\n\t}\n\tif err := ctx.Storage().Store(ctx, metaKey, metaBytes); err != nil {\n\t\treturn echConfig{}, fmt.Errorf(\"storing ECH config metadata: %v\", err)\n\t}\n\n\techCfg.privKeyBin = privKeyBytes\n\techCfg.configBin = echConfigBytes // this contains the public key\n\techCfg.meta = meta\n\n\treturn echCfg, nil\n}\n\n// ECHConfiguration represents an Encrypted ClientHello (ECH) configuration.\n//\n// EXPERIMENTAL: Subject to change.\ntype ECHConfiguration struct {\n\t// The public server name (SNI) that will be used in the outer ClientHello.\n\t// This should be a domain name for which this server is authoritative,\n\t// because Caddy will try to provision a certificate for this name. As an\n\t// outer SNI, it is never used for application data (HTTPS, etc.), but it\n\t// is necessary for enabling clients to connect securely in some cases.\n\t// If this field is empty or missing, or if Caddy cannot get a certificate\n\t// for this domain (e.g. the domain's DNS records do not point to this server),\n\t// client reliability becomes brittle, and you risk coercing clients to expose\n\t// true server names in plaintext, which both compromises the privacy of\n\t// the server and makes clients more vulnerable.\n\tPublicName string `json:\"public_name\"`\n}\n\n// ECHPublication configures publication of ECH config(s). 
It pairs a list\n// of ECH configs with the list of domains they are assigned to protect, and\n// describes how to publish those configs for those domains.\n//\n// Most servers will have only a single publication config, unless their\n// domains are spread across multiple DNS providers or require different\n// methods of publication.\n//\n// EXPERIMENTAL: Subject to change.\ntype ECHPublication struct {\n\t// The list of ECH configurations to publish, identified by public name.\n\t// If not set, all configs will be included for publication by default.\n\t//\n\t// It is generally advised to maximize the size of your anonymity set,\n\t// which implies using as few public names as possible for your sites.\n\t// Usually, only a single public name is used to protect all the sites\n\t// for a server.\n\t//\n\t// EXPERIMENTAL: This field may be renamed or have its structure changed.\n\tConfigs []string `json:\"configs,omitempty\"`\n\n\t// The list of (\"inner\") domain names which are protected with the associated\n\t// ECH configurations.\n\t//\n\t// If not set, all server names registered with the TLS module will be\n\t// added to this list implicitly. (This registration is done automatically\n\t// by other Caddy apps that use the TLS module. They should register their\n\t// configured server names for this purpose. For example, the HTTP server\n\t// registers the hostnames for which it applies automatic HTTPS. This is\n\t// not something you, the user, have to do.)\n\t//\n\t// Names in this list should not appear in any other publication config\n\t// object with the same publishers, since the publications will likely\n\t// overwrite each other.\n\t//\n\t// NOTE: In order to publish ECH configs for domains configured for\n\t// On-Demand TLS that are not explicitly enumerated elsewhere in the\n\t// config, those domain names will have to be listed here. 
The only\n\t// time Caddy knows which domains it is serving with On-Demand TLS is\n\t// handshake-time, which is too late for publishing ECH configs; it\n\t// means the first connections would not protect the server names,\n\t// revealing that information to observers, and thus defeating the\n\t// purpose of ECH. Hence the need to list them here so Caddy can\n\t// proactively publish ECH configs before clients connect with those\n\t// server names in plaintext.\n\tDomains []string `json:\"domains,omitempty\"`\n\n\t// How to publish the ECH configurations so clients can know to use\n\t// ECH to connect more securely to the server.\n\tPublishersRaw caddy.ModuleMap `json:\"publishers,omitempty\" caddy:\"namespace=tls.ech.publishers\"`\n\tpublishers    []ECHPublisher\n}\n\n// ECHDNSProvider can service DNS entries for ECH purposes.\ntype ECHDNSProvider interface {\n\tlibdns.RecordGetter\n\tlibdns.RecordSetter\n}\n\n// ECHDNSPublisher configures how to publish an ECH configuration to\n// DNS records for the specified domains.\n//\n// EXPERIMENTAL: Subject to change.\ntype ECHDNSPublisher struct {\n\t// The DNS provider module which will establish the HTTPS record(s).\n\tProviderRaw json.RawMessage `json:\"provider,omitempty\" caddy:\"namespace=dns.providers inline_key=name\"`\n\tprovider    ECHDNSProvider\n\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (ECHDNSPublisher) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.ech.publishers.dns\",\n\t\tNew: func() caddy.Module { return new(ECHDNSPublisher) },\n\t}\n}\n\nfunc (dnsPub *ECHDNSPublisher) Provision(ctx caddy.Context) error {\n\tdnsProvMod, err := ctx.LoadModule(dnsPub, \"ProviderRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading ECH DNS provider module: %v\", err)\n\t}\n\tprov, ok := dnsProvMod.(ECHDNSProvider)\n\tif !ok {\n\t\t// err is nil at this point (LoadModule succeeded), so report the module's type instead\n\t\treturn fmt.Errorf(\"ECH DNS provider module %T is not an ECH DNS provider\", dnsProvMod)\n\t}\n\tdnsPub.provider = 
prov\n\tdnsPub.logger = ctx.Logger()\n\treturn nil\n}\n\n// PublisherKey returns the name of the DNS provider module.\n// We intentionally omit the specific provider configuration (or a hash\n// thereof, since the config is likely sensitive, potentially containing\n// an API key) because details such as an API key are unlikely to be\n// relevant to this module's identity as an ECH config publisher.\nfunc (dnsPub ECHDNSPublisher) PublisherKey() string {\n\treturn string(dnsPub.provider.(caddy.Module).CaddyModule().ID)\n}\n\n// PublishECHConfigList publishes the given ECH config list (as binary) to the given DNS names.\n// If there is an error, it may be of type PublishECHConfigListErrors, detailing\n// potentially multiple errors keyed by associated innerName.\nfunc (dnsPub *ECHDNSPublisher) PublishECHConfigList(ctx context.Context, innerNames []string, configListBin []byte) error {\n\tnameservers := certmagic.RecursiveNameservers(nil) // TODO: we could make resolvers configurable\n\n\terrs := make(PublishECHConfigListErrors)\n\nnextName:\n\tfor _, domain := range innerNames {\n\t\tzone, err := certmagic.FindZoneByFQDN(ctx, dnsPub.logger, domain, nameservers)\n\t\tif err != nil {\n\t\t\terrs[domain] = fmt.Errorf(\"could not determine zone for domain: %w (domain=%s nameservers=%v)\", err, domain, nameservers)\n\t\t\tcontinue\n\t\t}\n\n\t\trelName := libdns.RelativeName(domain+\".\", zone)\n\n\t\t// get existing records for this domain; we need to make sure another\n\t\t// record exists for it so we don't accidentally trample a wildcard; we\n\t\t// also want to get any HTTPS record that may already exist for it so\n\t\t// we can augment the ech SvcParamKey with any other existing SvcParams\n\t\trecs, err := dnsPub.provider.GetRecords(ctx, zone)\n\t\tif err != nil {\n\t\t\terrs[domain] = fmt.Errorf(\"unable to get existing DNS records to publish ECH data to HTTPS DNS record: %w\", err)\n\t\t\tcontinue\n\t\t}\n\t\tvar httpsRec libdns.ServiceBinding\n\t\tvar 
nameHasExistingRecord bool\n\t\tfor _, rec := range recs {\n\t\t\trr := rec.RR()\n\t\t\tif rr.Name == relName {\n\t\t\t\t// CNAME records are exclusive of all other records, so we cannot publish an HTTPS\n\t\t\t\t// record for a domain that is CNAME'd. See #6922.\n\t\t\t\tif rr.Type == \"CNAME\" {\n\t\t\t\t\tdnsPub.logger.Warn(\"domain has CNAME record, so unable to publish ECH data to HTTPS record\",\n\t\t\t\t\t\tzap.String(\"domain\", domain),\n\t\t\t\t\t\tzap.String(\"cname_value\", rr.Data))\n\t\t\t\t\tcontinue nextName\n\t\t\t\t}\n\t\t\t\tnameHasExistingRecord = true\n\t\t\t\tif svcb, ok := rec.(libdns.ServiceBinding); ok && svcb.Scheme == \"https\" {\n\t\t\t\t\tif svcb.Target == \"\" || svcb.Target == \".\" {\n\t\t\t\t\t\thttpsRec = svcb\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif !nameHasExistingRecord {\n\t\t\t// Turns out if you publish a DNS record for a name that doesn't have any DNS record yet,\n\t\t\t// any wildcard records won't apply for the name anymore, meaning if a wildcard A/AAAA record\n\t\t\t// is used to resolve the domain to a server, publishing an HTTPS record could break resolution!\n\t\t\t// In theory, this should be a non-issue, at least for A/AAAA records, if the HTTPS record\n\t\t\t// includes ipv[4|6]hint SvcParamKeys, but we don't set those hints, so\n\t\t\t// we play it safe and skip publication.\n\t\t\tdnsPub.logger.Warn(\"domain does not have any existing records, so skipping publication of HTTPS record\",\n\t\t\t\tzap.String(\"domain\", domain),\n\t\t\t\tzap.String(\"relative_name\", relName),\n\t\t\t\tzap.String(\"zone\", zone))\n\t\t\tcontinue\n\t\t}\n\t\tparams := httpsRec.Params\n\t\tif params == nil {\n\t\t\tparams = make(libdns.SvcParams)\n\t\t}\n\n\t\t// overwrite only the \"ech\" SvcParamKey\n\t\tparams[\"ech\"] = []string{base64.StdEncoding.EncodeToString(configListBin)}\n\n\t\t// publish record\n\t\t_, err = dnsPub.provider.SetRecords(ctx, zone, []libdns.Record{\n\t\t\tlibdns.ServiceBinding{\n\t\t\t\t// HTTPS and SVCB RRs: RFC 9460 
(https://www.rfc-editor.org/rfc/rfc9460)\n\t\t\t\tScheme:   \"https\",\n\t\t\t\tName:     relName,\n\t\t\t\tTTL:      5 * time.Minute, // TODO: low hard-coded value only temporary; change to a higher value once more field-tested and key rotation is implemented\n\t\t\t\tPriority: 2,               // allows a manual override with priority 1\n\t\t\t\tTarget:   \".\",\n\t\t\t\tParams:   params,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\terrs[domain] = fmt.Errorf(\"unable to publish ECH data to HTTPS DNS record: %w (zone=%s dns_record_name=%s)\", err, zone, relName)\n\t\t\tcontinue\n\t\t}\n\t}\n\n\tif len(errs) > 0 {\n\t\treturn errs\n\t}\n\treturn nil\n}\n\n// echConfig represents an ECHConfig from the specification,\n// [draft-ietf-tls-esni-25](https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.html).\ntype echConfig struct {\n\t// \"The version of ECH for which this configuration is used.\n\t// The version is the same as the code point for the\n\t// encrypted_client_hello extension. 
Clients MUST ignore any\n\t// ECHConfig structure with a version they do not support.\"\n\tVersion uint16\n\n\t// The \"length\" and \"contents\" fields defined next in the\n\t// structure are implicitly taken care of by cryptobyte\n\t// when encoding the following fields:\n\n\t// HpkeKeyConfig fields:\n\tConfigID     uint8\n\tKEMID        hpke.KEM\n\tPublicKey    kem.PublicKey\n\tCipherSuites []hpkeSymmetricCipherSuite\n\n\t// ECHConfigContents fields:\n\tMaxNameLength uint8\n\tRawPublicName string\n\tRawExtensions []byte\n\n\t// these fields are not part of the spec, but are here for\n\t// our use when setting up TLS servers or maintenance\n\tconfigBin  []byte\n\tprivKeyBin []byte\n\tmeta       echConfigMeta\n}\n\nfunc (echCfg echConfig) MarshalBinary() ([]byte, error) {\n\tvar b cryptobyte.Builder\n\tif err := echCfg.marshalBinary(&b); err != nil {\n\t\treturn nil, err\n\t}\n\treturn b.Bytes()\n}\n\n// UnmarshalBinary decodes the data back into an ECH config.\n//\n// Borrowed from github.com/OmarTariq612/goech with modifications.\n// Original code: Copyright (c) 2023 Omar Tariq AbdEl-Raziq\nfunc (echCfg *echConfig) UnmarshalBinary(data []byte) error {\n\tvar content cryptobyte.String\n\tb := cryptobyte.String(data)\n\n\tif !b.ReadUint16(&echCfg.Version) {\n\t\treturn errInvalidLen\n\t}\n\tif echCfg.Version != draftTLSESNI25 {\n\t\treturn fmt.Errorf(\"supported version must be %d: got %d\", draftTLSESNI25, echCfg.Version)\n\t}\n\n\tif !b.ReadUint16LengthPrefixed(&content) || !b.Empty() {\n\t\treturn errInvalidLen\n\t}\n\n\tvar t cryptobyte.String\n\tvar pk []byte\n\n\tif !content.ReadUint8(&echCfg.ConfigID) ||\n\t\t!content.ReadUint16((*uint16)(&echCfg.KEMID)) ||\n\t\t!content.ReadUint16LengthPrefixed(&t) ||\n\t\t!t.ReadBytes(&pk, len(t)) ||\n\t\t!content.ReadUint16LengthPrefixed(&t) ||\n\t\tlen(t)%4 != 0 /* the length of (KDFs and AEADs) must be divisible by 4 */ {\n\t\treturn errInvalidLen\n\t}\n\n\tif !echCfg.KEMID.IsValid() {\n\t\treturn fmt.Errorf(\"invalid 
KEM ID: %d\", echCfg.KEMID)\n\t}\n\n\tvar err error\n\tif echCfg.PublicKey, err = echCfg.KEMID.Scheme().UnmarshalBinaryPublicKey(pk); err != nil {\n\t\treturn fmt.Errorf(\"parsing public_key: %w\", err)\n\t}\n\n\techCfg.CipherSuites = echCfg.CipherSuites[:0]\n\n\tfor !t.Empty() {\n\t\tvar hpkeKDF, hpkeAEAD uint16\n\t\tif !t.ReadUint16(&hpkeKDF) || !t.ReadUint16(&hpkeAEAD) {\n\t\t\t// we have already checked that the length is divisible by 4\n\t\t\tpanic(\"this must not happen\")\n\t\t}\n\t\tif !hpke.KDF(hpkeKDF).IsValid() {\n\t\t\treturn fmt.Errorf(\"invalid KDF ID: %d\", hpkeKDF)\n\t\t}\n\t\tif !hpke.AEAD(hpkeAEAD).IsValid() {\n\t\t\treturn fmt.Errorf(\"invalid AEAD ID: %d\", hpkeAEAD)\n\t\t}\n\t\techCfg.CipherSuites = append(echCfg.CipherSuites, hpkeSymmetricCipherSuite{\n\t\t\tKDFID:  hpke.KDF(hpkeKDF),\n\t\t\tAEADID: hpke.AEAD(hpkeAEAD),\n\t\t})\n\t}\n\n\tvar rawPublicName []byte\n\tif !content.ReadUint8(&echCfg.MaxNameLength) ||\n\t\t!content.ReadUint8LengthPrefixed(&t) ||\n\t\t!t.ReadBytes(&rawPublicName, len(t)) ||\n\t\t!content.ReadUint16LengthPrefixed(&t) ||\n\t\t!t.ReadBytes(&echCfg.RawExtensions, len(t)) ||\n\t\t!content.Empty() {\n\t\treturn errInvalidLen\n\t}\n\techCfg.RawPublicName = string(rawPublicName)\n\n\treturn nil\n}\n\nvar errInvalidLen = errors.New(\"invalid length\")\n\n// marshalBinary writes this config to the cryptobyte builder. 
If there is an error,\n// it will occur before any writes have happened.\nfunc (echCfg echConfig) marshalBinary(b *cryptobyte.Builder) error {\n\tpk, err := echCfg.PublicKey.MarshalBinary()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif l := len(echCfg.RawPublicName); l == 0 || l > 255 {\n\t\treturn fmt.Errorf(\"public name length (%d) must be in the range 1-255\", l)\n\t}\n\n\tb.AddUint16(echCfg.Version)\n\tb.AddUint16LengthPrefixed(func(b *cryptobyte.Builder) { // \"length\" field\n\t\tb.AddUint8(echCfg.ConfigID)\n\t\tb.AddUint16(uint16(echCfg.KEMID))\n\t\tb.AddUint16LengthPrefixed(func(b *cryptobyte.Builder) {\n\t\t\tb.AddBytes(pk)\n\t\t})\n\t\tb.AddUint16LengthPrefixed(func(b *cryptobyte.Builder) {\n\t\t\tfor _, cs := range echCfg.CipherSuites {\n\t\t\t\tb.AddUint16(uint16(cs.KDFID))\n\t\t\t\tb.AddUint16(uint16(cs.AEADID))\n\t\t\t}\n\t\t})\n\t\tb.AddUint8(uint8(min(len(echCfg.RawPublicName)+16, 255)))\n\t\tb.AddUint8LengthPrefixed(func(b *cryptobyte.Builder) {\n\t\t\tb.AddBytes([]byte(echCfg.RawPublicName))\n\t\t})\n\t\tb.AddUint16LengthPrefixed(func(child *cryptobyte.Builder) {\n\t\t\tchild.AddBytes(echCfg.RawExtensions)\n\t\t})\n\t})\n\n\treturn nil\n}\n\ntype hpkeSymmetricCipherSuite struct {\n\tKDFID  hpke.KDF\n\tAEADID hpke.AEAD\n}\n\ntype echConfigList []echConfig\n\nfunc (cl echConfigList) MarshalBinary() ([]byte, error) {\n\tvar b cryptobyte.Builder\n\tvar err error\n\n\t// the list's length prefixes the list, as with most opaque values\n\tb.AddUint16LengthPrefixed(func(b *cryptobyte.Builder) {\n\t\tfor _, cfg := range cl {\n\t\t\tif err = cfg.marshalBinary(b); err != nil {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn b.Bytes()\n}\n\nfunc newECHConfigID(ctx caddy.Context) (uint8, error) {\n\t// uint8 can be 0-255 inclusive\n\tconst uint8Range = 256\n\n\t// avoid repeating storage checks\n\ttried := make([]bool, uint8Range)\n\n\t// Try to find an available number with random rejection sampling;\n\t// i.e. 
choose a random number and see if it's already taken.\n\t// The hard limit on how many times we try to find an available\n\t// number is flexible... in theory, assuming uniform distribution,\n\t// 256 attempts should make each possible value show up exactly\n\t// once, but obviously that won't be the case. We can try more\n\t// times to try to ensure that every number gets a chance, which\n\t// is especially useful if few are available, or we can lower it\n\t// if we assume we should have found an available value by then\n\t// and want to limit runtime; for now I choose the middle ground\n\t// and just try as many times as there are possible values.\n\tfor i := 0; i < uint8Range && ctx.Err() == nil; i++ {\n\t\tnum := uint8(weakrand.N(uint8Range)) //nolint:gosec\n\n\t\t// don't try the same number a second time\n\t\tif tried[num] {\n\t\t\tcontinue\n\t\t}\n\t\ttried[num] = true\n\n\t\t// check to see if any of the subkeys use this config ID\n\t\tnumStr := strconv.Itoa(int(num))\n\t\ttrialPath := path.Join(echConfigsKey, numStr)\n\t\tif ctx.Storage().Exists(ctx, trialPath) {\n\t\t\tcontinue\n\t\t}\n\n\t\treturn num, nil\n\t}\n\n\tif err := ctx.Err(); err != nil {\n\t\treturn 0, err\n\t}\n\n\treturn 0, fmt.Errorf(\"depleted attempts to find an available config_id\")\n}\n\n// ECHPublisher is an interface for publishing ECHConfigList values\n// so that they can be used by clients.\ntype ECHPublisher interface {\n\t// Returns a key that is unique to this publisher and its configuration.\n\t// A publisher's ID combined with its config is a valid key.\n\t// It is used to prevent duplicating publications.\n\tPublisherKey() string\n\n\t// Publishes the ECH config list (as binary) for the given innerNames. 
Some\n\t// publishers may not need a list of inner/protected names, and can ignore the\n\t// argument; most, however, will want to use it to know which inner names are\n\t// to be associated with the given ECH config list.\n\t//\n\t// Implementations should return an error of type PublishECHConfigListErrors,\n\t// when relevant, to key each error to its associated innerName; they should\n\t// never return a non-nil PublishECHConfigListErrors of length 0.\n\tPublishECHConfigList(ctx context.Context, innerNames []string, echConfigList []byte) error\n}\n\n// PublishECHConfigListErrors is returned by ECHPublishers to describe one or more\n// errors publishing an ECH config list from PublishECHConfigList. A non-nil, empty\n// value of this type should never be returned.\n// nolint:errname // The linter wants "Error" convention, but this is a multi-error type.\ntype PublishECHConfigListErrors map[string]error\n\nfunc (p PublishECHConfigListErrors) Error() string {\n\tvar sb strings.Builder\n\tfor innerName, err := range p {\n\t\tif sb.Len() > 0 {\n\t\t\tsb.WriteString("; ")\n\t\t}\n\t\tsb.WriteString(innerName)\n\t\tsb.WriteString(": ")\n\t\tsb.WriteString(err.Error())\n\t}\n\treturn sb.String()\n}\n\ntype echConfigMeta struct {\n\tCreated      time.Time          `json:\"created\"`\n\tReplaced     time.Time          `json:\"replaced,omitzero\"`\n\tPublications publicationHistory `json:\"publications\"`\n}\n\nfunc echMetaKey(configID uint8) string {\n\treturn path.Join(echConfigsKey, strconv.Itoa(int(configID)), \"meta.json\")\n}\n\n// publicationHistory is a map of publisher key to\n// map of inner name to timestamp\ntype publicationHistory map[string]map[string]time.Time\n\n// echStorageLockName is the name of the storage lock to sync ECH updates.\nconst echStorageLockName = \"ech_rotation\"\n\n// The key prefix when putting ECH configs in storage. 
After this\n// comes the config ID.\nconst echConfigsKey = \"ech/configs\"\n\n// https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.html\nconst draftTLSESNI25 = 0xfe0d\n\n// Interface guard\nvar _ ECHPublisher = (*ECHDNSPublisher)(nil)\n"
  },
  {
    "path": "modules/caddytls/fileloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/tls\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(FileLoader{})\n}\n\n// FileLoader loads certificates and their associated keys from disk.\ntype FileLoader []CertKeyFilePair\n\n// Provision implements caddy.Provisioner.\nfunc (fl FileLoader) Provision(ctx caddy.Context) error {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, pair := range fl {\n\t\tfor i, tag := range pair.Tags {\n\t\t\tpair.Tags[i] = repl.ReplaceKnown(tag, \"\")\n\t\t}\n\t\tfl[k] = CertKeyFilePair{\n\t\t\tCertificate: repl.ReplaceKnown(pair.Certificate, \"\"),\n\t\t\tKey:         repl.ReplaceKnown(pair.Key, \"\"),\n\t\t\tFormat:      repl.ReplaceKnown(pair.Format, \"\"),\n\t\t\tTags:        pair.Tags,\n\t\t}\n\t}\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (FileLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.certificates.load_files\",\n\t\tNew: func() caddy.Module { return new(FileLoader) },\n\t}\n}\n\n// CertKeyFilePair pairs certificate and key file names along with their\n// encoding format so that they can be loaded from disk.\ntype CertKeyFilePair struct {\n\t// Path to the certificate (public 
key) file.\n\tCertificate string `json:\"certificate\"`\n\n\t// Path to the private key file.\n\tKey string `json:\"key\"`\n\n\t// The format of the cert and key. Can be \"pem\". Default: \"pem\"\n\tFormat string `json:\"format,omitempty\"`\n\n\t// Arbitrary values to associate with this certificate.\n\t// Can be useful when you want to select a particular\n\t// certificate when there may be multiple valid candidates.\n\tTags []string `json:\"tags,omitempty\"`\n}\n\n// LoadCertificates returns the certificates to be loaded by fl.\nfunc (fl FileLoader) LoadCertificates() ([]Certificate, error) {\n\tcerts := make([]Certificate, 0, len(fl))\n\tfor _, pair := range fl {\n\t\tcertData, err := os.ReadFile(pair.Certificate)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tkeyData, err := os.ReadFile(pair.Key)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tvar cert tls.Certificate\n\t\tswitch pair.Format {\n\t\tcase \"\":\n\t\t\tfallthrough\n\n\t\tcase \"pem\":\n\t\t\t// if the start of the key file looks like an encrypted private key,\n\t\t\t// reject it with a helpful error message\n\t\t\tif strings.Contains(string(keyData[:40]), \"ENCRYPTED\") {\n\t\t\t\treturn nil, fmt.Errorf(\"encrypted private keys are not supported; please decrypt the key first\")\n\t\t\t}\n\n\t\t\tcert, err = tls.X509KeyPair(certData, keyData)\n\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unrecognized certificate/key encoding format: %s\", pair.Format)\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tcerts = append(certs, Certificate{Certificate: cert, Tags: pair.Tags})\n\t}\n\treturn certs, nil\n}\n\n// Interface guard\nvar (\n\t_ CertificateLoader = (FileLoader)(nil)\n\t_ caddy.Provisioner = (FileLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/folderloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"bytes\"\n\t\"crypto/tls\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"io/fs\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(FolderLoader{})\n}\n\n// FolderLoader loads certificates and their associated keys from disk\n// by recursively walking the specified directories, looking for PEM\n// files which contain both a certificate and a key.\ntype FolderLoader []string\n\n// CaddyModule returns the Caddy module information.\nfunc (FolderLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.certificates.load_folders\",\n\t\tNew: func() caddy.Module { return new(FolderLoader) },\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (fl FolderLoader) Provision(ctx caddy.Context) error {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, path := range fl {\n\t\tfl[k] = repl.ReplaceKnown(path, \"\")\n\t}\n\treturn nil\n}\n\n// LoadCertificates loads all the certificates+keys in the directories\n// listed in fl from all files ending with .pem. 
This method of loading\n// certificates expects the certificate and key to be bundled into the\n// same file.\nfunc (fl FolderLoader) LoadCertificates() ([]Certificate, error) {\n\tvar certs []Certificate\n\tfor _, dir := range fl {\n\t\troot, err := os.OpenRoot(dir)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to open root directory %s: %w\", dir, err)\n\t\t}\n\t\terr = filepath.WalkDir(dir, func(fpath string, d fs.DirEntry, err error) error {\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"unable to traverse into path: %s\", fpath)\n\t\t\t}\n\t\t\tif d.IsDir() {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tif !strings.HasSuffix(strings.ToLower(d.Name()), \".pem\") {\n\t\t\t\treturn nil\n\t\t\t}\n\n\t\t\trel, err := filepath.Rel(dir, fpath)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"unable to get relative path for %s: %w\", fpath, err)\n\t\t\t}\n\n\t\t\tbundle, err := root.ReadFile(rel)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcert, err := tlsCertFromCertAndKeyPEMBundle(bundle)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"%s: %w\", fpath, err)\n\t\t\t}\n\n\t\t\tcerts = append(certs, Certificate{Certificate: cert})\n\t\t\treturn nil\n\t\t})\n\t\t_ = root.Close()\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"walking certificates directory %s: %w\", dir, err)\n\t\t}\n\t}\n\treturn certs, nil\n}\n\nfunc tlsCertFromCertAndKeyPEMBundle(bundle []byte) (tls.Certificate, error) {\n\tcertBuilder, keyBuilder := new(bytes.Buffer), new(bytes.Buffer)\n\tvar foundKey bool // use only the first key in the file\n\n\tfor {\n\t\t// Decode next block so we can see what type it is\n\t\tvar derBlock *pem.Block\n\t\tderBlock, bundle = pem.Decode(bundle)\n\t\tif derBlock == nil {\n\t\t\tbreak\n\t\t}\n\n\t\tif derBlock.Type == \"CERTIFICATE\" {\n\t\t\t// Re-encode certificate as PEM, appending to certificate chain\n\t\t\tif err := pem.Encode(certBuilder, derBlock); err != nil {\n\t\t\t\treturn tls.Certificate{}, err\n\t\t\t}\n\t\t} else 
if derBlock.Type == \"EC PARAMETERS\" {\n\t\t\t// EC keys generated from openssl can be composed of two blocks:\n\t\t\t// parameters and key (parameter block should come first)\n\t\t\tif !foundKey {\n\t\t\t\t// Encode parameters\n\t\t\t\tif err := pem.Encode(keyBuilder, derBlock); err != nil {\n\t\t\t\t\treturn tls.Certificate{}, err\n\t\t\t\t}\n\n\t\t\t\t// Key must immediately follow\n\t\t\t\tderBlock, bundle = pem.Decode(bundle)\n\t\t\t\tif derBlock == nil || derBlock.Type != \"EC PRIVATE KEY\" {\n\t\t\t\t\treturn tls.Certificate{}, fmt.Errorf(\"expected elliptic private key to immediately follow EC parameters\")\n\t\t\t\t}\n\t\t\t\tif err := pem.Encode(keyBuilder, derBlock); err != nil {\n\t\t\t\t\treturn tls.Certificate{}, err\n\t\t\t\t}\n\t\t\t\tfoundKey = true\n\t\t\t}\n\t\t} else if derBlock.Type == \"PRIVATE KEY\" || strings.HasSuffix(derBlock.Type, \" PRIVATE KEY\") {\n\t\t\t// any other private key type (RSA, PKCS#8, etc.)\n\t\t\tif !foundKey {\n\t\t\t\tif err := pem.Encode(keyBuilder, derBlock); err != nil {\n\t\t\t\t\treturn tls.Certificate{}, err\n\t\t\t\t}\n\t\t\t\tfoundKey = true\n\t\t\t}\n\t\t} else {\n\t\t\treturn tls.Certificate{}, fmt.Errorf(\"unrecognized PEM block type: %s\", derBlock.Type)\n\t\t}\n\t}\n\n\tcertPEMBytes, keyPEMBytes := certBuilder.Bytes(), keyBuilder.Bytes()\n\tif len(certPEMBytes) == 0 {\n\t\treturn tls.Certificate{}, fmt.Errorf(\"failed to parse PEM data\")\n\t}\n\tif len(keyPEMBytes) == 0 {\n\t\treturn tls.Certificate{}, fmt.Errorf(\"no private key block found\")\n\t}\n\n\t// if the start of the key file looks like an encrypted private key,\n\t// reject it with a helpful error message; a substring match is needed\n\t// because the \"-----BEGIN \" PEM header precedes the block type\n\tif strings.Contains(string(keyPEMBytes[:40]), \"ENCRYPTED\") {\n\t\treturn tls.Certificate{}, fmt.Errorf(\"encrypted private keys are not supported; please decrypt the key first\")\n\t}\n\n\tcert, err := tls.X509KeyPair(certPEMBytes, keyPEMBytes)\n\tif err != nil {\n\t\treturn tls.Certificate{}, fmt.Errorf(\"making X509 key pair: %v\", err)\n\t}\n\n\treturn cert, nil\n}\n\nvar 
(\n\t_ CertificateLoader = (FolderLoader)(nil)\n\t_ caddy.Provisioner = (FolderLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/internalissuer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/smallstep/certificates/authority/provisioner\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(InternalIssuer{})\n}\n\n// InternalIssuer is a certificate issuer that generates\n// certificates internally using a locally-configured\n// CA which can be customized using the `pki` app.\ntype InternalIssuer struct {\n\t// The ID of the CA to use for signing. The default\n\t// CA ID is \"local\". The CA can be configured with the\n\t// `pki` app.\n\tCA string `json:\"ca,omitempty\"`\n\n\t// The validity period of certificates.\n\tLifetime caddy.Duration `json:\"lifetime,omitempty\"`\n\n\t// If true, the root will be the issuer instead of\n\t// the intermediate. 
This is NOT recommended and should\n\t// only be used when devices/clients do not properly\n\t// validate certificate chains.\n\tSignWithRoot bool `json:\"sign_with_root,omitempty\"`\n\n\tca     *caddypki.CA\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (InternalIssuer) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.issuance.internal\",\n\t\tNew: func() caddy.Module { return new(InternalIssuer) },\n\t}\n}\n\n// Provision sets up the issuer.\nfunc (iss *InternalIssuer) Provision(ctx caddy.Context) error {\n\tiss.logger = ctx.Logger()\n\n\t// set some defaults\n\tif iss.CA == \"\" {\n\t\tiss.CA = caddypki.DefaultCAID\n\t}\n\n\t// get a reference to the configured CA\n\tappModule, err := ctx.App(\"pki\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tpkiApp := appModule.(*caddypki.PKI)\n\tca, err := pkiApp.GetCA(ctx, iss.CA)\n\tif err != nil {\n\t\treturn err\n\t}\n\tiss.ca = ca\n\n\t// set any other default values\n\tif iss.Lifetime == 0 {\n\t\tiss.Lifetime = caddy.Duration(defaultInternalCertLifetime)\n\t}\n\n\treturn nil\n}\n\n// IssuerKey returns the unique issuer key for the\n// configured CA endpoint.\nfunc (iss InternalIssuer) IssuerKey() string {\n\treturn iss.ca.ID\n}\n\n// Issue issues a certificate to satisfy the CSR.\nfunc (iss InternalIssuer) Issue(ctx context.Context, csr *x509.CertificateRequest) (*certmagic.IssuedCertificate, error) {\n\t// prepare the signing authority\n\tauthCfg := caddypki.AuthorityConfig{\n\t\tSignWithRoot: iss.SignWithRoot,\n\t}\n\tauth, err := iss.ca.NewAuthority(authCfg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// get the cert (public key) that will be used for signing\n\tvar issuerCert *x509.Certificate\n\tif iss.SignWithRoot {\n\t\tissuerCert = iss.ca.RootCertificate()\n\t} else {\n\t\tchain := iss.ca.IntermediateCertificateChain()\n\t\tissuerCert = chain[0]\n\t}\n\n\t// ensure issued certificate does not expire later than its issuer\n\tlifetime := 
time.Duration(iss.Lifetime)\n\tif time.Now().Add(lifetime).After(issuerCert.NotAfter) {\n\t\tlifetime = time.Until(issuerCert.NotAfter)\n\t\tiss.logger.Warn(\"cert lifetime would exceed issuer NotAfter, clamping lifetime\",\n\t\t\tzap.Duration(\"orig_lifetime\", time.Duration(iss.Lifetime)),\n\t\t\tzap.Duration(\"lifetime\", lifetime),\n\t\t\tzap.Time(\"not_after\", issuerCert.NotAfter),\n\t\t)\n\t}\n\n\tcertChain, err := auth.SignWithContext(ctx, csr, provisioner.SignOptions{}, customCertLifetime(caddy.Duration(lifetime)))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar buf bytes.Buffer\n\tfor _, cert := range certChain {\n\t\terr := pem.Encode(&buf, &pem.Block{Type: \"CERTIFICATE\", Bytes: cert.Raw})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\treturn &certmagic.IssuedCertificate{\n\t\tCertificate: buf.Bytes(),\n\t}, nil\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into iss.\n//\n//\t... internal {\n//\t    ca       <name>\n//\t    lifetime <duration>\n//\t    sign_with_root\n//\t}\nfunc (iss *InternalIssuer) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume issuer name\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"ca\":\n\t\t\tif !d.AllArgs(&iss.CA) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"lifetime\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdur, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiss.Lifetime = caddy.Duration(dur)\n\n\t\tcase \"sign_with_root\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tiss.SignWithRoot = true\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// customCertLifetime allows us to customize certificates that are issued\n// by Smallstep libs, particularly the NotBefore & NotAfter dates.\ntype customCertLifetime time.Duration\n\nfunc (d customCertLifetime) Modify(cert *x509.Certificate, _ 
provisioner.SignOptions) error {\n\tcert.NotBefore = time.Now()\n\tcert.NotAfter = cert.NotBefore.Add(time.Duration(d))\n\treturn nil\n}\n\nconst defaultInternalCertLifetime = 12 * time.Hour\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner               = (*InternalIssuer)(nil)\n\t_ certmagic.Issuer                = (*InternalIssuer)(nil)\n\t_ provisioner.CertificateModifier = (*customCertLifetime)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/internalissuer_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/rand\"\n\t\"crypto/x509\"\n\t\"crypto/x509/pkix\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddypki\"\n\t\"go.uber.org/zap\"\n\n\t\"go.step.sm/crypto/keyutil\"\n\t\"go.step.sm/crypto/pemutil\"\n)\n\nfunc TestInternalIssuer_Issue(t *testing.T) {\n\trootSigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Creating root signer failed: %v\", err)\n\t}\n\n\ttmpl := &x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: \"test-root\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 3,\n\t\tNotAfter:   time.Now().Add(7 * 24 * time.Hour),\n\t\tNotBefore:  time.Now().Add(-7 * 24 * time.Hour),\n\t}\n\trootBytes, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, rootSigner.Public(), rootSigner)\n\tif err != nil {\n\t\tt.Fatalf(\"Creating root certificate failed: %v\", err)\n\t}\n\n\troot, err := x509.ParseCertificate(rootBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing root certificate failed: %v\", err)\n\t}\n\n\tfirstIntermediateSigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Creating intermediate signer failed: %v\", err)\n\t}\n\n\tfirstIntermediateBytes, err := x509.CreateCertificate(rand.Reader, 
&x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: \"test-first-intermediate\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 2,\n\t\tNotAfter:   time.Now().Add(24 * time.Hour),\n\t\tNotBefore:  time.Now().Add(-24 * time.Hour),\n\t}, root, firstIntermediateSigner.Public(), rootSigner)\n\tif err != nil {\n\t\tt.Fatalf(\"Creating intermediate certificate failed: %v\", err)\n\t}\n\n\tfirstIntermediate, err := x509.ParseCertificate(firstIntermediateBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing intermediate certificate failed: %v\", err)\n\t}\n\n\tsecondIntermediateSigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Creating second intermediate signer failed: %v\", err)\n\t}\n\n\tsecondIntermediateBytes, err := x509.CreateCertificate(rand.Reader, &x509.Certificate{\n\t\tSubject:    pkix.Name{CommonName: \"test-second-intermediate\"},\n\t\tIsCA:       true,\n\t\tMaxPathLen: 2,\n\t\tNotAfter:   time.Now().Add(24 * time.Hour),\n\t\tNotBefore:  time.Now().Add(-24 * time.Hour),\n\t}, firstIntermediate, secondIntermediateSigner.Public(), firstIntermediateSigner)\n\tif err != nil {\n\t\tt.Fatalf(\"Creating second intermediate certificate failed: %v\", err)\n\t}\n\n\tsecondIntermediate, err := x509.ParseCertificate(secondIntermediateBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Parsing second intermediate certificate failed: %v\", err)\n\t}\n\n\tdir := t.TempDir()\n\tstorageDir := filepath.Join(dir, \"certmagic\")\n\trootCertFile := filepath.Join(dir, \"root.pem\")\n\tif _, err = pemutil.Serialize(root, pemutil.WithFilename(rootCertFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing root certificate: %v\", err)\n\t}\n\tintermediateCertFile := filepath.Join(dir, \"intermediate.pem\")\n\tif _, err = pemutil.Serialize(firstIntermediate, pemutil.WithFilename(intermediateCertFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing intermediate certificate: %v\", err)\n\t}\n\tintermediateKeyFile := filepath.Join(dir, \"intermediate.key\")\n\tif _, 
err = pemutil.Serialize(firstIntermediateSigner, pemutil.WithFilename(intermediateKeyFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing intermediate key: %v\", err)\n\t}\n\n\tvar intermediateChainContents []byte\n\tintermediateChain := []*x509.Certificate{secondIntermediate, firstIntermediate}\n\tfor _, cert := range intermediateChain {\n\t\tb, err := pemutil.Serialize(cert)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed serializing intermediate certificate: %v\", err)\n\t\t}\n\t\tintermediateChainContents = append(intermediateChainContents, pem.EncodeToMemory(b)...)\n\t}\n\tintermediateChainFile := filepath.Join(dir, \"intermediates.pem\")\n\tif err := os.WriteFile(intermediateChainFile, intermediateChainContents, 0644); err != nil {\n\t\tt.Fatalf(\"Failed writing intermediate chain: %v\", err)\n\t}\n\tintermediateChainKeyFile := filepath.Join(dir, \"intermediates.key\")\n\tif _, err = pemutil.Serialize(secondIntermediateSigner, pemutil.WithFilename(intermediateChainKeyFile)); err != nil {\n\t\tt.Fatalf(\"Failed serializing intermediate key: %v\", err)\n\t}\n\n\tsigner, err := keyutil.GenerateDefaultSigner()\n\tif err != nil {\n\t\tt.Fatalf(\"Failed creating signer: %v\", err)\n\t}\n\n\tcsrBytes, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{\n\t\tSubject: pkix.Name{CommonName: \"test\"},\n\t}, signer)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed creating CSR: %v\", err)\n\t}\n\n\tcsr, err := x509.ParseCertificateRequest(csrBytes)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed parsing CSR: %v\", err)\n\t}\n\n\tt.Run(\"generated-with-defaults\", func(t *testing.T) {\n\t\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: t.Context()})\n\t\tt.Cleanup(cancel)\n\t\tlogger := zap.NewNop()\n\n\t\tca := &caddypki.CA{\n\t\t\tStorageRaw: []byte(fmt.Sprintf(`{\"module\": \"file_system\", \"root\": %q}`, storageDir)),\n\t\t}\n\t\tif err := ca.Provision(caddyCtx, \"local-test-generated\", logger); err != nil {\n\t\t\tt.Fatalf(\"Failed 
provisioning CA: %v\", err)\n\t\t}\n\n\t\tiss := InternalIssuer{\n\t\t\tSignWithRoot: false,\n\t\t\tca:           ca,\n\t\t\tlogger:       logger,\n\t\t}\n\n\t\tc, err := iss.Issue(t.Context(), csr)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed issuing certificate: %v\", err)\n\t\t}\n\n\t\tchain, err := pemutil.ParseCertificateBundle(c.Certificate)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed issuing certificate: %v\", err)\n\t\t}\n\t\tif len(chain) != 2 {\n\t\t\tt.Errorf(\"Expected 2 certificates in chain; got %d\", len(chain))\n\t\t}\n\t})\n\n\tt.Run(\"single-intermediate-from-disk\", func(t *testing.T) {\n\t\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: t.Context()})\n\t\tt.Cleanup(cancel)\n\t\tlogger := zap.NewNop()\n\n\t\tca := &caddypki.CA{\n\t\t\tRoot: &caddypki.KeyPair{\n\t\t\t\tCertificate: rootCertFile,\n\t\t\t},\n\t\t\tIntermediate: &caddypki.KeyPair{\n\t\t\t\tCertificate: intermediateCertFile,\n\t\t\t\tPrivateKey:  intermediateKeyFile,\n\t\t\t},\n\t\t\tStorageRaw: []byte(fmt.Sprintf(`{\"module\": \"file_system\", \"root\": %q}`, storageDir)),\n\t\t}\n\n\t\tif err := ca.Provision(caddyCtx, \"local-test-single-intermediate\", logger); err != nil {\n\t\t\tt.Fatalf(\"Failed provisioning CA: %v\", err)\n\t\t}\n\n\t\tiss := InternalIssuer{\n\t\t\tca:           ca,\n\t\t\tSignWithRoot: false,\n\t\t\tlogger:       logger,\n\t\t}\n\n\t\tc, err := iss.Issue(t.Context(), csr)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed issuing certificate: %v\", err)\n\t\t}\n\n\t\tchain, err := pemutil.ParseCertificateBundle(c.Certificate)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed issuing certificate: %v\", err)\n\t\t}\n\t\tif len(chain) != 2 {\n\t\t\tt.Errorf(\"Expected 2 certificates in chain; got %d\", len(chain))\n\t\t}\n\t})\n\n\tt.Run(\"multiple-intermediates-from-disk\", func(t *testing.T) {\n\t\tcaddyCtx, cancel := caddy.NewContext(caddy.Context{Context: t.Context()})\n\t\tt.Cleanup(cancel)\n\t\tlogger := zap.NewNop()\n\n\t\tca := 
&caddypki.CA{\n\t\t\tRoot: &caddypki.KeyPair{\n\t\t\t\tCertificate: rootCertFile,\n\t\t\t},\n\t\t\tIntermediate: &caddypki.KeyPair{\n\t\t\t\tCertificate: intermediateChainFile,\n\t\t\t\tPrivateKey:  intermediateChainKeyFile,\n\t\t\t},\n\t\t\tStorageRaw: []byte(fmt.Sprintf(`{\"module\": \"file_system\", \"root\": %q}`, storageDir)),\n\t\t}\n\n\t\tif err := ca.Provision(caddyCtx, \"local-test\", zap.NewNop()); err != nil {\n\t\t\tt.Fatalf(\"Failed provisioning CA: %v\", err)\n\t\t}\n\n\t\tiss := InternalIssuer{\n\t\t\tca:           ca,\n\t\t\tSignWithRoot: false,\n\t\t\tlogger:       logger,\n\t\t}\n\n\t\tc, err := iss.Issue(t.Context(), csr)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed issuing certificate: %v\", err)\n\t\t}\n\n\t\tchain, err := pemutil.ParseCertificateBundle(c.Certificate)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed issuing certificate: %v\", err)\n\t\t}\n\t\tif len(chain) != 3 {\n\t\t\tt.Errorf(\"Expected 3 certificates in chain; got %d\", len(chain))\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "modules/caddytls/leaffileloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(LeafFileLoader{})\n}\n\n// LeafFileLoader loads leaf certificates from disk.\ntype LeafFileLoader struct {\n\tFiles []string `json:\"files,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (LeafFileLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.leaf_cert_loader.file\",\n\t\tNew: func() caddy.Module { return new(LeafFileLoader) },\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (fl *LeafFileLoader) Provision(ctx caddy.Context) error {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, path := range fl.Files {\n\t\tfl.Files[k] = repl.ReplaceKnown(path, \"\")\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (fl *LeafFileLoader) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.NextArg()\n\tfl.Files = append(fl.Files, d.RemainingArgs()...)\n\treturn nil\n}\n\n// LoadLeafCertificates returns the certificates to be loaded by fl.\nfunc (fl LeafFileLoader) LoadLeafCertificates() ([]*x509.Certificate, error) 
{\n\tcertificates := make([]*x509.Certificate, 0, len(fl.Files))\n\tfor _, path := range fl.Files {\n\t\tders, err := convertPEMFilesToDERBytes(path)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcerts, err := x509.ParseCertificates(ders)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcertificates = append(certificates, certs...)\n\t}\n\treturn certificates, nil\n}\n\nfunc convertPEMFilesToDERBytes(filename string) ([]byte, error) {\n\tcertDataPEM, err := os.ReadFile(filename)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar ders []byte\n\t// while block is not nil, we have more certificates in the file\n\tfor block, rest := pem.Decode(certDataPEM); block != nil; block, rest = pem.Decode(rest) {\n\t\tif block.Type != \"CERTIFICATE\" {\n\t\t\treturn nil, fmt.Errorf(\"no CERTIFICATE pem block found in %s\", filename)\n\t\t}\n\t\tders = append(\n\t\t\tders,\n\t\t\tblock.Bytes...,\n\t\t)\n\t}\n\t// if we decoded nothing, return an error\n\tif len(ders) == 0 {\n\t\treturn nil, fmt.Errorf(\"no CERTIFICATE pem block found in %s\", filename)\n\t}\n\treturn ders, nil\n}\n\n// Interface guard\nvar (\n\t_ LeafCertificateLoader = (*LeafFileLoader)(nil)\n\t_ caddy.Provisioner     = (*LeafFileLoader)(nil)\n\t_ caddyfile.Unmarshaler = (*LeafFileLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/leaffileloader_test.go",
    "content": "package caddytls\n\nimport (\n\t\"context\"\n\t\"encoding/pem\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestLeafFileLoader(t *testing.T) {\n\tfl := LeafFileLoader{Files: []string{\"../../caddytest/leafcert.pem\"}}\n\tfl.Provision(caddy.Context{Context: context.Background()})\n\n\tout, err := fl.LoadLeafCertificates()\n\tif err != nil {\n\t\tt.Errorf(\"Leaf certs file loading test failed: %v\", err)\n\t}\n\tif len(out) != 1 {\n\t\tt.Errorf(\"Error loading leaf cert in memory struct\")\n\t\treturn\n\t}\n\tpemBytes := pem.EncodeToMemory(&pem.Block{Type: \"CERTIFICATE\", Bytes: out[0].Raw})\n\n\tpemFileBytes, err := os.ReadFile(\"../../caddytest/leafcert.pem\")\n\tif err != nil {\n\t\tt.Errorf(\"Unable to read the example certificate from the file\")\n\t}\n\n\t// Remove \\r because Windows line endings are \\r\\n.\n\tpemFileString := strings.ReplaceAll(string(pemFileBytes), \"\\r\\n\", \"\\n\")\n\n\tif string(pemBytes) != pemFileString {\n\t\tt.Errorf(\"Leaf Certificate File Loader: Failed to load the correct certificate\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddytls/leaffolderloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(LeafFolderLoader{})\n}\n\n// LeafFolderLoader loads certificates from disk\n// by recursively walking the specified directories, looking for PEM\n// files which contain a certificate.\ntype LeafFolderLoader struct {\n\tFolders []string `json:\"folders,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (LeafFolderLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.leaf_cert_loader.folder\",\n\t\tNew: func() caddy.Module { return new(LeafFolderLoader) },\n\t}\n}\n\n// Provision implements caddy.Provisioner.\nfunc (fl *LeafFolderLoader) Provision(ctx caddy.Context) error {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, path := range fl.Folders {\n\t\tfl.Folders[k] = repl.ReplaceKnown(path, \"\")\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (fl *LeafFolderLoader) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.NextArg()\n\tfl.Folders = append(fl.Folders, d.RemainingArgs()...)\n\treturn nil\n}\n\n// 
LoadLeafCertificates loads all the leaf certificates in the directories\n// listed in fl from all files ending with .pem.\nfunc (fl LeafFolderLoader) LoadLeafCertificates() ([]*x509.Certificate, error) {\n\tvar certs []*x509.Certificate\n\tfor _, dir := range fl.Folders {\n\t\terr := filepath.Walk(dir, func(fpath string, info os.FileInfo, err error) error {\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"unable to traverse into path: %s\", fpath)\n\t\t\t}\n\t\t\tif info.IsDir() {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tif !strings.HasSuffix(strings.ToLower(info.Name()), \".pem\") {\n\t\t\t\treturn nil\n\t\t\t}\n\n\t\t\tcertData, err := convertPEMFilesToDERBytes(fpath)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcert, err := x509.ParseCertificate(certData)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"%s: %w\", fpath, err)\n\t\t\t}\n\n\t\t\tcerts = append(certs, cert)\n\n\t\t\treturn nil\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn certs, nil\n}\n\nvar (\n\t_ LeafCertificateLoader = (*LeafFolderLoader)(nil)\n\t_ caddy.Provisioner     = (*LeafFolderLoader)(nil)\n\t_ caddyfile.Unmarshaler = (*LeafFolderLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/leaffolderloader_test.go",
    "content": "package caddytls\n\nimport (\n\t\"context\"\n\t\"encoding/pem\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestLeafFolderLoader(t *testing.T) {\n\tfl := LeafFolderLoader{Folders: []string{\"../../caddytest\"}}\n\tfl.Provision(caddy.Context{Context: context.Background()})\n\n\tout, err := fl.LoadLeafCertificates()\n\tif err != nil {\n\t\tt.Errorf(\"Leaf certs folder loading test failed: %v\", err)\n\t}\n\tif len(out) != 1 {\n\t\tt.Errorf(\"Error loading leaf cert in memory struct\")\n\t\treturn\n\t}\n\tpemBytes := pem.EncodeToMemory(&pem.Block{Type: \"CERTIFICATE\", Bytes: out[0].Raw})\n\tpemFileBytes, err := os.ReadFile(\"../../caddytest/leafcert.pem\")\n\tif err != nil {\n\t\tt.Errorf(\"Unable to read the example certificate from the file\")\n\t}\n\n\t// Remove /r because windows.\n\tpemFileString := strings.ReplaceAll(string(pemFileBytes), \"\\r\\n\", \"\\n\")\n\n\tif string(pemBytes) != pemFileString {\n\t\tt.Errorf(\"Leaf Certificate Folder Loader: Failed to load the correct certificate\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddytls/leafpemloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/x509\"\n\t\"fmt\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(LeafPEMLoader{})\n}\n\n// LeafPEMLoader loads leaf certificates by\n// decoding their PEM blocks directly. This has the advantage\n// of not needing to store them on disk at all.\ntype LeafPEMLoader struct {\n\tCertificates []string `json:\"certificates,omitempty\"`\n}\n\n// Provision implements caddy.Provisioner.\nfunc (pl *LeafPEMLoader) Provision(ctx caddy.Context) error {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor i, cert := range pl.Certificates {\n\t\tpl.Certificates[i] = repl.ReplaceKnown(cert, \"\")\n\t}\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (LeafPEMLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.leaf_cert_loader.pem\",\n\t\tNew: func() caddy.Module { return new(LeafPEMLoader) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (fl *LeafPEMLoader) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.NextArg()\n\tfl.Certificates = append(fl.Certificates, d.RemainingArgs()...)\n\treturn nil\n}\n\n// LoadLeafCertificates returns the certificates 
contained in pl.\nfunc (pl LeafPEMLoader) LoadLeafCertificates() ([]*x509.Certificate, error) {\n\tcerts := make([]*x509.Certificate, 0, len(pl.Certificates))\n\tfor i, cert := range pl.Certificates {\n\t\tderBytes, err := convertPEMToDER([]byte(cert))\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"PEM leaf certificate loader, cert %d: %v\", i, err)\n\t\t}\n\t\tcert, err := x509.ParseCertificate(derBytes)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"PEM cert %d: %v\", i, err)\n\t\t}\n\t\tcerts = append(certs, cert)\n\t}\n\treturn certs, nil\n}\n\n// Interface guards\nvar (\n\t_ LeafCertificateLoader = (*LeafPEMLoader)(nil)\n\t_ caddy.Provisioner     = (*LeafPEMLoader)(nil)\n\t_ caddyfile.Unmarshaler = (*LeafPEMLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/leafpemloader_test.go",
    "content": "package caddytls\n\nimport (\n\t\"context\"\n\t\"encoding/pem\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestLeafPEMLoader(t *testing.T) {\n\tpl := LeafPEMLoader{Certificates: []string{`\n-----BEGIN CERTIFICATE-----\nMIICUTCCAfugAwIBAgIBADANBgkqhkiG9w0BAQQFADBXMQswCQYDVQQGEwJDTjEL\nMAkGA1UECBMCUE4xCzAJBgNVBAcTAkNOMQswCQYDVQQKEwJPTjELMAkGA1UECxMC\nVU4xFDASBgNVBAMTC0hlcm9uZyBZYW5nMB4XDTA1MDcxNTIxMTk0N1oXDTA1MDgx\nNDIxMTk0N1owVzELMAkGA1UEBhMCQ04xCzAJBgNVBAgTAlBOMQswCQYDVQQHEwJD\nTjELMAkGA1UEChMCT04xCzAJBgNVBAsTAlVOMRQwEgYDVQQDEwtIZXJvbmcgWWFu\nZzBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQCp5hnG7ogBhtlynpOS21cBewKE/B7j\nV14qeyslnr26xZUsSVko36ZnhiaO/zbMOoRcKK9vEcgMtcLFuQTWDl3RAgMBAAGj\ngbEwga4wHQYDVR0OBBYEFFXI70krXeQDxZgbaCQoR4jUDncEMH8GA1UdIwR4MHaA\nFFXI70krXeQDxZgbaCQoR4jUDncEoVukWTBXMQswCQYDVQQGEwJDTjELMAkGA1UE\nCBMCUE4xCzAJBgNVBAcTAkNOMQswCQYDVQQKEwJPTjELMAkGA1UECxMCVU4xFDAS\nBgNVBAMTC0hlcm9uZyBZYW5nggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEE\nBQADQQA/ugzBrjjK9jcWnDVfGHlk3icNRq0oV7Ri32z/+HQX67aRfgZu7KWdI+Ju\nWm7DCfrPNGVwFWUQOmsPue9rZBgO\n-----END CERTIFICATE-----\n`}}\n\tpl.Provision(caddy.Context{Context: context.Background()})\n\n\tout, err := pl.LoadLeafCertificates()\n\tif err != nil {\n\t\tt.Errorf(\"Leaf certs pem loading test failed: %v\", err)\n\t}\n\tif len(out) != 1 {\n\t\tt.Errorf(\"Error loading leaf cert in memory struct\")\n\t\treturn\n\t}\n\tpemBytes := pem.EncodeToMemory(&pem.Block{Type: \"CERTIFICATE\", Bytes: out[0].Raw})\n\n\tpemFileBytes, err := os.ReadFile(\"../../caddytest/leafcert.pem\")\n\tif err != nil {\n\t\tt.Errorf(\"Unable to read the example certificate from the file\")\n\t}\n\n\t// Remove /r because windows.\n\tpemFileString := strings.ReplaceAll(string(pemFileBytes), \"\\r\\n\", \"\\n\")\n\n\tif string(pemBytes) != pemFileString {\n\t\tt.Errorf(\"Leaf Certificate Folder Loader: Failed to load the correct certificate\")\n\t}\n}\n"
  },
  {
    "path": "modules/caddytls/leafstorageloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(LeafStorageLoader{})\n}\n\n// LeafStorageLoader loads leaf certificates from the\n// globally configured storage module.\ntype LeafStorageLoader struct {\n\t// A list of certificate file names to be loaded from storage.\n\tCertificates []string `json:\"certificates,omitempty\"`\n\n\t// The storage module where the trusted leaf certificates are stored. 
Absent\n\t// explicit storage implies the use of Caddy default storage.\n\tStorageRaw json.RawMessage `json:\"storage,omitempty\" caddy:\"namespace=caddy.storage inline_key=module\"`\n\n\t// Reference to the globally configured storage module.\n\tstorage certmagic.Storage\n\n\tctx caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (LeafStorageLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.leaf_cert_loader.storage\",\n\t\tNew: func() caddy.Module { return new(LeafStorageLoader) },\n\t}\n}\n\n// Provision loads the storage module for sl.\nfunc (sl *LeafStorageLoader) Provision(ctx caddy.Context) error {\n\tif sl.StorageRaw != nil {\n\t\tval, err := ctx.LoadModule(sl, \"StorageRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading storage module: %v\", err)\n\t\t}\n\t\tcmStorage, err := val.(caddy.StorageConverter).CertMagicStorage()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"creating storage configuration: %v\", err)\n\t\t}\n\t\tsl.storage = cmStorage\n\t}\n\tif sl.storage == nil {\n\t\tsl.storage = ctx.Storage()\n\t}\n\tsl.ctx = ctx\n\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, path := range sl.Certificates {\n\t\tsl.Certificates[k] = repl.ReplaceKnown(path, \"\")\n\t}\n\treturn nil\n}\n\n// LoadLeafCertificates returns the certificates to be loaded by sl.\nfunc (sl LeafStorageLoader) LoadLeafCertificates() ([]*x509.Certificate, error) {\n\tcertificates := make([]*x509.Certificate, 0, len(sl.Certificates))\n\tfor _, path := range sl.Certificates {\n\t\tcertData, err := sl.storage.Load(sl.ctx, path)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tders, err := convertPEMToDER(certData)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcerts, err := x509.ParseCertificates(ders)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcertificates = append(certificates, certs...)\n\t}\n\treturn 
certificates, nil\n}\n\nfunc convertPEMToDER(pemData []byte) ([]byte, error) {\n\tvar ders []byte\n\t// while block is not nil, we have more certificates in the file\n\tfor block, rest := pem.Decode(pemData); block != nil; block, rest = pem.Decode(rest) {\n\t\tif block.Type != \"CERTIFICATE\" {\n\t\t\treturn nil, fmt.Errorf(\"unexpected pem block type '%s'; expected CERTIFICATE\", block.Type)\n\t\t}\n\t\tders = append(ders, block.Bytes...)\n\t}\n\t// if we decoded nothing, return an error\n\tif len(ders) == 0 {\n\t\treturn nil, fmt.Errorf(\"no CERTIFICATE pem block found in the given pem data\")\n\t}\n\treturn ders, nil\n}\n\n// Interface guards\nvar (\n\t_ LeafCertificateLoader = (*LeafStorageLoader)(nil)\n\t_ caddy.Provisioner     = (*LeafStorageLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/matchers.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/netip\"\n\t\"regexp\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(MatchServerName{})\n\tcaddy.RegisterModule(MatchServerNameRE{})\n\tcaddy.RegisterModule(MatchRemoteIP{})\n\tcaddy.RegisterModule(MatchLocalIP{})\n}\n\n// MatchServerName matches based on SNI. 
Names in\n// this list may use left-most-label wildcards,\n// similar to wildcard certificates.\ntype MatchServerName []string\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchServerName) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.handshake_match.sni\",\n\t\tNew: func() caddy.Module { return new(MatchServerName) },\n\t}\n}\n\n// Match matches hello based on SNI.\nfunc (m MatchServerName) Match(hello *tls.ClientHelloInfo) bool {\n\tvar repl *caddy.Replacer\n\t// caddytls.TestServerNameMatcher calls this function without any context\n\tif ctx := hello.Context(); ctx != nil {\n\t\t// In some situations the existing context may have no replacer\n\t\tif replAny := ctx.Value(caddy.ReplacerCtxKey); replAny != nil {\n\t\t\trepl = replAny.(*caddy.Replacer)\n\t\t}\n\t}\n\n\tif repl == nil {\n\t\trepl = caddy.NewReplacer()\n\t}\n\n\tfor _, name := range m {\n\t\trs := repl.ReplaceAll(name, \"\")\n\t\tif certmagic.MatchWildcard(hello.ServerName, rs) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// UnmarshalCaddyfile sets up the MatchServerName from Caddyfile tokens. Syntax:\n//\n//\tsni <domains...>\nfunc (m *MatchServerName) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor d.Next() {\n\t\twrapper := d.Val()\n\n\t\t// At least one same-line option must be provided\n\t\tif d.CountRemainingArgs() == 0 {\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\t*m = append(*m, d.RemainingArgs()...)\n\n\t\t// No blocks are supported\n\t\tif d.NextBlock(d.Nesting()) {\n\t\t\treturn d.Errf(\"malformed TLS handshake matcher '%s': blocks are not supported\", wrapper)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// MatchRegexp is an embeddable type for matching\n// using regular expressions. It adds placeholders\n// to the request's replacer. 
In fact, it is a copy of\n// caddyhttp.MatchRegexp with a local replacer prefix\n// and placeholders support in a regular expression pattern.\ntype MatchRegexp struct {\n\t// A unique name for this regular expression. Optional,\n\t// but useful to prevent overwriting captures from other\n\t// regexp matchers.\n\tName string `json:\"name,omitempty\"`\n\n\t// The regular expression to evaluate, in RE2 syntax,\n\t// which is the same general syntax used by Go, Perl,\n\t// and Python. For details, see\n\t// [Go's regexp package](https://golang.org/pkg/regexp/).\n\t// Captures are accessible via placeholders. Unnamed\n\t// capture groups are exposed as their numeric, 1-based\n\t// index, while named capture groups are available by\n\t// the capture group name.\n\tPattern string `json:\"pattern\"`\n\n\tcompiled *regexp.Regexp\n}\n\n// Provision compiles the regular expression which may include placeholders.\nfunc (mre *MatchRegexp) Provision(caddy.Context) error {\n\trepl := caddy.NewReplacer()\n\tre, err := regexp.Compile(repl.ReplaceAll(mre.Pattern, \"\"))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"compiling matcher regexp %s: %v\", mre.Pattern, err)\n\t}\n\tmre.compiled = re\n\treturn nil\n}\n\n// Validate ensures mre is set up correctly.\nfunc (mre *MatchRegexp) Validate() error {\n\tif mre.Name != \"\" && !wordRE.MatchString(mre.Name) {\n\t\treturn fmt.Errorf(\"invalid regexp name (must contain only word characters): %s\", mre.Name)\n\t}\n\treturn nil\n}\n\n// Match returns true if input matches the compiled regular\n// expression in m. 
It sets values on the replacer repl\n// associated with capture groups, using the given scope\n// (namespace).\nfunc (mre *MatchRegexp) Match(input string, repl *caddy.Replacer) bool {\n\tmatches := mre.compiled.FindStringSubmatch(input)\n\tif matches == nil {\n\t\treturn false\n\t}\n\n\t// save all capture groups, first by index\n\tfor i, match := range matches {\n\t\tkeySuffix := \".\" + strconv.Itoa(i)\n\t\tif mre.Name != \"\" {\n\t\t\trepl.Set(regexpPlaceholderPrefix+\".\"+mre.Name+keySuffix, match)\n\t\t}\n\t\trepl.Set(regexpPlaceholderPrefix+keySuffix, match)\n\t}\n\n\t// then by name\n\tfor i, name := range mre.compiled.SubexpNames() {\n\t\t// skip the first element (the full match), and empty names\n\t\tif i == 0 || name == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tkeySuffix := \".\" + name\n\t\tif mre.Name != \"\" {\n\t\t\trepl.Set(regexpPlaceholderPrefix+\".\"+mre.Name+keySuffix, matches[i])\n\t\t}\n\t\trepl.Set(regexpPlaceholderPrefix+keySuffix, matches[i])\n\t}\n\n\treturn true\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (mre *MatchRegexp) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\t// iterate to merge multiple matchers into one\n\tfor d.Next() {\n\t\t// If this is the second iteration of the loop\n\t\t// then there's more than one *_regexp matcher,\n\t\t// and we would end up overwriting the old one\n\t\tif mre.Pattern != \"\" {\n\t\t\treturn d.Err(\"regular expression can only be used once per named matcher\")\n\t\t}\n\n\t\targs := d.RemainingArgs()\n\t\tswitch len(args) {\n\t\tcase 1:\n\t\t\tmre.Pattern = args[0]\n\t\tcase 2:\n\t\t\tmre.Name = args[0]\n\t\t\tmre.Pattern = args[1]\n\t\tdefault:\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\t// Default to the named matcher's name, if no regexp name is provided.\n\t\t// Note: it requires d.SetContext(caddyfile.MatcherNameCtxKey, value)\n\t\t// called before this unmarshalling, otherwise it wouldn't work.\n\t\tif mre.Name == \"\" {\n\t\t\tmre.Name = 
d.GetContextString(caddyfile.MatcherNameCtxKey)\n\t\t}\n\n\t\tif d.NextBlock(0) {\n\t\t\treturn d.Err(\"malformed regexp matcher: blocks are not supported\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// MatchServerNameRE matches based on SNI using a regular expression.\ntype MatchServerNameRE struct{ MatchRegexp }\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchServerNameRE) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.handshake_match.sni_regexp\",\n\t\tNew: func() caddy.Module { return new(MatchServerNameRE) },\n\t}\n}\n\n// Match matches hello based on SNI using a regular expression.\nfunc (m MatchServerNameRE) Match(hello *tls.ClientHelloInfo) bool {\n\t// Note: caddytls.TestServerNameMatcher calls this function without any context\n\tctx := hello.Context()\n\tif ctx == nil {\n\t\t// layer4.Connection implements GetContext() to pass its context here,\n\t\t// since hello.Context() returns nil\n\t\tif mayHaveContext, ok := hello.Conn.(interface{ GetContext() context.Context }); ok {\n\t\t\tctx = mayHaveContext.GetContext()\n\t\t}\n\t}\n\n\tvar repl *caddy.Replacer\n\tif ctx != nil {\n\t\t// In some situations the existing context may have no replacer\n\t\tif replAny := ctx.Value(caddy.ReplacerCtxKey); replAny != nil {\n\t\t\trepl = replAny.(*caddy.Replacer)\n\t\t}\n\t}\n\n\tif repl == nil {\n\t\trepl = caddy.NewReplacer()\n\t}\n\n\treturn m.MatchRegexp.Match(hello.ServerName, repl)\n}\n\n// MatchRemoteIP matches based on the remote IP of the\n// connection. 
Specific IPs or CIDR ranges can be specified.\n//\n// Note that IPs can sometimes be spoofed, so do not rely\n// on this as a replacement for actual authentication.\ntype MatchRemoteIP struct {\n\t// The IPs or CIDR ranges to match.\n\tRanges []string `json:\"ranges,omitempty\"`\n\n\t// The IPs or CIDR ranges to *NOT* match.\n\tNotRanges []string `json:\"not_ranges,omitempty\"`\n\n\tcidrs    []netip.Prefix\n\tnotCidrs []netip.Prefix\n\tlogger   *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchRemoteIP) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.handshake_match.remote_ip\",\n\t\tNew: func() caddy.Module { return new(MatchRemoteIP) },\n\t}\n}\n\n// Provision parses m's IP ranges, either from IP or CIDR expressions.\nfunc (m *MatchRemoteIP) Provision(ctx caddy.Context) error {\n\trepl := caddy.NewReplacer()\n\tm.logger = ctx.Logger()\n\tfor _, str := range m.Ranges {\n\t\trs := repl.ReplaceAll(str, \"\")\n\t\tcidrs, err := m.parseIPRange(rs)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tm.cidrs = append(m.cidrs, cidrs...)\n\t}\n\tfor _, str := range m.NotRanges {\n\t\trs := repl.ReplaceAll(str, \"\")\n\t\tcidrs, err := m.parseIPRange(rs)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tm.notCidrs = append(m.notCidrs, cidrs...)\n\t}\n\treturn nil\n}\n\n// Match matches hello based on the connection's remote IP.\nfunc (m MatchRemoteIP) Match(hello *tls.ClientHelloInfo) bool {\n\tremoteAddr := hello.Conn.RemoteAddr().String()\n\tipStr, _, err := net.SplitHostPort(remoteAddr)\n\tif err != nil {\n\t\tipStr = remoteAddr // weird; maybe no port?\n\t}\n\tipAddr, err := netip.ParseAddr(ipStr)\n\tif err != nil {\n\t\tif c := m.logger.Check(zapcore.ErrorLevel, \"invalid client IP address\"); c != nil {\n\t\t\tc.Write(zap.String(\"ip\", ipStr))\n\t\t}\n\t\treturn false\n\t}\n\treturn (len(m.cidrs) == 0 || m.matches(ipAddr, m.cidrs)) &&\n\t\t(len(m.notCidrs) == 0 || !m.matches(ipAddr, 
m.notCidrs))\n}\n\nfunc (MatchRemoteIP) parseIPRange(str string) ([]netip.Prefix, error) {\n\tvar cidrs []netip.Prefix\n\tif strings.Contains(str, \"/\") {\n\t\tipNet, err := netip.ParsePrefix(str)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"parsing CIDR expression: %v\", err)\n\t\t}\n\t\tcidrs = append(cidrs, ipNet)\n\t} else {\n\t\tipAddr, err := netip.ParseAddr(str)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid IP address: '%s': %v\", str, err)\n\t\t}\n\t\tip := netip.PrefixFrom(ipAddr, ipAddr.BitLen())\n\t\tcidrs = append(cidrs, ip)\n\t}\n\treturn cidrs, nil\n}\n\nfunc (MatchRemoteIP) matches(ip netip.Addr, ranges []netip.Prefix) bool {\n\treturn slices.ContainsFunc(ranges, func(prefix netip.Prefix) bool {\n\t\treturn prefix.Contains(ip)\n\t})\n}\n\n// UnmarshalCaddyfile sets up the MatchRemoteIP from Caddyfile tokens. Syntax:\n//\n//\tremote_ip <ranges...>\n//\n// Note: IPs and CIDRs prefixed with ! symbol are treated as not_ranges\nfunc (m *MatchRemoteIP) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor d.Next() {\n\t\twrapper := d.Val()\n\n\t\t// At least one same-line option must be provided\n\t\tif d.CountRemainingArgs() == 0 {\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\tfor d.NextArg() {\n\t\t\tval := d.Val()\n\t\t\tvar exclamation bool\n\t\t\tif len(val) > 1 && val[0] == '!' {\n\t\t\t\texclamation, val = true, val[1:]\n\t\t\t}\n\t\t\tranges := []string{val}\n\t\t\tif val == \"private_ranges\" {\n\t\t\t\tranges = internal.PrivateRangesCIDR()\n\t\t\t}\n\t\t\tif exclamation {\n\t\t\t\tm.NotRanges = append(m.NotRanges, ranges...)\n\t\t\t} else {\n\t\t\t\tm.Ranges = append(m.Ranges, ranges...)\n\t\t\t}\n\t\t}\n\n\t\t// No blocks are supported\n\t\tif d.NextBlock(d.Nesting()) {\n\t\t\treturn d.Errf(\"malformed TLS handshake matcher '%s': blocks are not supported\", wrapper)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// MatchLocalIP matches based on the IP address of the interface\n// receiving the connection. 
Specific IPs or CIDR ranges can be specified.\ntype MatchLocalIP struct {\n\t// The IPs or CIDR ranges to match.\n\tRanges []string `json:\"ranges,omitempty\"`\n\n\tcidrs  []netip.Prefix\n\tlogger *zap.Logger\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MatchLocalIP) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.handshake_match.local_ip\",\n\t\tNew: func() caddy.Module { return new(MatchLocalIP) },\n\t}\n}\n\n// Provision parses m's IP ranges, either from IP or CIDR expressions.\nfunc (m *MatchLocalIP) Provision(ctx caddy.Context) error {\n\trepl := caddy.NewReplacer()\n\tm.logger = ctx.Logger()\n\tfor _, str := range m.Ranges {\n\t\trs := repl.ReplaceAll(str, \"\")\n\t\tcidrs, err := m.parseIPRange(rs)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tm.cidrs = append(m.cidrs, cidrs...)\n\t}\n\treturn nil\n}\n\n// Match matches hello based on the connection's local IP.\nfunc (m MatchLocalIP) Match(hello *tls.ClientHelloInfo) bool {\n\tlocalAddr := hello.Conn.LocalAddr().String()\n\tipStr, _, err := net.SplitHostPort(localAddr)\n\tif err != nil {\n\t\tipStr = localAddr // weird; maybe no port?\n\t}\n\tipAddr, err := netip.ParseAddr(ipStr)\n\tif err != nil {\n\t\tif c := m.logger.Check(zapcore.ErrorLevel, \"invalid local IP address\"); c != nil {\n\t\t\tc.Write(zap.String(\"ip\", ipStr))\n\t\t}\n\t\treturn false\n\t}\n\treturn (len(m.cidrs) == 0 || m.matches(ipAddr, m.cidrs))\n}\n\nfunc (MatchLocalIP) parseIPRange(str string) ([]netip.Prefix, error) {\n\tvar cidrs []netip.Prefix\n\tif strings.Contains(str, \"/\") {\n\t\tipNet, err := netip.ParsePrefix(str)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"parsing CIDR expression: %v\", err)\n\t\t}\n\t\tcidrs = append(cidrs, ipNet)\n\t} else {\n\t\tipAddr, err := netip.ParseAddr(str)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid IP address: '%s': %v\", str, err)\n\t\t}\n\t\tip := netip.PrefixFrom(ipAddr, ipAddr.BitLen())\n\t\tcidrs = 
append(cidrs, ip)\n\t}\n\treturn cidrs, nil\n}\n\nfunc (MatchLocalIP) matches(ip netip.Addr, ranges []netip.Prefix) bool {\n\treturn slices.ContainsFunc(ranges, func(prefix netip.Prefix) bool {\n\t\treturn prefix.Contains(ip)\n\t})\n}\n\n// UnmarshalCaddyfile sets up the MatchLocalIP from Caddyfile tokens. Syntax:\n//\n//\tlocal_ip <ranges...>\nfunc (m *MatchLocalIP) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor d.Next() {\n\t\twrapper := d.Val()\n\n\t\t// At least one same-line option must be provided\n\t\tif d.CountRemainingArgs() == 0 {\n\t\t\treturn d.ArgErr()\n\t\t}\n\n\t\tfor d.NextArg() {\n\t\t\tval := d.Val()\n\t\t\tif val == \"private_ranges\" {\n\t\t\t\tm.Ranges = append(m.Ranges, internal.PrivateRangesCIDR()...)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tm.Ranges = append(m.Ranges, val)\n\t\t}\n\n\t\t// No blocks are supported\n\t\tif d.NextBlock(d.Nesting()) {\n\t\t\treturn d.Errf(\"malformed TLS handshake matcher '%s': blocks are not supported\", wrapper)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ ConnectionMatcher = (*MatchLocalIP)(nil)\n\t_ ConnectionMatcher = (*MatchRemoteIP)(nil)\n\t_ ConnectionMatcher = (*MatchServerName)(nil)\n\t_ ConnectionMatcher = (*MatchServerNameRE)(nil)\n\n\t_ caddy.Provisioner = (*MatchLocalIP)(nil)\n\t_ caddy.Provisioner = (*MatchRemoteIP)(nil)\n\t_ caddy.Provisioner = (*MatchServerNameRE)(nil)\n\n\t_ caddyfile.Unmarshaler = (*MatchLocalIP)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchRemoteIP)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchServerName)(nil)\n\t_ caddyfile.Unmarshaler = (*MatchServerNameRE)(nil)\n)\n\nvar wordRE = regexp.MustCompile(`\\w+`)\n\nconst regexpPlaceholderPrefix = \"tls.regexp\"\n"
  },
  {
    "path": "modules/caddytls/matchers_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"net\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc TestServerNameMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tnames  []string\n\t\tinput  string\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\tnames:  []string{\"example.com\"},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"example.com\"},\n\t\t\tinput:  \"foo.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"example.com\"},\n\t\t\tinput:  \"\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{},\n\t\t\tinput:  \"\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"foo\", \"example.com\"},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"foo\", \"example.com\"},\n\t\t\tinput:  \"sub.example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"foo\", \"example.com\"},\n\t\t\tinput:  \"foo.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"*.example.com\"},\n\t\t\tinput:  \"example.com\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"*.example.com\"},\n\t\t\tinput:  \"sub.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tnames:  []string{\"*.example.com\", \"*.sub.example.com\"},\n\t\t\tinput:  
\"sub2.sub.example.com\",\n\t\t\texpect: true,\n\t\t},\n\t} {\n\t\tchi := &tls.ClientHelloInfo{ServerName: tc.input}\n\t\tactual := MatchServerName(tc.names).Match(chi)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %t but got %t (input=%s match=%v)\",\n\t\t\t\ti, tc.expect, actual, tc.input, tc.names)\n\t\t}\n\t}\n}\n\nfunc TestServerNameREMatcher(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tpattern string\n\t\tinput   string\n\t\texpect  bool\n\t}{\n\t\t{\n\t\t\tpattern: \"^example\\\\.(com|net)$\",\n\t\t\tinput:   \"example.com\",\n\t\t\texpect:  true,\n\t\t},\n\t\t{\n\t\t\tpattern: \"^example\\\\.(com|net)$\",\n\t\t\tinput:   \"foo.com\",\n\t\t\texpect:  false,\n\t\t},\n\t\t{\n\t\t\tpattern: \"^example\\\\.(com|net)$\",\n\t\t\tinput:   \"\",\n\t\t\texpect:  false,\n\t\t},\n\t\t{\n\t\t\tpattern: \"\",\n\t\t\tinput:   \"\",\n\t\t\texpect:  true,\n\t\t},\n\t\t{\n\t\t\tpattern: \"^example\\\\.(com|net)$\",\n\t\t\tinput:   \"foo.example.com\",\n\t\t\texpect:  false,\n\t\t},\n\t} {\n\t\tchi := &tls.ClientHelloInfo{ServerName: tc.input}\n\t\tmre := MatchServerNameRE{MatchRegexp{Pattern: tc.pattern}}\n\t\tctx, _ := caddy.NewContext(caddy.Context{Context: context.Background()})\n\t\tif mre.Provision(ctx) != nil {\n\t\t\tt.Errorf(\"Test %d: Failed to provision a regexp matcher (pattern=%v)\", i, tc.pattern)\n\t\t}\n\t\tactual := mre.Match(chi)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %t but got %t (input=%s match=%v)\",\n\t\t\t\ti, tc.expect, actual, tc.input, tc.pattern)\n\t\t}\n\t}\n}\n\nfunc TestRemoteIPMatcher(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\tfor i, tc := range []struct {\n\t\tranges    []string\n\t\tnotRanges []string\n\t\tinput     string\n\t\texpect    bool\n\t}{\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1\"},\n\t\t\tinput:  \"127.0.0.1:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: 
[]string{\"127.0.0.1\"},\n\t\t\tinput:  \"127.0.0.2:12345\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1/16\"},\n\t\t\tinput:  \"127.0.1.23:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1\", \"192.168.1.105\"},\n\t\t\tinput:  \"192.168.1.105:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tnotRanges: []string{\"127.0.0.1\"},\n\t\t\tinput:     \"127.0.0.1:12345\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tnotRanges: []string{\"127.0.0.2\"},\n\t\t\tinput:     \"127.0.0.1:12345\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tranges:    []string{\"127.0.0.1\"},\n\t\t\tnotRanges: []string{\"127.0.0.2\"},\n\t\t\tinput:     \"127.0.0.1:12345\",\n\t\t\texpect:    true,\n\t\t},\n\t\t{\n\t\t\tranges:    []string{\"127.0.0.2\"},\n\t\t\tnotRanges: []string{\"127.0.0.2\"},\n\t\t\tinput:     \"127.0.0.2:12345\",\n\t\t\texpect:    false,\n\t\t},\n\t\t{\n\t\t\tranges:    []string{\"127.0.0.2\"},\n\t\t\tnotRanges: []string{\"127.0.0.2\"},\n\t\t\tinput:     \"127.0.0.3:12345\",\n\t\t\texpect:    false,\n\t\t},\n\t} {\n\t\tmatcher := MatchRemoteIP{Ranges: tc.ranges, NotRanges: tc.notRanges}\n\t\terr := matcher.Provision(ctx)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d: Provision failed: %v\", i, err)\n\t\t}\n\n\t\taddr := testAddr(tc.input)\n\t\tchi := &tls.ClientHelloInfo{Conn: testConn{addr: addr}}\n\n\t\tactual := matcher.Match(chi)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %t but got %t (input=%s ranges=%v notRanges=%v)\",\n\t\t\t\ti, tc.expect, actual, tc.input, tc.ranges, tc.notRanges)\n\t\t}\n\t}\n}\n\nfunc TestLocalIPMatcher(t *testing.T) {\n\tctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})\n\tdefer cancel()\n\n\tfor i, tc := range []struct {\n\t\tranges []string\n\t\tinput  string\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1\"},\n\t\t\tinput:  \"127.0.0.1:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: 
[]string{\"127.0.0.1\"},\n\t\t\tinput:  \"127.0.0.2:12345\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1/16\"},\n\t\t\tinput:  \"127.0.1.23:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1\", \"192.168.1.105\"},\n\t\t\tinput:  \"192.168.1.105:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tinput:  \"127.0.0.1:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.1\"},\n\t\t\tinput:  \"127.0.0.1:12345\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.2\"},\n\t\t\tinput:  \"127.0.0.3:12345\",\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.2\"},\n\t\t\tinput:  \"127.0.0.2\",\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tranges: []string{\"127.0.0.2\"},\n\t\t\tinput:  \"127.0.0.300\",\n\t\t\texpect: false,\n\t\t},\n\t} {\n\t\tmatcher := MatchLocalIP{Ranges: tc.ranges}\n\t\terr := matcher.Provision(ctx)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Test %d: Provision failed: %v\", i, err)\n\t\t}\n\n\t\taddr := testAddr(tc.input)\n\t\tchi := &tls.ClientHelloInfo{Conn: testConn{addr: addr}}\n\n\t\tactual := matcher.Match(chi)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: Expected %t but got %t (input=%s ranges=%v)\",\n\t\t\t\ti, tc.expect, actual, tc.input, tc.ranges)\n\t\t}\n\t}\n}\n\ntype testConn struct {\n\t*net.TCPConn\n\taddr testAddr\n}\n\nfunc (tc testConn) RemoteAddr() net.Addr { return tc.addr }\nfunc (tc testConn) LocalAddr() net.Addr  { return tc.addr }\n\ntype testAddr string\n\nfunc (testAddr) Network() string   { return \"tcp\" }\nfunc (ta testAddr) String() string { return string(ta) }\n"
  },
  {
    "path": "modules/caddytls/ondemand.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(PermissionByHTTP{})\n}\n\n// OnDemandConfig configures on-demand TLS, for obtaining\n// needed certificates at handshake-time. Because this\n// feature can easily be abused, Caddy must ask your\n// application whether a particular domain is allowed\n// to have a certificate issued for it.\ntype OnDemandConfig struct {\n\t// Deprecated. WILL BE REMOVED SOON. Use 'permission' instead with the `http` module.\n\tAsk string `json:\"ask,omitempty\"`\n\n\t// REQUIRED. 
A module that will determine whether a\n\t// certificate is allowed to be loaded from storage\n\t// or obtained from an issuer on demand.\n\tPermissionRaw json.RawMessage `json:\"permission,omitempty\" caddy:\"namespace=tls.permission inline_key=module\"`\n\tpermission    OnDemandPermission\n}\n\n// OnDemandPermission is a type that can give permission for\n// whether a certificate should be allowed to be obtained or\n// loaded from storage on-demand.\n// EXPERIMENTAL: This API is experimental and subject to change.\ntype OnDemandPermission interface {\n\t// CertificateAllowed returns nil if a certificate for the given\n\t// name is allowed to be either obtained from an issuer or loaded\n\t// from storage on-demand.\n\t//\n\t// The context passed in has the associated *tls.ClientHelloInfo\n\t// value available at the certmagic.ClientHelloInfoCtxKey key.\n\t//\n\t// In the worst case, this function may be called as frequently\n\t// as every TLS handshake, so it should return as quickly as possible\n\t// to reduce latency. In the normal case, this function is only\n\t// called when a certificate is needed that is not already loaded\n\t// into memory ready to serve.\n\tCertificateAllowed(ctx context.Context, name string) error\n}\n\n// PermissionByHTTP determines permission for a TLS certificate by\n// making a request to an HTTP endpoint.\ntype PermissionByHTTP struct {\n\t// The endpoint to access. It should be a full URL.\n\t// A query string parameter \"domain\" will be added to it,\n\t// containing the domain (or IP) for the desired certificate,\n\t// like so: `?domain=example.com`. 
Generally, this endpoint\n\t// is not exposed publicly to avoid a minor information leak\n\t// (which domains are serviced by your application).\n\t//\n\t// The endpoint must return a 200 OK status if a certificate\n\t// is allowed; anything else will cause it to be denied.\n\t// Redirects are not followed.\n\tEndpoint string `json:\"endpoint\"`\n\n\tlogger   *zap.Logger\n\treplacer *caddy.Replacer\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (PermissionByHTTP) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.permission.http\",\n\t\tNew: func() caddy.Module { return new(PermissionByHTTP) },\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (p *PermissionByHTTP) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif !d.Next() {\n\t\treturn nil\n\t}\n\tif !d.AllArgs(&p.Endpoint) {\n\t\treturn d.ArgErr()\n\t}\n\treturn nil\n}\n\nfunc (p *PermissionByHTTP) Provision(ctx caddy.Context) error {\n\tp.logger = ctx.Logger()\n\tp.replacer = caddy.NewReplacer()\n\treturn nil\n}\n\nfunc (p PermissionByHTTP) CertificateAllowed(ctx context.Context, name string) error {\n\t// run replacer on endpoint URL (for environment variables) -- return errors to prevent surprises (#5036)\n\taskEndpoint, err := p.replacer.ReplaceOrErr(p.Endpoint, true, true)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"preparing 'ask' endpoint: %v\", err)\n\t}\n\n\taskURL, err := url.Parse(askEndpoint)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"parsing ask URL: %v\", err)\n\t}\n\tqs := askURL.Query()\n\tqs.Set(\"domain\", name)\n\taskURL.RawQuery = qs.Encode()\n\taskURLString := askURL.String()\n\n\tvar remote string\n\tif chi, ok := ctx.Value(certmagic.ClientHelloInfoCtxKey).(*tls.ClientHelloInfo); ok && chi != nil {\n\t\tremote = chi.Conn.RemoteAddr().String()\n\t}\n\n\tif c := p.logger.Check(zapcore.DebugLevel, \"asking permission endpoint\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"remote\", 
remote),\n\t\t\tzap.String(\"domain\", name),\n\t\t\tzap.String(\"url\", askURLString),\n\t\t)\n\t}\n\n\tresp, err := onDemandAskClient.Get(askURLString)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"checking %v to determine if certificate for hostname '%s' should be allowed: %v\",\n\t\t\taskEndpoint, name, err)\n\t}\n\tresp.Body.Close()\n\n\tif c := p.logger.Check(zapcore.DebugLevel, \"response from permission endpoint\"); c != nil {\n\t\tc.Write(\n\t\t\tzap.String(\"remote\", remote),\n\t\t\tzap.String(\"domain\", name),\n\t\t\tzap.String(\"url\", askURLString),\n\t\t\tzap.Int(\"status\", resp.StatusCode),\n\t\t)\n\t}\n\n\tif resp.StatusCode < 200 || resp.StatusCode > 299 {\n\t\treturn fmt.Errorf(\"%s: %w %s - non-2xx status code %d\", name, ErrPermissionDenied, askEndpoint, resp.StatusCode)\n\t}\n\n\treturn nil\n}\n\n// ErrPermissionDenied is an error that should be wrapped or returned when the\n// configured permission module does not allow a certificate to be issued,\n// to distinguish that from other errors such as connection failure.\nvar ErrPermissionDenied = errors.New(\"certificate not allowed by permission module\")\n\n// These perpetual values are used for on-demand TLS.\nvar (\n\tonDemandAskClient = &http.Client{\n\t\tTimeout: 10 * time.Second,\n\t\tCheckRedirect: func(req *http.Request, via []*http.Request) error {\n\t\t\treturn fmt.Errorf(\"following http redirects is not allowed\")\n\t\t},\n\t}\n)\n\n// Interface guards\nvar (\n\t_ OnDemandPermission = (*PermissionByHTTP)(nil)\n\t_ caddy.Provisioner  = (*PermissionByHTTP)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/pemloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/tls\"\n\t\"fmt\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(PEMLoader{})\n}\n\n// PEMLoader loads certificates and their associated keys by\n// decoding their PEM blocks directly. This has the advantage\n// of not needing to store them on disk at all.\ntype PEMLoader []CertKeyPEMPair\n\n// Provision implements caddy.Provisioner.\nfunc (pl PEMLoader) Provision(ctx caddy.Context) error {\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, pair := range pl {\n\t\tfor i, tag := range pair.Tags {\n\t\t\tpair.Tags[i] = repl.ReplaceKnown(tag, \"\")\n\t\t}\n\t\tpl[k] = CertKeyPEMPair{\n\t\t\tCertificatePEM: repl.ReplaceKnown(pair.CertificatePEM, \"\"),\n\t\t\tKeyPEM:         repl.ReplaceKnown(pair.KeyPEM, \"\"),\n\t\t\tTags:           pair.Tags,\n\t\t}\n\t}\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (PEMLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.certificates.load_pem\",\n\t\tNew: func() caddy.Module { return new(PEMLoader) },\n\t}\n}\n\n// CertKeyPEMPair pairs certificate and key PEM blocks.\ntype CertKeyPEMPair struct {\n\t// The certificate (public key) in PEM format.\n\tCertificatePEM string 
`json:\"certificate\"`\n\n\t// The private key in PEM format.\n\tKeyPEM string `json:\"key\"`\n\n\t// Arbitrary values to associate with this certificate.\n\t// Can be useful when you want to select a particular\n\t// certificate when there may be multiple valid candidates.\n\tTags []string `json:\"tags,omitempty\"`\n}\n\n// LoadCertificates returns the certificates contained in pl.\nfunc (pl PEMLoader) LoadCertificates() ([]Certificate, error) {\n\tcerts := make([]Certificate, 0, len(pl))\n\tfor i, pair := range pl {\n\t\tcert, err := tls.X509KeyPair([]byte(pair.CertificatePEM), []byte(pair.KeyPEM))\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"PEM pair %d: %v\", i, err)\n\t\t}\n\t\tcerts = append(certs, Certificate{\n\t\t\tCertificate: cert,\n\t\t\tTags:        pair.Tags,\n\t\t})\n\t}\n\treturn certs, nil\n}\n\n// Interface guard\nvar (\n\t_ CertificateLoader = (PEMLoader)(nil)\n\t_ caddy.Provisioner = (PEMLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/sessiontickets.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/rand\"\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"log\"\n\t\"runtime/debug\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\n// SessionTicketService configures and manages TLS session tickets.\ntype SessionTicketService struct {\n\t// KeySource is the method by which Caddy produces or obtains\n\t// TLS session ticket keys (STEKs). By default, Caddy generates\n\t// them internally using a secure pseudorandom source.\n\tKeySource json.RawMessage `json:\"key_source,omitempty\" caddy:\"namespace=tls.stek inline_key=provider\"`\n\n\t// How often Caddy rotates STEKs. Default: 12h.\n\tRotationInterval caddy.Duration `json:\"rotation_interval,omitempty\"`\n\n\t// The maximum number of keys to keep in rotation. 
Default: 4.\n\tMaxKeys int `json:\"max_keys,omitempty\"`\n\n\t// Disables STEK rotation.\n\tDisableRotation bool `json:\"disable_rotation,omitempty\"`\n\n\t// Disables TLS session resumption by tickets.\n\tDisabled bool `json:\"disabled,omitempty\"`\n\n\tkeySource   STEKProvider\n\tconfigs     map[*tls.Config]struct{}\n\tstopChan    chan struct{}\n\tcurrentKeys [][32]byte\n\tmu          *sync.Mutex\n}\n\nfunc (s *SessionTicketService) provision(ctx caddy.Context) error {\n\ts.configs = make(map[*tls.Config]struct{})\n\ts.mu = new(sync.Mutex)\n\n\t// establish sane defaults\n\tif s.RotationInterval == 0 {\n\t\ts.RotationInterval = caddy.Duration(defaultSTEKRotationInterval)\n\t}\n\tif s.MaxKeys <= 0 {\n\t\ts.MaxKeys = defaultMaxSTEKs\n\t}\n\tif s.KeySource == nil {\n\t\ts.KeySource = json.RawMessage(`{\"provider\":\"standard\"}`)\n\t}\n\n\t// load the STEK module, which will provide keys\n\tval, err := ctx.LoadModule(s, \"KeySource\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading TLS session ticket ephemeral keys provider module: %s\", err)\n\t}\n\ts.keySource = val.(STEKProvider)\n\n\t// if session tickets or just rotation are\n\t// disabled, no need to start service\n\tif s.Disabled || s.DisableRotation {\n\t\treturn nil\n\t}\n\n\t// start the STEK module; this ensures we have\n\t// a starting key before any config needs one\n\treturn s.start()\n}\n\n// start loads the starting STEKs and spawns a goroutine\n// which loops to rotate the STEKs, which continues until\n// stop() is called. 
If start() was already called, this\n// is a no-op.\nfunc (s *SessionTicketService) start() error {\n\tif s.stopChan != nil {\n\t\treturn nil\n\t}\n\ts.stopChan = make(chan struct{})\n\n\t// initializing the key source gives us our\n\t// initial key(s) to start with; if successful,\n\t// we need to be sure to call Next() so that\n\t// the key source can know when it is done\n\tinitialKeys, err := s.keySource.Initialize(s)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"setting STEK module configuration: %v\", err)\n\t}\n\n\ts.mu.Lock()\n\ts.currentKeys = initialKeys\n\ts.mu.Unlock()\n\n\t// keep the keys rotated\n\tgo s.stayUpdated()\n\n\treturn nil\n}\n\n// stayUpdated is a blocking function which rotates\n// the keys whenever new ones are sent. It reads\n// from keysChan until s.stop() is called.\nfunc (s *SessionTicketService) stayUpdated() {\n\tdefer func() {\n\t\tif err := recover(); err != nil {\n\t\t\tlog.Printf(\"[PANIC] session ticket service: %v\\n%s\", err, debug.Stack())\n\t\t}\n\t}()\n\n\t// this call is essential when Initialize()\n\t// returns without error, because the stop\n\t// channel is the only way the key source\n\t// will know when to clean up\n\tkeysChan := s.keySource.Next(s.stopChan)\n\n\tfor {\n\t\tselect {\n\t\tcase newKeys := <-keysChan:\n\t\t\ts.mu.Lock()\n\t\t\ts.currentKeys = newKeys\n\t\t\tconfigs := s.configs\n\t\t\ts.mu.Unlock()\n\t\t\tfor cfg := range configs {\n\t\t\t\tcfg.SetSessionTicketKeys(newKeys)\n\t\t\t}\n\t\tcase <-s.stopChan:\n\t\t\treturn\n\t\t}\n\t}\n}\n\n// stop terminates the key rotation goroutine.\nfunc (s *SessionTicketService) stop() {\n\tif s.stopChan != nil {\n\t\tclose(s.stopChan)\n\t}\n}\n\n// register sets the session ticket keys on cfg\n// and keeps them updated. Any values registered\n// must be unregistered, or they will not be\n// garbage-collected. s.start() must have been\n// called first. 
If session tickets are disabled\n// or if ticket key rotation is disabled, this\n// function is a no-op.\nfunc (s *SessionTicketService) register(cfg *tls.Config) {\n\tif s.Disabled || s.DisableRotation {\n\t\treturn\n\t}\n\ts.mu.Lock()\n\tcfg.SetSessionTicketKeys(s.currentKeys)\n\ts.configs[cfg] = struct{}{}\n\ts.mu.Unlock()\n}\n\n// unregister stops session key management on cfg and\n// removes the internal stored reference to cfg. If\n// session tickets are disabled or if ticket key rotation\n// is disabled, this function is a no-op.\nfunc (s *SessionTicketService) unregister(cfg *tls.Config) {\n\tif s.Disabled || s.DisableRotation {\n\t\treturn\n\t}\n\ts.mu.Lock()\n\tdelete(s.configs, cfg)\n\ts.mu.Unlock()\n}\n\n// RotateSTEKs rotates the keys in keys by producing a new key and eliding\n// the oldest one. The new slice of keys is returned.\nfunc (s SessionTicketService) RotateSTEKs(keys [][32]byte) ([][32]byte, error) {\n\t// produce a new key\n\tnewKey, err := s.generateSTEK()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"generating STEK: %v\", err)\n\t}\n\n\t// we need to prepend this new key to the list of\n\t// keys so that it is preferred, but we need to be\n\t// careful that we do not grow the slice larger\n\t// than MaxKeys, otherwise we'll be storing one\n\t// more key in memory than we expect; so be sure\n\t// that the slice does not grow beyond the limit\n\t// even for a brief period of time, since there's\n\t// no guarantee when that extra allocation will\n\t// be overwritten; this is why we first trim the\n\t// length to one less than the max, THEN prepend\n\t// the new key\n\tif len(keys) >= s.MaxKeys {\n\t\tkeys[len(keys)-1] = [32]byte{} // zero-out memory of oldest key\n\t\tkeys = keys[:s.MaxKeys-1]      // trim length of slice\n\t}\n\tkeys = append([][32]byte{newKey}, keys...) 
// prepend new key\n\n\treturn keys, nil\n}\n\n// generateSTEK generates key material suitable for use as a\n// session ticket ephemeral key.\nfunc (s *SessionTicketService) generateSTEK() ([32]byte, error) {\n\tvar newTicketKey [32]byte\n\t_, err := io.ReadFull(rand.Reader, newTicketKey[:])\n\treturn newTicketKey, err\n}\n\n// STEKProvider is a type that can provide session ticket ephemeral\n// keys (STEKs).\ntype STEKProvider interface {\n\t// Initialize provides the STEK configuration to the STEK\n\t// module so that it can obtain and manage keys accordingly.\n\t// It returns the initial key(s) to use. Implementations can\n\t// rely on Next() being called if Initialize() returns\n\t// without error, so that it may know when it is done.\n\tInitialize(config *SessionTicketService) ([][32]byte, error)\n\n\t// Next returns the channel through which the next session\n\t// ticket keys will be transmitted until doneChan is closed.\n\t// Keys should be sent on keysChan as they are updated.\n\t// When doneChan is closed, any resources allocated in\n\t// Initialize() must be cleaned up.\n\tNext(doneChan <-chan struct{}) (keysChan <-chan [][32]byte)\n}\n\nconst (\n\tdefaultSTEKRotationInterval = 12 * time.Hour\n\tdefaultMaxSTEKs             = 4\n)\n"
  },
  {
    "path": "modules/caddytls/standardstek/stek.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage standardstek\n\nimport (\n\t\"log\"\n\t\"runtime/debug\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddytls\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(standardSTEKProvider{})\n}\n\ntype standardSTEKProvider struct {\n\tstekConfig *caddytls.SessionTicketService\n\ttimer      *time.Timer\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (standardSTEKProvider) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.stek.standard\",\n\t\tNew: func() caddy.Module { return new(standardSTEKProvider) },\n\t}\n}\n\n// Initialize sets the configuration for s and returns the starting keys.\nfunc (s *standardSTEKProvider) Initialize(config *caddytls.SessionTicketService) ([][32]byte, error) {\n\t// keep a reference to the config; we'll need it when rotating keys\n\ts.stekConfig = config\n\n\titvl := time.Duration(s.stekConfig.RotationInterval)\n\n\tmutex.Lock()\n\tdefer mutex.Unlock()\n\n\t// if this is our first rotation or we are overdue\n\t// for one, perform a rotation immediately; otherwise,\n\t// we assume that the keys are non-empty and fresh\n\tsince := time.Since(lastRotation)\n\tif lastRotation.IsZero() || since > itvl {\n\t\tvar err error\n\t\tkeys, err = s.stekConfig.RotateSTEKs(keys)\n\t\tif err != nil {\n\t\t\treturn 
nil, err\n\t\t}\n\t\tsince = 0 // since this is overdue or is the first rotation, use full interval\n\t\tlastRotation = time.Now()\n\t}\n\n\t// create timer for the remaining time on the interval;\n\t// this timer is cleaned up only when Next() returns\n\ts.timer = time.NewTimer(itvl - since)\n\n\treturn keys, nil\n}\n\n// Next returns a channel which transmits the latest session ticket keys.\nfunc (s *standardSTEKProvider) Next(doneChan <-chan struct{}) <-chan [][32]byte {\n\tkeysChan := make(chan [][32]byte)\n\tgo s.rotate(doneChan, keysChan)\n\treturn keysChan\n}\n\n// rotate rotates keys on a regular basis, sending each updated set of\n// keys down keysChan, until doneChan is closed.\nfunc (s *standardSTEKProvider) rotate(doneChan <-chan struct{}, keysChan chan<- [][32]byte) {\n\tdefer func() {\n\t\tif err := recover(); err != nil {\n\t\t\tlog.Printf(\"[PANIC] standard STEK rotation: %v\\n%s\", err, debug.Stack())\n\t\t}\n\t}()\n\tfor {\n\t\tselect {\n\t\tcase now := <-s.timer.C:\n\t\t\t// copy the slice header to avoid races\n\t\t\tmutex.RLock()\n\t\t\tkeysCopy := keys\n\t\t\tmutex.RUnlock()\n\n\t\t\t// generate a new key, rotating old ones\n\t\t\tvar err error\n\t\t\tkeysCopy, err = s.stekConfig.RotateSTEKs(keysCopy)\n\t\t\tif err != nil {\n\t\t\t\t// TODO: improve this handling\n\t\t\t\tlog.Printf(\"[ERROR] Generating STEK: %v\", err)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// replace keys slice with updated value and\n\t\t\t// record the timestamp of rotation\n\t\t\tmutex.Lock()\n\t\t\tkeys = keysCopy\n\t\t\tlastRotation = now\n\t\t\tmutex.Unlock()\n\n\t\t\t// send the updated keys to the service\n\t\t\tkeysChan <- keysCopy\n\n\t\t\t// timer channel is already drained, so reset directly (see godoc)\n\t\t\ts.timer.Reset(time.Duration(s.stekConfig.RotationInterval))\n\n\t\tcase <-doneChan:\n\t\t\t// again, see godocs for why timer is stopped this way\n\t\t\tif !s.timer.Stop() {\n\t\t\t\t<-s.timer.C\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n}\n\nvar (\n\tlastRotation 
time.Time\n\tkeys         [][32]byte\n\tmutex        sync.RWMutex // protects keys and lastRotation\n)\n\n// Interface guard\nvar _ caddytls.STEKProvider = (*standardSTEKProvider)(nil)\n"
  },
  {
    "path": "modules/caddytls/storageloader.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/tls\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(StorageLoader{})\n}\n\n// StorageLoader loads certificates and their associated keys\n// from the globally configured storage module.\ntype StorageLoader struct {\n\t// A list of pairs of certificate and key file names along with their\n\t// encoding format so that they can be loaded from storage.\n\tPairs []CertKeyFilePair `json:\"pairs,omitempty\"`\n\n\t// Reference to the globally configured storage module.\n\tstorage certmagic.Storage\n\n\tctx caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (StorageLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.certificates.load_storage\",\n\t\tNew: func() caddy.Module { return new(StorageLoader) },\n\t}\n}\n\n// Provision loads the storage module for sl.\nfunc (sl *StorageLoader) Provision(ctx caddy.Context) error {\n\tsl.storage = ctx.Storage()\n\tsl.ctx = ctx\n\n\trepl, ok := ctx.Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\tif !ok {\n\t\trepl = caddy.NewReplacer()\n\t}\n\tfor k, pair := range sl.Pairs {\n\t\tfor i, tag := range pair.Tags {\n\t\t\tpair.Tags[i] = repl.ReplaceKnown(tag, \"\")\n\t\t}\n\t\tsl.Pairs[k] = 
CertKeyFilePair{\n\t\t\tCertificate: repl.ReplaceKnown(pair.Certificate, \"\"),\n\t\t\tKey:         repl.ReplaceKnown(pair.Key, \"\"),\n\t\t\tFormat:      repl.ReplaceKnown(pair.Format, \"\"),\n\t\t\tTags:        pair.Tags,\n\t\t}\n\t}\n\treturn nil\n}\n\n// LoadCertificates returns the certificates to be loaded by sl.\nfunc (sl StorageLoader) LoadCertificates() ([]Certificate, error) {\n\tcerts := make([]Certificate, 0, len(sl.Pairs))\n\tfor _, pair := range sl.Pairs {\n\t\tcertData, err := sl.storage.Load(sl.ctx, pair.Certificate)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tkeyData, err := sl.storage.Load(sl.ctx, pair.Key)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tvar cert tls.Certificate\n\t\tswitch pair.Format {\n\t\tcase \"\":\n\t\t\tfallthrough\n\n\t\tcase \"pem\":\n\t\t\t// if the start of the key file looks like an encrypted private key,\n\t\t\t// reject it with a helpful error message\n\t\t\tif strings.Contains(string(keyData[:40]), \"ENCRYPTED\") {\n\t\t\t\treturn nil, fmt.Errorf(\"encrypted private keys are not supported; please decrypt the key first\")\n\t\t\t}\n\n\t\t\tcert, err = tls.X509KeyPair(certData, keyData)\n\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unrecognized certificate/key encoding format: %s\", pair.Format)\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tcerts = append(certs, Certificate{Certificate: cert, Tags: pair.Tags})\n\t}\n\treturn certs, nil\n}\n\n// Interface guard\nvar (\n\t_ CertificateLoader = (*StorageLoader)(nil)\n\t_ caddy.Provisioner = (*StorageLoader)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/tls.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\t\"net\"\n\t\"net/http\"\n\t\"runtime/debug\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/libdns/libdns\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/internal\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyevents\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(TLS{})\n\tcaddy.RegisterModule(AutomateLoader{})\n}\n\nvar (\n\tcertCache   *certmagic.Cache\n\tcertCacheMu sync.RWMutex\n)\n\n// TLS provides TLS facilities including certificate\n// loading and management, client auth, and more.\ntype TLS struct {\n\t// Certificates to load into memory for quick recall during\n\t// TLS handshakes. 
Each key is the name of a certificate\n\t// loader module.\n\t//\n\t// The \"automate\" certificate loader module can be used to\n\t// specify a list of subjects that need certificates to be\n\t// managed automatically, including subdomains that may\n\t// already be covered by a managed wildcard certificate.\n\t// The first matching automation policy will be used\n\t// to manage automated certificate(s).\n\t//\n\t// All loaded certificates get pooled\n\t// into the same cache and may be used to complete TLS\n\t// handshakes for the relevant server names (SNI).\n\t// Certificates loaded manually (anything other than\n\t// \"automate\") are not automatically managed and will\n\t// have to be refreshed manually before they expire.\n\tCertificatesRaw caddy.ModuleMap `json:\"certificates,omitempty\" caddy:\"namespace=tls.certificates\"`\n\n\t// Configures certificate automation.\n\tAutomation *AutomationConfig `json:\"automation,omitempty\"`\n\n\t// Configures session ticket ephemeral keys (STEKs).\n\tSessionTickets *SessionTicketService `json:\"session_tickets,omitempty\"`\n\n\t// Configures the in-memory certificate cache.\n\tCache *CertCacheOptions `json:\"cache,omitempty\"`\n\n\t// Disables OCSP stapling for manually-managed certificates only.\n\t// To configure OCSP stapling for automated certificates, use an\n\t// automation policy instead.\n\t//\n\t// Disabling OCSP stapling puts clients at greater risk, reduces their\n\t// privacy, and usually lowers client performance. It is NOT recommended\n\t// to disable this unless you are able to justify the costs.\n\t//\n\t// EXPERIMENTAL. Subject to change.\n\tDisableOCSPStapling bool `json:\"disable_ocsp_stapling,omitempty\"`\n\n\t// Disables checks in certmagic that the configured storage is ready\n\t// and able to handle writing new content to it. 
These checks are\n\t// intended to prevent information loss (newly issued certificates), but\n\t// can be expensive on the storage.\n\t//\n\t// Disabling these checks should only be done when the storage\n\t// can be trusted to have enough capacity and no other problems.\n\t//\n\t// EXPERIMENTAL. Subject to change.\n\tDisableStorageCheck bool `json:\"disable_storage_check,omitempty\"`\n\n\t// Disables the automatic cleanup of the storage backend.\n\t// This is useful when TLS is not being used to store certificates\n\t// and the user wants to run their server in a read-only mode.\n\t//\n\t// Storage cleaning creates two files: instance.uuid and last_clean.json.\n\t// The instance.uuid file is used to identify the instance of Caddy\n\t// in a cluster. The last_clean.json file is used to store the last\n\t// time the storage was cleaned.\n\t//\n\t// EXPERIMENTAL. Subject to change.\n\tDisableStorageClean bool `json:\"disable_storage_clean,omitempty\"`\n\n\t// Enable Encrypted ClientHello (ECH). 
ECH protects the server name\n\t// (SNI) and other sensitive parameters of a normally-plaintext TLS\n\t// ClientHello during a handshake.\n\t//\n\t// EXPERIMENTAL: Subject to change.\n\tEncryptedClientHello *ECH `json:\"encrypted_client_hello,omitempty\"`\n\n\t// The default DNS provider module to use when a DNS module is needed.\n\t//\n\t// EXPERIMENTAL: Subject to change.\n\tDNSRaw json.RawMessage `json:\"dns,omitempty\" caddy:\"namespace=dns.providers inline_key=name\"`\n\n\t// The default DNS resolvers to use for TLS-related DNS operations, specifically\n\t// for ACME DNS challenges and ACME server DNS validations.\n\t// If not specified, the system default resolvers will be used.\n\t//\n\t// EXPERIMENTAL: Subject to change.\n\tResolvers []string `json:\"resolvers,omitempty\"`\n\n\tdns                any // technically, it should be any/all of the libdns interfaces (RecordSetter, RecordAppender, etc.)\n\tcertificateLoaders []CertificateLoader\n\tautomateNames      map[string]struct{}\n\tctx                caddy.Context\n\tstorageCleanTicker *time.Ticker\n\tstorageCleanStop   chan struct{}\n\tlogger             *zap.Logger\n\tevents             *caddyevents.App\n\n\tserverNames   map[string]struct{}\n\tserverNamesMu *sync.Mutex\n\n\t// set of subjects with managed certificates,\n\t// and hashes of manually-loaded certificates\n\t// (managing's value is an optional issuer key, for distinction)\n\tmanaging, loaded map[string]string\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (TLS) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls\",\n\t\tNew: func() caddy.Module { return new(TLS) },\n\t}\n}\n\n// Provision sets up the configuration for the TLS app.\nfunc (t *TLS) Provision(ctx caddy.Context) error {\n\teventsAppIface, err := ctx.App(\"events\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"getting events app: %v\", err)\n\t}\n\tt.events = eventsAppIface.(*caddyevents.App)\n\tt.ctx = ctx\n\tt.logger = 
ctx.Logger()\n\trepl := caddy.NewReplacer()\n\tt.managing, t.loaded = make(map[string]string), make(map[string]string)\n\tt.serverNames = make(map[string]struct{})\n\tt.serverNamesMu = new(sync.Mutex)\n\n\t// set up default DNS module, if any, and make sure it implements all the\n\t// common libdns interfaces, since it could be used for a variety of things\n\t// (do this before provisioning other modules, since they may rely on this)\n\tif len(t.DNSRaw) > 0 {\n\t\tdnsMod, err := ctx.LoadModule(t, \"DNSRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading overall DNS provider module: %v\", err)\n\t\t}\n\t\tswitch dnsMod.(type) {\n\t\tcase interface {\n\t\t\tlibdns.RecordAppender\n\t\t\tlibdns.RecordDeleter\n\t\t\tlibdns.RecordGetter\n\t\t\tlibdns.RecordSetter\n\t\t}:\n\t\tdefault:\n\t\t\treturn fmt.Errorf(\"DNS module does not implement the most common libdns interfaces: %T\", dnsMod)\n\t\t}\n\t\tt.dns = dnsMod\n\t}\n\n\t// set up a new certificate cache; this (re)loads all certificates\n\tcacheOpts := certmagic.CacheOptions{\n\t\tGetConfigForCert: func(cert certmagic.Certificate) (*certmagic.Config, error) {\n\t\t\treturn t.getConfigForName(cert.Names[0]), nil\n\t\t},\n\t\tLogger: t.logger.Named(\"cache\"),\n\t}\n\tif t.Automation != nil {\n\t\tcacheOpts.OCSPCheckInterval = time.Duration(t.Automation.OCSPCheckInterval)\n\t\tcacheOpts.RenewCheckInterval = time.Duration(t.Automation.RenewCheckInterval)\n\t}\n\tif t.Cache != nil {\n\t\tcacheOpts.Capacity = t.Cache.Capacity\n\t}\n\tif cacheOpts.Capacity <= 0 {\n\t\tcacheOpts.Capacity = 10000\n\t}\n\n\tcertCacheMu.Lock()\n\tif certCache == nil {\n\t\tcertCache = certmagic.NewCache(cacheOpts)\n\t} else {\n\t\tcertCache.SetOptions(cacheOpts)\n\t}\n\tcertCacheMu.Unlock()\n\n\t// certificate loaders\n\tval, err := ctx.LoadModule(t, \"CertificatesRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading certificate loader modules: %s\", err)\n\t}\n\tfor modName, modIface := range val.(map[string]any) {\n\t\tif 
modName == \"automate\" {\n\t\t\t// special case; these will be loaded in later using our automation facilities,\n\t\t\t// which we want to avoid doing during provisioning\n\t\t\tif automateNames, ok := modIface.(*AutomateLoader); ok && automateNames != nil {\n\t\t\t\tif t.automateNames == nil {\n\t\t\t\t\tt.automateNames = make(map[string]struct{})\n\t\t\t\t}\n\t\t\t\trepl := caddy.NewReplacer()\n\t\t\t\tfor _, sub := range *automateNames {\n\t\t\t\t\tt.automateNames[repl.ReplaceAll(sub, \"\")] = struct{}{}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\treturn fmt.Errorf(\"loading certificates with 'automate' requires array of strings, got: %T\", modIface)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tt.certificateLoaders = append(t.certificateLoaders, modIface.(CertificateLoader))\n\t}\n\n\t// using the certificate loaders we just initialized, load\n\t// manual/static (unmanaged) certificates - we do this in\n\t// provision so that other apps (such as http) can know which\n\t// certificates have been manually loaded, and also so that\n\t// commands like validate can be a better test\n\tcertCacheMu.RLock()\n\tmagic := certmagic.New(certCache, certmagic.Config{\n\t\tStorage: ctx.Storage(),\n\t\tLogger:  t.logger,\n\t\tOnEvent: t.onEvent,\n\t\tOCSP: certmagic.OCSPConfig{\n\t\t\tDisableStapling: t.DisableOCSPStapling,\n\t\t},\n\t\tDisableStorageCheck: t.DisableStorageCheck,\n\t})\n\tcertCacheMu.RUnlock()\n\tfor _, loader := range t.certificateLoaders {\n\t\tcerts, err := loader.LoadCertificates()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading certificates: %v\", err)\n\t\t}\n\t\tfor _, cert := range certs {\n\t\t\thash, err := magic.CacheUnmanagedTLSCertificate(ctx, cert.Certificate, cert.Tags)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"caching unmanaged certificate: %v\", err)\n\t\t\t}\n\t\t\tt.loaded[hash] = \"\"\n\t\t}\n\t}\n\n\t// on-demand permission module\n\tif t.Automation != nil && t.Automation.OnDemand != nil && t.Automation.OnDemand.PermissionRaw != nil 
{\n\t\tif t.Automation.OnDemand.Ask != \"\" {\n\t\t\treturn fmt.Errorf(\"on-demand TLS config conflict: both 'ask' endpoint and a 'permission' module are specified; 'ask' is deprecated, so use only the permission module\")\n\t\t}\n\t\tval, err := ctx.LoadModule(t.Automation.OnDemand, \"PermissionRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading on-demand TLS permission module: %v\", err)\n\t\t}\n\t\tt.Automation.OnDemand.permission = val.(OnDemandPermission)\n\t}\n\n\t// automation/management policies\n\tif t.Automation == nil {\n\t\tt.Automation = new(AutomationConfig)\n\t}\n\tt.Automation.defaultPublicAutomationPolicy = new(AutomationPolicy)\n\terr = t.Automation.defaultPublicAutomationPolicy.Provision(t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"provisioning default public automation policy: %v\", err)\n\t}\n\tfor n := range t.automateNames {\n\t\t// if any names specified by the \"automate\" loader do not qualify for a public\n\t\t// certificate, we should initialize a default internal automation policy\n\t\t// (but we don't want to do this unnecessarily, since it may prompt for password!)\n\t\tif certmagic.SubjectQualifiesForPublicCert(n) {\n\t\t\tcontinue\n\t\t}\n\t\tt.Automation.defaultInternalAutomationPolicy = &AutomationPolicy{\n\t\t\tIssuersRaw: []json.RawMessage{json.RawMessage(`{\"module\":\"internal\"}`)},\n\t\t}\n\t\terr = t.Automation.defaultInternalAutomationPolicy.Provision(t)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning default internal automation policy: %v\", err)\n\t\t}\n\t\tbreak\n\t}\n\tfor i, ap := range t.Automation.Policies {\n\t\terr := ap.Provision(t)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning automation policy %d: %v\", i, err)\n\t\t}\n\t}\n\n\t// run replacer on ask URL (for environment variables) -- return errors to prevent surprises (#5036)\n\tif t.Automation != nil && t.Automation.OnDemand != nil && t.Automation.OnDemand.Ask != \"\" {\n\t\tt.Automation.OnDemand.Ask, err = 
repl.ReplaceOrErr(t.Automation.OnDemand.Ask, true, true)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"preparing 'ask' endpoint: %v\", err)\n\t\t}\n\t\tperm := PermissionByHTTP{\n\t\t\tEndpoint: t.Automation.OnDemand.Ask,\n\t\t}\n\t\tif err := perm.Provision(ctx); err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning 'ask' module: %v\", err)\n\t\t}\n\t\tt.Automation.OnDemand.permission = perm\n\t}\n\n\t// session ticket ephemeral keys (STEK) service and provider\n\tif t.SessionTickets != nil {\n\t\terr := t.SessionTickets.provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning session tickets configuration: %v\", err)\n\t\t}\n\t}\n\n\t// ECH (Encrypted ClientHello) initialization\n\tif t.EncryptedClientHello != nil {\n\t\touterNames, err := t.EncryptedClientHello.Provision(ctx)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"provisioning Encrypted ClientHello components: %v\", err)\n\t\t}\n\n\t\t// outer names should have certificates to reduce client brittleness\n\t\tfor _, outerName := range outerNames {\n\t\t\tif outerName == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif !t.HasCertificateForSubject(outerName) {\n\t\t\t\tif t.automateNames == nil {\n\t\t\t\t\tt.automateNames = make(map[string]struct{})\n\t\t\t\t}\n\t\t\t\tt.automateNames[outerName] = struct{}{}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Validate validates t's configuration.\nfunc (t *TLS) Validate() error {\n\tif t.Automation != nil {\n\t\t// ensure that hosts aren't repeated; since only the first\n\t\t// automation policy is used, repeating a host in the lists\n\t\t// isn't useful and is probably a mistake; same for two\n\t\t// catch-all/default policies\n\t\tvar hasDefault bool\n\t\thostSet := make(map[string]int)\n\t\tfor i, ap := range t.Automation.Policies {\n\t\t\tif len(ap.subjects) == 0 {\n\t\t\t\tif hasDefault {\n\t\t\t\t\treturn fmt.Errorf(\"automation policy %d is the second policy that acts as default/catch-all, but will never be used\", 
i)\n\t\t\t\t}\n\t\t\t\thasDefault = true\n\t\t\t}\n\t\t\tfor _, h := range ap.subjects {\n\t\t\t\tif first, ok := hostSet[h]; ok {\n\t\t\t\t\treturn fmt.Errorf(\"automation policy %d: cannot apply more than one automation policy to host: %s (first match in policy %d)\", i, h, first)\n\t\t\t\t}\n\t\t\t\thostSet[h] = i\n\t\t\t}\n\t\t}\n\t}\n\tif t.Cache != nil {\n\t\tif t.Cache.Capacity < 0 {\n\t\t\treturn fmt.Errorf(\"cache capacity must be >= 0\")\n\t\t}\n\t}\n\treturn nil\n}\n\n// Start activates the TLS module.\nfunc (t *TLS) Start() error {\n\t// warn if on-demand TLS is enabled but no restrictions are in place\n\tif t.Automation.OnDemand == nil || (t.Automation.OnDemand.Ask == \"\" && t.Automation.OnDemand.permission == nil) {\n\t\tfor _, ap := range t.Automation.Policies {\n\t\t\tif ap.OnDemand && ap.isWildcardOrDefault() {\n\t\t\t\tif c := t.logger.Check(zapcore.WarnLevel, \"YOUR SERVER MAY BE VULNERABLE TO ABUSE: on-demand TLS is enabled, but no protections are in place\"); c != nil {\n\t\t\t\t\tc.Write(zap.String(\"docs\", \"https://caddyserver.com/docs/automatic-https#on-demand-tls\"))\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\t// now that we are running, and all manual certificates have\n\t// been loaded, time to load the automated/managed certificates\n\terr := t.Manage(t.automateNames)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"automate: managing %v: %v\", t.automateNames, err)\n\t}\n\n\tif t.EncryptedClientHello != nil {\n\t\techLogger := t.logger.Named(\"ech\")\n\n\t\t// publish ECH configs in the background; does not need to block\n\t\t// server startup, as it could take a while; then keep keys rotated\n\t\tgo func() {\n\t\t\t// publish immediately first\n\t\t\tif err := t.publishECHConfigs(echLogger); err != nil {\n\t\t\t\techLogger.Error(\"publication(s) failed\", zap.Error(err))\n\t\t\t}\n\n\t\t\t// then every so often, rotate and publish if needed\n\t\t\t// (both of these functions only do something if needed)\n\t\t\tfor 
{\n\t\t\t\tselect {\n\t\t\t\tcase <-time.After(1 * time.Hour):\n\t\t\t\t\t// ensure old keys are rotated out\n\t\t\t\t\tt.EncryptedClientHello.configsMu.Lock()\n\t\t\t\t\terr = t.EncryptedClientHello.rotateECHKeys(t.ctx, echLogger, false)\n\t\t\t\t\tt.EncryptedClientHello.configsMu.Unlock()\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\techLogger.Error(\"rotating ECH configs failed\", zap.Error(err))\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\terr := t.publishECHConfigs(echLogger)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\techLogger.Error(\"publication(s) failed\", zap.Error(err))\n\t\t\t\t\t}\n\t\t\t\tcase <-t.ctx.Done():\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t}\n\n\tif !t.DisableStorageClean {\n\t\t// start the storage cleaner goroutine and ticker,\n\t\t// which cleans out expired certificates and more\n\t\tt.keepStorageClean()\n\t}\n\n\treturn nil\n}\n\n// Stop stops the TLS module and cleans up any allocations.\nfunc (t *TLS) Stop() error {\n\t// stop the storage cleaner goroutine and ticker\n\tif t.storageCleanStop != nil {\n\t\tclose(t.storageCleanStop)\n\t}\n\tif t.storageCleanTicker != nil {\n\t\tt.storageCleanTicker.Stop()\n\t}\n\treturn nil\n}\n\n// Cleanup frees up resources allocated during Provision.\nfunc (t *TLS) Cleanup() error {\n\t// stop the session ticket rotation goroutine\n\tif t.SessionTickets != nil {\n\t\tt.SessionTickets.stop()\n\t}\n\n\t// if a new TLS app was loaded, remove certificates from the cache that are no longer\n\t// being managed or loaded by the new config; if there is no more TLS app running,\n\t// then stop cert maintenance and let the cert cache be GC'ed\n\tif nextTLS, err := caddy.ActiveContext().AppIfConfigured(\"tls\"); err == nil && nextTLS != nil {\n\t\tnextTLSApp := nextTLS.(*TLS)\n\n\t\t// compute which certificates were managed or loaded into the cert cache by this\n\t\t// app instance (which is being stopped) that are not managed or loaded by the\n\t\t// new app instance (which just started), and remove them from 
the cache\n\t\tvar noLongerManaged []certmagic.SubjectIssuer\n\t\tvar noLongerLoaded []string\n\t\treManage := make(map[string]struct{})\n\t\tfor subj, currentIssuerKey := range t.managing {\n\t\t\t// It's a bit nuanced: managed certs can sometimes be different enough that we have to\n\t\t\t// swap them out for a different one, even if they are for the same subject/domain.\n\t\t\t// We consider \"private\" certs (internal CA/locally-trusted/etc) to be significantly\n\t\t\t// distinct from \"public\" certs (production CAs/globally-trusted/etc) because of the\n\t\t\t// implications when it comes to actual deployments: switching between an internal CA\n\t\t\t// and a production CA, for example, is quite significant. Switching from one public CA\n\t\t\t// to another, however, is not, and for our purposes we consider those to be the same.\n\t\t\t// Anyway, if the next TLS app does not manage a cert for this name at all, definitely\n\t\t\t// remove it from the cache. But if it does, and it's not the same kind of issuer/CA\n\t\t\t// as we have, also remove it, so that it can swap it out for the right one.\n\t\t\tif nextIssuerKey, ok := nextTLSApp.managing[subj]; !ok || nextIssuerKey != currentIssuerKey {\n\t\t\t\t// next app is not managing a cert for this domain at all or is using a different issuer, so remove it\n\t\t\t\tnoLongerManaged = append(noLongerManaged, certmagic.SubjectIssuer{Subject: subj, IssuerKey: currentIssuerKey})\n\n\t\t\t\t// then, if the next app is managing a cert for this name, but with a different issuer, re-manage it\n\t\t\t\tif ok && nextIssuerKey != currentIssuerKey {\n\t\t\t\t\treManage[subj] = struct{}{}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tfor hash := range t.loaded {\n\t\t\tif _, ok := nextTLSApp.loaded[hash]; !ok {\n\t\t\t\tnoLongerLoaded = append(noLongerLoaded, hash)\n\t\t\t}\n\t\t}\n\n\t\t// remove the 
certs\n\t\tcertCacheMu.RLock()\n\t\tcertCache.RemoveManaged(noLongerManaged)\n\t\tcertCache.Remove(noLongerLoaded)\n\t\tcertCacheMu.RUnlock()\n\n\t\t// give the new TLS app a \"kick\" to manage certs that it is configured for\n\t\t// with its own configuration instead of the one we just evicted\n\t\tif err := nextTLSApp.Manage(reManage); err != nil {\n\t\t\tif c := t.logger.Check(zapcore.ErrorLevel, \"re-managing unloaded certificates with new config\"); c != nil {\n\t\t\t\tc.Write(\n\t\t\t\t\tzap.Strings(\"subjects\", internal.MaxSizeSubjectsListForLog(reManage, 1000)),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t}\n\t} else {\n\t\t// no more TLS app running, so delete in-memory cert cache, if it was created yet\n\t\tcertCacheMu.RLock()\n\t\thasCache := certCache != nil\n\t\tcertCacheMu.RUnlock()\n\t\tif hasCache {\n\t\t\tcertCache.Stop()\n\t\t\tcertCacheMu.Lock()\n\t\t\tcertCache = nil\n\t\t\tcertCacheMu.Unlock()\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Manage immediately begins managing subjects according to the\n// matching automation policy. 
The subjects are given in a map\n// to prevent duplication and also because quick lookups are\n// needed to assess wildcard coverage, if any, depending on\n// certain config parameters (with lots of subjects, computing\n// wildcard coverage over a slice can be highly inefficient).\nfunc (t *TLS) Manage(subjects map[string]struct{}) error {\n\t// for a large number of names, we can be more memory-efficient\n\t// by making only one certmagic.Config for all the names that\n\t// use that config, rather than calling ManageAsync once for\n\t// every name; so first, bin names by AutomationPolicy\n\tpolicyToNames := make(map[*AutomationPolicy][]string)\n\tfor subj := range subjects {\n\t\tap := t.getAutomationPolicyForName(subj)\n\t\t// by default, if a wildcard that covers the subj is also being\n\t\t// managed, either by a previous call to Manage or by this one,\n\t\t// prefer using that over individual certs for its subdomains;\n\t\t// but users can disable this and force getting a certificate for\n\t\t// subdomains by adding the name to the 'automate' cert loader\n\t\tif t.managingWildcardFor(subj, subjects) {\n\t\t\tif _, ok := t.automateNames[subj]; !ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\tpolicyToNames[ap] = append(policyToNames[ap], subj)\n\t}\n\n\t// now that names are grouped by policy, we can simply make one\n\t// certmagic.Config for each (potentially large) group of names\n\t// and call ManageAsync just once for the whole batch\n\tfor ap, names := range policyToNames {\n\t\terr := ap.magic.ManageAsync(t.ctx.Context, names)\n\t\tif err != nil {\n\t\t\tconst maxNamesToDisplay = 100\n\t\t\tif len(names) > maxNamesToDisplay {\n\t\t\t\tnames = append(names[:maxNamesToDisplay], fmt.Sprintf(\"(and %d more...)\", len(names)-maxNamesToDisplay))\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"automate: manage %v: %v\", names, err)\n\t\t}\n\t\tfor _, name := range names {\n\t\t\t// certs that are issued solely by our internal issuer are a little bit of\n\t\t\t// a special case: 
if you have an initial config that manages example.com\n\t\t\t// using internal CA, then after testing it you switch to a production CA,\n\t\t\t// you wouldn't want to keep using the same self-signed cert, obviously;\n\t\t\t// so we differentiate these by associating the subject with its issuer key;\n\t\t\t// we do this because CertMagic has no notion of \"InternalIssuer\" like we\n\t\t\t// do, so we have to do this logic ourselves\n\t\t\tvar issuerKey string\n\t\t\tif len(ap.Issuers) == 1 {\n\t\t\t\tif intIss, ok := ap.Issuers[0].(*InternalIssuer); ok && intIss != nil {\n\t\t\t\t\tissuerKey = intIss.IssuerKey()\n\t\t\t\t}\n\t\t\t}\n\t\t\tt.managing[name] = issuerKey\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// managingWildcardFor returns true if the app is managing a certificate that covers that\n// subject name (including consideration of wildcards), either from its internal list of\n// names that it IS managing certs for, or from the otherSubjsToManage which includes names\n// that WILL be managed.\nfunc (t *TLS) managingWildcardFor(subj string, otherSubjsToManage map[string]struct{}) bool {\n\t// TODO: we could also consider manually-loaded certs using t.HasCertificateForSubject(),\n\t// but that does not account for how manually-loaded certs may be restricted as to which\n\t// hostnames or ClientHellos they can be used with by tags, etc; I don't *think* anyone\n\t// necessarily wants this anyway, but I thought I'd note this here for now (if we did\n\t// consider manually-loaded certs, we'd probably want to rename the method since it\n\t// wouldn't be just about managed certs anymore)\n\n\t// IP addresses must match exactly\n\tif ip := net.ParseIP(subj); ip != nil {\n\t\t_, managing := t.managing[subj]\n\t\treturn managing\n\t}\n\n\t// replace labels of the domain with wildcards until we get a match\n\tlabels := strings.Split(subj, \".\")\n\tfor i := range labels {\n\t\tif labels[i] == \"*\" {\n\t\t\tcontinue\n\t\t}\n\t\tlabels[i] = \"*\"\n\t\tcandidate := 
strings.Join(labels, \".\")\n\t\tif _, ok := t.managing[candidate]; ok {\n\t\t\treturn true\n\t\t}\n\t\tif _, ok := otherSubjsToManage[candidate]; ok {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\n// RegisterServerNames registers the provided DNS names with the TLS app.\n// This is currently used to auto-publish Encrypted ClientHello (ECH)\n// configurations, if enabled. Use of this function by apps using the TLS\n// app removes the need for the user to redundantly specify domain names\n// in their configuration. This function separates hostname and port\n// (keeping only the hostname) and filters IP addresses, which can't be\n// used with ECH.\n//\n// EXPERIMENTAL: This function and its semantics/behavior are subject to change.\nfunc (t *TLS) RegisterServerNames(dnsNames []string) {\n\tt.serverNamesMu.Lock()\n\tfor _, name := range dnsNames {\n\t\thost, _, err := net.SplitHostPort(name)\n\t\tif err != nil {\n\t\t\thost = name\n\t\t}\n\t\tif strings.TrimSpace(host) != \"\" && !certmagic.SubjectIsIP(host) {\n\t\t\tt.serverNames[strings.ToLower(host)] = struct{}{}\n\t\t}\n\t}\n\tt.serverNamesMu.Unlock()\n}\n\n// HandleHTTPChallenge ensures that the ACME HTTP challenge or ZeroSSL HTTP\n// validation request is handled for the certificate named by r.Host, if it\n// is an HTTP challenge request. 
It requires that the automation policy for\n// r.Host has an issuer that implements GetACMEIssuer() or is a *ZeroSSLIssuer.\nfunc (t *TLS) HandleHTTPChallenge(w http.ResponseWriter, r *http.Request) bool {\n\tacmeChallenge := certmagic.LooksLikeHTTPChallenge(r)\n\tzerosslValidation := certmagic.LooksLikeZeroSSLHTTPValidation(r)\n\n\t// no-op if it's not an ACME challenge request\n\tif !acmeChallenge && !zerosslValidation {\n\t\treturn false\n\t}\n\n\t// try all the issuers until we find the one that initiated the challenge\n\tap := t.getAutomationPolicyForName(r.Host)\n\n\tif acmeChallenge {\n\t\ttype acmeCapable interface{ GetACMEIssuer() *ACMEIssuer }\n\n\t\tfor _, iss := range ap.magic.Issuers {\n\t\t\tif acmeIssuer, ok := iss.(acmeCapable); ok {\n\t\t\t\tif acmeIssuer.GetACMEIssuer().issuer.HandleHTTPChallenge(w, r) {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// it's possible another server in this process initiated the challenge;\n\t\t// users have requested that Caddy only handle HTTP challenges it initiated,\n\t\t// so that users can proxy the others through to their backends; but we\n\t\t// might not have an automation policy for all identifiers that are trying\n\t\t// to get certificates (e.g. the admin endpoint), so we do this manual check\n\t\tif challenge, ok := certmagic.GetACMEChallenge(r.Host); ok {\n\t\t\treturn certmagic.SolveHTTPChallenge(t.logger, w, r, challenge.Challenge)\n\t\t}\n\t} else if zerosslValidation {\n\t\tfor _, iss := range ap.magic.Issuers {\n\t\t\tif ziss, ok := iss.(*ZeroSSLIssuer); ok {\n\t\t\t\tif ziss.issuer.HandleZeroSSLHTTPValidation(w, r) {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\n// AddAutomationPolicy provisions and adds ap to the list of the app's\n// automation policies. 
If an existing automation policy exists that has\n// fewer hosts in its list than ap does, ap will be inserted before that\n// other policy (this helps ensure that ap will be prioritized/chosen\n// over, say, a catch-all policy).\nfunc (t *TLS) AddAutomationPolicy(ap *AutomationPolicy) error {\n\tif t.Automation == nil {\n\t\tt.Automation = new(AutomationConfig)\n\t}\n\terr := ap.Provision(t)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// sort new automation policies just before any other which is a superset\n\t// of this one; if we find an existing policy that covers every subject in\n\t// ap but less specifically (e.g. a catch-all policy, or one with wildcards\n\t// or with fewer subjects), insert ap just before it, otherwise ap would\n\t// never be used because the first matching policy is more general\n\tfor i, existing := range t.Automation.Policies {\n\t\t// first see if existing is superset of ap for all names\n\t\tvar otherIsSuperset bool\n\touter:\n\t\tfor _, thisSubj := range ap.subjects {\n\t\t\tfor _, otherSubj := range existing.subjects {\n\t\t\t\tif certmagic.MatchWildcard(thisSubj, otherSubj) {\n\t\t\t\t\totherIsSuperset = true\n\t\t\t\t\tbreak outer\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t// if existing AP is a superset or if it contains fewer names (i.e. is\n\t\t// more general), then new AP is more specific, so insert before it\n\t\tif otherIsSuperset || len(existing.SubjectsRaw) < len(ap.SubjectsRaw) {\n\t\t\tt.Automation.Policies = append(t.Automation.Policies[:i],\n\t\t\t\tappend([]*AutomationPolicy{ap}, t.Automation.Policies[i:]...)...)\n\t\t\treturn nil\n\t\t}\n\t}\n\t// otherwise just append the new one\n\tt.Automation.Policies = append(t.Automation.Policies, ap)\n\treturn nil\n}\n\nfunc (t *TLS) getConfigForName(name string) *certmagic.Config {\n\tap := t.getAutomationPolicyForName(name)\n\treturn ap.magic\n}\n\n// getAutomationPolicyForName returns the first matching automation policy\n// for the given subject name. 
If no matching policy can be found, the\n// default policy is used, depending on whether the name qualifies for a\n// public certificate or not.\nfunc (t *TLS) getAutomationPolicyForName(name string) *AutomationPolicy {\n\tfor _, ap := range t.Automation.Policies {\n\t\tif len(ap.subjects) == 0 {\n\t\t\treturn ap // no host filter is an automatic match\n\t\t}\n\t\tfor _, h := range ap.subjects {\n\t\t\tif certmagic.MatchWildcard(name, h) {\n\t\t\t\treturn ap\n\t\t\t}\n\t\t}\n\t}\n\tif certmagic.SubjectQualifiesForPublicCert(name) || t.Automation.defaultInternalAutomationPolicy == nil {\n\t\treturn t.Automation.defaultPublicAutomationPolicy\n\t}\n\treturn t.Automation.defaultInternalAutomationPolicy\n}\n\n// AllMatchingCertificates returns the list of all certificates in\n// the cache which could be used to satisfy the given SAN.\nfunc AllMatchingCertificates(san string) []certmagic.Certificate {\n\treturn certCache.AllMatchingCertificates(san)\n}\n\nfunc (t *TLS) HasCertificateForSubject(subject string) bool {\n\tcertCacheMu.RLock()\n\tallMatchingCerts := certCache.AllMatchingCertificates(subject)\n\tcertCacheMu.RUnlock()\n\tfor _, cert := range allMatchingCerts {\n\t\t// check if the cert is manually loaded by this config\n\t\tif _, ok := t.loaded[cert.Hash()]; ok {\n\t\t\treturn true\n\t\t}\n\t\t// check if the cert is automatically managed by this config\n\t\tfor _, name := range cert.Names {\n\t\t\tif _, ok := t.managing[name]; ok {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\n// keepStorageClean starts a goroutine that immediately cleans up all\n// known storage units if it was not recently done, and then runs the\n// operation at every tick from t.storageCleanTicker.\nfunc (t *TLS) keepStorageClean() {\n\tt.storageCleanTicker = time.NewTicker(t.storageCleanInterval())\n\tt.storageCleanStop = make(chan struct{})\n\tgo func() {\n\t\tdefer func() {\n\t\t\tif err := recover(); err != nil {\n\t\t\t\tlog.Printf(\"[PANIC] storage cleaner: 
%v\\n%s\", err, debug.Stack())\n\t\t\t}\n\t\t}()\n\t\tt.cleanStorageUnits()\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-t.storageCleanStop:\n\t\t\t\treturn\n\t\t\tcase <-t.storageCleanTicker.C:\n\t\t\t\tt.cleanStorageUnits()\n\t\t\t}\n\t\t}\n\t}()\n}\n\nfunc (t *TLS) cleanStorageUnits() {\n\tstorageCleanMu.Lock()\n\tdefer storageCleanMu.Unlock()\n\n\t// TODO: This check might not be needed anymore now that CertMagic syncs\n\t// and throttles storage cleaning globally across the cluster.\n\t// The original comment below might be outdated:\n\t//\n\t// If storage was cleaned recently, don't do it again for now. Although the ticker\n\t// calling this function drops missed ticks for us, config reloads discard the old\n\t// ticker and replace it with a new one, possibly invoking a cleaning to happen again\n\t// too soon. (We divide the interval by 2 because the actual cleaning takes non-zero\n\t// time, and we don't want to skip cleanings if we don't have to; whereas if a cleaning\n\t// took most of the interval, we'd probably want to skip the next one so we aren't\n\t// constantly cleaning. 
This allows cleanings to take up to half the interval's\n\t// duration before we decide to skip the next one.)\n\tif !storageClean.IsZero() && time.Since(storageClean) < t.storageCleanInterval()/2 {\n\t\treturn\n\t}\n\n\tid, err := caddy.InstanceID()\n\tif err != nil {\n\t\tif c := t.logger.Check(zapcore.WarnLevel, \"unable to get instance ID; storage clean stamps will be incomplete\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t}\n\toptions := certmagic.CleanStorageOptions{\n\t\tLogger:                 t.logger,\n\t\tInstanceID:             id.String(),\n\t\tInterval:               t.storageCleanInterval(),\n\t\tOCSPStaples:            true,\n\t\tExpiredCerts:           true,\n\t\tExpiredCertGracePeriod: 24 * time.Hour * 14,\n\t}\n\n\t// start with the default/global storage\n\terr = certmagic.CleanStorage(t.ctx, t.ctx.Storage(), options)\n\tif err != nil {\n\t\t// probably don't want to return early, since we should still\n\t\t// see if any other storages can get cleaned up\n\t\tif c := t.logger.Check(zapcore.ErrorLevel, \"could not clean default/global storage\"); c != nil {\n\t\t\tc.Write(zap.Error(err))\n\t\t}\n\t}\n\n\t// then clean each storage defined in ACME automation policies\n\tif t.Automation != nil {\n\t\tfor _, ap := range t.Automation.Policies {\n\t\t\tif ap.storage == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif err := certmagic.CleanStorage(t.ctx, ap.storage, options); err != nil {\n\t\t\t\tif c := t.logger.Check(zapcore.ErrorLevel, \"could not clean storage configured in automation policy\"); c != nil {\n\t\t\t\t\tc.Write(zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t// remember last time storage was finished cleaning\n\tstorageClean = time.Now()\n\n\tt.logger.Info(\"finished cleaning storage units\")\n}\n\nfunc (t *TLS) storageCleanInterval() time.Duration {\n\tif t.Automation != nil && t.Automation.StorageCleanInterval > 0 {\n\t\treturn time.Duration(t.Automation.StorageCleanInterval)\n\t}\n\treturn 
defaultStorageCleanInterval\n}\n\n// onEvent translates CertMagic events into Caddy events then dispatches them.\nfunc (t *TLS) onEvent(ctx context.Context, eventName string, data map[string]any) error {\n\tevt := t.events.Emit(t.ctx, eventName, data)\n\treturn evt.Aborted\n}\n\n// CertificateLoader is a type that can load certificates.\n// Certificates can optionally be associated with tags.\ntype CertificateLoader interface {\n\tLoadCertificates() ([]Certificate, error)\n}\n\n// Certificate is a TLS certificate, optionally\n// associated with arbitrary tags.\ntype Certificate struct {\n\ttls.Certificate\n\tTags []string\n}\n\n// AutomateLoader will automatically manage certificates for the names in the\n// list, including obtaining and renewing certificates. Automated certificates\n// are managed according to their matching automation policy, configured\n// elsewhere in this app.\n//\n// Technically, this is a no-op certificate loader module that is treated as\n// a special case: it uses this app's automation features to load certificates\n// for the list of hostnames, rather than loading certificates manually. But\n// the end result is the same: certificates for these subject names will be\n// loaded into the in-memory cache and may then be used.\ntype AutomateLoader []string\n\n// CaddyModule returns the Caddy module information.\nfunc (AutomateLoader) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.certificates.automate\",\n\t\tNew: func() caddy.Module { return new(AutomateLoader) },\n\t}\n}\n\n// CertCacheOptions configures the certificate cache.\ntype CertCacheOptions struct {\n\t// Maximum number of certificates to allow in the\n\t// cache. If reached, certificates will be randomly\n\t// evicted to make room for new ones. 
Default: 10,000\n\tCapacity int `json:\"capacity,omitempty\"`\n}\n\n// Variables related to storage cleaning.\nvar (\n\tdefaultStorageCleanInterval = 24 * time.Hour\n\n\tstorageClean   time.Time\n\tstorageCleanMu sync.Mutex\n)\n\n// Interface guards\nvar (\n\t_ caddy.App          = (*TLS)(nil)\n\t_ caddy.Provisioner  = (*TLS)(nil)\n\t_ caddy.Validator    = (*TLS)(nil)\n\t_ caddy.CleanerUpper = (*TLS)(nil)\n)\n"
  },
  {
    "path": "modules/caddytls/values.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"github.com/klauspost/cpuid/v2\"\n)\n\n// CipherSuiteNameSupported returns true if name is\n// a supported cipher suite.\nfunc CipherSuiteNameSupported(name string) bool {\n\treturn CipherSuiteID(name) != 0\n}\n\n// CipherSuiteID returns the ID of the cipher suite associated with\n// the given name, or 0 if the name is not recognized/supported.\nfunc CipherSuiteID(name string) uint16 {\n\tfor _, cs := range SupportedCipherSuites() {\n\t\tif cs.Name == name {\n\t\t\treturn cs.ID\n\t\t}\n\t}\n\treturn 0\n}\n\n// SupportedCipherSuites returns a list of all the cipher suites\n// Caddy supports. 
The list is NOT ordered by security preference.\nfunc SupportedCipherSuites() []*tls.CipherSuite {\n\treturn tls.CipherSuites()\n}\n\n// defaultCipherSuitesWithAESNI is the ordered list of cipher\n// suites we want to support by default, assuming AES-NI\n// (hardware acceleration for AES).\nvar defaultCipherSuitesWithAESNI = []uint16{\n\ttls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,\n\ttls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,\n\ttls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,\n\ttls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,\n\ttls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,\n\ttls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,\n}\n\n// defaultCipherSuitesWithoutAESNI is the ordered list of cipher\n// suites we want to support by default, assuming lack of\n// AES-NI (NO hardware acceleration for AES).\nvar defaultCipherSuitesWithoutAESNI = []uint16{\n\ttls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,\n\ttls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,\n\ttls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,\n\ttls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,\n\ttls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,\n\ttls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,\n}\n\n// getOptimalDefaultCipherSuites returns the appropriate default\n// cipher suite list depending on hardware support for AES.\n//\n// See https://github.com/caddyserver/caddy/issues/1674\nfunc getOptimalDefaultCipherSuites() []uint16 {\n\tif cpuid.CPU.Supports(cpuid.AESNI) {\n\t\treturn defaultCipherSuitesWithAESNI\n\t}\n\treturn defaultCipherSuitesWithoutAESNI\n}\n\n// SupportedCurves is the unordered map of supported curves\n// or key exchange mechanisms (\"curves\" traditionally).\n// https://golang.org/pkg/crypto/tls/#CurveID\nvar SupportedCurves = map[string]tls.CurveID{\n\t\"x25519mlkem768\": tls.X25519MLKEM768,\n\t\"x25519\":         tls.X25519,\n\t\"secp256r1\":      tls.CurveP256,\n\t\"secp384r1\":      tls.CurveP384,\n\t\"secp521r1\":      tls.CurveP521,\n}\n\n// supportedCertKeyTypes is all the key types that are supported\n// for certificates that are 
obtained through ACME.\nvar supportedCertKeyTypes = map[string]certmagic.KeyType{\n\t\"rsa2048\": certmagic.RSA2048,\n\t\"rsa4096\": certmagic.RSA4096,\n\t\"p256\":    certmagic.P256,\n\t\"p384\":    certmagic.P384,\n\t\"ed25519\": certmagic.ED25519,\n}\n\n// defaultCurves is the list of only the curves or key exchange\n// mechanisms we want to use by default. The order is irrelevant.\n//\n// This list should only include mechanisms which are fast by\n// design (e.g. X25519) and those for which an optimized assembly\n// implementation exists (e.g. P256). The latter ones can be\n// found here:\n// https://github.com/golang/go/tree/master/src/crypto/elliptic\nvar defaultCurves = []tls.CurveID{\n\ttls.X25519MLKEM768,\n\ttls.X25519,\n\ttls.CurveP256,\n}\n\n// SupportedProtocols is a map of supported protocols.\nvar SupportedProtocols = map[string]uint16{\n\t\"tls1.2\": tls.VersionTLS12,\n\t\"tls1.3\": tls.VersionTLS13,\n}\n\n// unsupportedProtocols is a map of unsupported protocols.\n// Used for logging only, not enforcement.\nvar unsupportedProtocols = map[string]uint16{\n\t//nolint:staticcheck\n\t\"ssl3.0\": tls.VersionSSL30,\n\t\"tls1.0\": tls.VersionTLS10,\n\t\"tls1.1\": tls.VersionTLS11,\n}\n\n// publicKeyAlgorithms is the map of supported public key algorithms.\nvar publicKeyAlgorithms = map[string]x509.PublicKeyAlgorithm{\n\t\"rsa\":   x509.RSA,\n\t\"dsa\":   x509.DSA,\n\t\"ecdsa\": x509.ECDSA,\n}\n\n// ProtocolName returns the standard name for the passed protocol version ID\n// (e.g.  \"TLS1.3\") or a fallback representation of the ID value if the version\n// is not supported.\nfunc ProtocolName(id uint16) string {\n\tfor k, v := range SupportedProtocols {\n\t\tif v == id {\n\t\t\treturn k\n\t\t}\n\t}\n\n\tfor k, v := range unsupportedProtocols {\n\t\tif v == id {\n\t\t\treturn k\n\t\t}\n\t}\n\n\treturn fmt.Sprintf(\"0x%04x\", id)\n}\n"
  },
  {
    "path": "modules/caddytls/zerosslissuer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddytls\n\nimport (\n\t\"context\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(new(ZeroSSLIssuer))\n}\n\n// ZeroSSLIssuer uses the ZeroSSL API to get certificates.\n// Note that this is distinct from ZeroSSL's ACME endpoint.\n// To use ZeroSSL's ACME endpoint, use the ACMEIssuer\n// configured with ZeroSSL's ACME directory endpoint.\ntype ZeroSSLIssuer struct {\n\t// The API key (or \"access key\") for using the ZeroSSL API.\n\t// REQUIRED.\n\tAPIKey string `json:\"api_key,omitempty\"` //nolint:gosec // false positive... 
yes this is exported, for JSON interop\n\n\t// How many days the certificate should be valid for.\n\t// Only certain values are accepted; see ZeroSSL docs.\n\tValidityDays int `json:\"validity_days,omitempty\"`\n\n\t// The host to bind to when opening a listener for\n\t// verifying domain names (or IPs).\n\tListenHost string `json:\"listen_host,omitempty\"`\n\n\t// If HTTP is forwarded from port 80, specify the\n\t// forwarded port here.\n\tAlternateHTTPPort int `json:\"alternate_http_port,omitempty\"`\n\n\t// Use CNAME validation instead of HTTP. ZeroSSL's\n\t// API uses CNAME records for DNS validation, similar\n\t// to how Let's Encrypt uses TXT records for the\n\t// DNS challenge.\n\tCNAMEValidation *DNSChallengeConfig `json:\"cname_validation,omitempty\"`\n\n\tlogger  *zap.Logger\n\tstorage certmagic.Storage\n\tissuer  *certmagic.ZeroSSLIssuer\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (*ZeroSSLIssuer) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"tls.issuance.zerossl\",\n\t\tNew: func() caddy.Module { return new(ZeroSSLIssuer) },\n\t}\n}\n\n// Provision sets up the issuer.\nfunc (iss *ZeroSSLIssuer) Provision(ctx caddy.Context) error {\n\tiss.logger = ctx.Logger()\n\tiss.storage = ctx.Storage()\n\trepl := caddy.NewReplacer()\n\n\tvar dnsManager *certmagic.DNSManager\n\tif iss.CNAMEValidation != nil && len(iss.CNAMEValidation.ProviderRaw) > 0 {\n\t\tval, err := ctx.LoadModule(iss.CNAMEValidation, \"ProviderRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading DNS provider module: %v\", err)\n\t\t}\n\t\tdnsManager = &certmagic.DNSManager{\n\t\t\tDNSProvider:        val.(certmagic.DNSProvider),\n\t\t\tTTL:                time.Duration(iss.CNAMEValidation.TTL),\n\t\t\tPropagationDelay:   time.Duration(iss.CNAMEValidation.PropagationDelay),\n\t\t\tPropagationTimeout: time.Duration(iss.CNAMEValidation.PropagationTimeout),\n\t\t\tResolvers:          iss.CNAMEValidation.Resolvers,\n\t\t\tOverrideDomain:     
iss.CNAMEValidation.OverrideDomain,\n\t\t\tLogger:             iss.logger.Named(\"cname\"),\n\t\t}\n\t}\n\n\tiss.issuer = &certmagic.ZeroSSLIssuer{\n\t\tAPIKey:          repl.ReplaceAll(iss.APIKey, \"\"),\n\t\tValidityDays:    iss.ValidityDays,\n\t\tListenHost:      iss.ListenHost,\n\t\tAltHTTPPort:     iss.AlternateHTTPPort,\n\t\tStorage:         iss.storage,\n\t\tCNAMEValidation: dnsManager,\n\t\tLogger:          iss.logger,\n\t}\n\n\treturn nil\n}\n\n// Issue obtains a certificate for the given csr.\nfunc (iss *ZeroSSLIssuer) Issue(ctx context.Context, csr *x509.CertificateRequest) (*certmagic.IssuedCertificate, error) {\n\treturn iss.issuer.Issue(ctx, csr)\n}\n\n// IssuerKey returns the unique issuer key for the configured CA endpoint.\nfunc (iss *ZeroSSLIssuer) IssuerKey() string {\n\treturn iss.issuer.IssuerKey()\n}\n\n// Revoke revokes the given certificate.\nfunc (iss *ZeroSSLIssuer) Revoke(ctx context.Context, cert certmagic.CertificateResource, reason int) error {\n\treturn iss.issuer.Revoke(ctx, cert, reason)\n}\n\n// UnmarshalCaddyfile deserializes Caddyfile tokens into iss.\n//\n//\t... 
zerossl <api_key> {\n//\t\t    validity_days <days>\n//\t\t    alt_http_port <port>\n//\t\t    dns <provider_name> ...\n//\t\t    propagation_delay <duration>\n//\t\t    propagation_timeout <duration>\n//\t\t    resolvers <list...>\n//\t\t    dns_ttl <duration>\n//\t}\nfunc (iss *ZeroSSLIssuer) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume issuer name\n\n\t// API key is required\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tiss.APIKey = d.Val()\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\tswitch d.Val() {\n\t\tcase \"validity_days\":\n\t\t\tif iss.ValidityDays != 0 {\n\t\t\t\treturn d.Errf(\"validity days is already specified: %d\", iss.ValidityDays)\n\t\t\t}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdays, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid number of days %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tiss.ValidityDays = days\n\n\t\tcase \"alt_http_port\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tport, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid port %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tiss.AlternateHTTPPort = port\n\n\t\tcase \"dns\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tprovName := d.Val()\n\t\t\tif iss.CNAMEValidation == nil {\n\t\t\t\tiss.CNAMEValidation = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, \"dns.providers.\"+provName)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiss.CNAMEValidation.ProviderRaw = caddyconfig.JSONModuleObject(unm, \"name\", provName, nil)\n\n\t\tcase \"propagation_delay\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tdelayStr := d.Val()\n\t\t\tdelay, err := caddy.ParseDuration(delayStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid propagation_delay duration %s: %v\", delayStr, err)\n\t\t\t}\n\t\t\tif iss.CNAMEValidation == nil 
{\n\t\t\t\tiss.CNAMEValidation = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.CNAMEValidation.PropagationDelay = caddy.Duration(delay)\n\n\t\tcase \"propagation_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttimeoutStr := d.Val()\n\t\t\tvar timeout time.Duration\n\t\t\tif timeoutStr == \"-1\" {\n\t\t\t\ttimeout = time.Duration(-1)\n\t\t\t} else {\n\t\t\t\tvar err error\n\t\t\t\ttimeout, err = caddy.ParseDuration(timeoutStr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn d.Errf(\"invalid propagation_timeout duration %s: %v\", timeoutStr, err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif iss.CNAMEValidation == nil {\n\t\t\t\tiss.CNAMEValidation = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.CNAMEValidation.PropagationTimeout = caddy.Duration(timeout)\n\n\t\tcase \"resolvers\":\n\t\t\tif iss.CNAMEValidation == nil {\n\t\t\t\tiss.CNAMEValidation = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.CNAMEValidation.Resolvers = d.RemainingArgs()\n\t\t\tif len(iss.CNAMEValidation.Resolvers) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"dns_ttl\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tttlStr := d.Val()\n\t\t\tttl, err := caddy.ParseDuration(ttlStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid dns_ttl duration %s: %v\", ttlStr, err)\n\t\t\t}\n\t\t\tif iss.CNAMEValidation == nil {\n\t\t\t\tiss.CNAMEValidation = new(DNSChallengeConfig)\n\t\t\t}\n\t\t\tiss.CNAMEValidation.TTL = caddy.Duration(ttl)\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized zerossl issuer property: %s\", d.Val())\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ certmagic.Issuer  = (*ZeroSSLIssuer)(nil)\n\t_ certmagic.Revoker = (*ZeroSSLIssuer)(nil)\n\t_ caddy.Provisioner = (*ZeroSSLIssuer)(nil)\n)\n"
  },
  {
    "path": "modules/filestorage/filestorage.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage filestorage\n\nimport (\n\t\"github.com/caddyserver/certmagic\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(FileStorage{})\n}\n\n// FileStorage is a certmagic.Storage wrapper for certmagic.FileStorage.\ntype FileStorage struct {\n\t// The base path to the folder used for storage.\n\tRoot string `json:\"root,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (FileStorage) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.storage.file_system\",\n\t\tNew: func() caddy.Module { return new(FileStorage) },\n\t}\n}\n\n// CertMagicStorage converts s to a certmagic.Storage instance.\nfunc (s FileStorage) CertMagicStorage() (certmagic.Storage, error) {\n\treturn &certmagic.FileStorage{Path: s.Root}, nil\n}\n\n// UnmarshalCaddyfile sets up the storage module from Caddyfile tokens.\nfunc (s *FileStorage) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tif !d.Next() {\n\t\treturn d.Err(\"expected tokens\")\n\t}\n\tif d.NextArg() {\n\t\ts.Root = d.Val()\n\t}\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"root\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tif s.Root != \"\" {\n\t\t\t\treturn 
d.Err(\"root already set\")\n\t\t\t}\n\t\t\ts.Root = d.Val()\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized parameter '%s'\", d.Val())\n\t\t}\n\t}\n\tif s.Root == \"\" {\n\t\treturn d.Err(\"missing root path (to use default, omit storage config entirely)\")\n\t}\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.StorageConverter = (*FileStorage)(nil)\n\t_ caddyfile.Unmarshaler  = (*FileStorage)(nil)\n)\n"
  },
  {
    "path": "modules/internal/network/networkproxy.go",
    "content": "package network\n\nimport (\n\t\"errors\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(ProxyFromURL{})\n\tcaddy.RegisterModule(ProxyFromNone{})\n}\n\n// The \"url\" proxy source uses the defined URL as the proxy\ntype ProxyFromURL struct {\n\tURL string `json:\"url\"`\n\n\tctx    caddy.Context\n\tlogger *zap.Logger\n}\n\n// CaddyModule implements Module.\nfunc (p ProxyFromURL) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"caddy.network_proxy.url\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn &ProxyFromURL{}\n\t\t},\n\t}\n}\n\nfunc (p *ProxyFromURL) Provision(ctx caddy.Context) error {\n\tp.ctx = ctx\n\tp.logger = ctx.Logger()\n\treturn nil\n}\n\n// Validate implements Validator.\nfunc (p ProxyFromURL) Validate() error {\n\tif _, err := url.Parse(p.URL); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// ProxyFunc implements ProxyFuncProducer.\nfunc (p ProxyFromURL) ProxyFunc() func(*http.Request) (*url.URL, error) {\n\tif strings.Contains(p.URL, \"{\") && strings.Contains(p.URL, \"}\") {\n\t\t// courtesy of @ImpostorKeanu: https://github.com/caddyserver/caddy/pull/6397\n\t\treturn func(r *http.Request) (*url.URL, error) {\n\t\t\t// retrieve the replacer from context.\n\t\t\trepl, ok := r.Context().Value(caddy.ReplacerCtxKey).(*caddy.Replacer)\n\t\t\tif !ok {\n\t\t\t\terr := errors.New(\"failed to obtain replacer from request\")\n\t\t\t\tp.logger.Error(err.Error())\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\t// apply placeholders to the value\n\t\t\t// note: h.ForwardProxyURL should never be empty at this point\n\t\t\ts := repl.ReplaceAll(p.URL, \"\")\n\t\t\tif s == \"\" {\n\t\t\t\tp.logger.Error(\"network_proxy URL was empty after applying placeholders\",\n\t\t\t\t\tzap.String(\"initial_value\", p.URL),\n\t\t\t\t\tzap.String(\"final_value\", 
s),\n\t\t\t\t\tzap.String(\"hint\", \"check for invalid placeholders\"))\n\t\t\t\treturn nil, errors.New(\"empty value for network_proxy URL\")\n\t\t\t}\n\n\t\t\t// parse the url\n\t\t\tpUrl, err := url.Parse(s)\n\t\t\tif err != nil {\n\t\t\t\tp.logger.Warn(\"failed to derive transport proxy from network_proxy URL\")\n\t\t\t\tpUrl = nil\n\t\t\t} else if pUrl.Host == \"\" || strings.HasPrefix(pUrl.Host, \":\") {\n\t\t\t\t// url.Parse does not return an error on these values:\n\t\t\t\t//\n\t\t\t\t// - http://:80\n\t\t\t\t//   - pUrl.Host == \":80\"\n\t\t\t\t// - /some/path\n\t\t\t\t//   - pUrl.Host == \"\"\n\t\t\t\t//\n\t\t\t\t// Super edge cases, but humans are human.\n\t\t\t\terr = errors.New(\"supplied network_proxy URL is missing a host value\")\n\t\t\t\tpUrl = nil\n\t\t\t} else {\n\t\t\t\tp.logger.Debug(\"setting transport proxy url\", zap.String(\"url\", s))\n\t\t\t}\n\n\t\t\treturn pUrl, err\n\t\t}\n\t}\n\treturn func(r *http.Request) (*url.URL, error) {\n\t\treturn url.Parse(p.URL)\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (p *ProxyFromURL) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next()\n\td.Next()\n\tp.URL = d.Val()\n\treturn nil\n}\n\n// The \"none\" proxy source module disables the use of network proxy.\ntype ProxyFromNone struct{}\n\nfunc (p ProxyFromNone) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID: \"caddy.network_proxy.none\",\n\t\tNew: func() caddy.Module {\n\t\t\treturn &ProxyFromNone{}\n\t\t},\n\t}\n}\n\n// UnmarshalCaddyfile implements caddyfile.Unmarshaler.\nfunc (p ProxyFromNone) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\treturn nil\n}\n\n// ProxyFunc implements ProxyFuncProducer.\nfunc (p ProxyFromNone) ProxyFunc() func(*http.Request) (*url.URL, error) {\n\treturn nil\n}\n\nvar (\n\t_ caddy.Module            = ProxyFromURL{}\n\t_ caddy.Provisioner       = (*ProxyFromURL)(nil)\n\t_ caddy.Validator         = ProxyFromURL{}\n\t_ caddy.ProxyFuncProducer = 
ProxyFromURL{}\n\t_ caddyfile.Unmarshaler   = (*ProxyFromURL)(nil)\n\n\t_ caddy.Module            = ProxyFromNone{}\n\t_ caddy.ProxyFuncProducer = ProxyFromNone{}\n\t_ caddyfile.Unmarshaler   = ProxyFromNone{}\n)\n"
  },
  {
    "path": "modules/logging/appendencoder.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/buffer\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/term\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(AppendEncoder{})\n}\n\n// AppendEncoder can be used to add fields to all log entries\n// that pass through it. It is a wrapper around another\n// encoder, which it uses to actually encode the log entries.\n// It is most useful for adding information about the Caddy\n// instance that is producing the log entries, possibly via\n// an environment variable.\ntype AppendEncoder struct {\n\t// The underlying encoder that actually encodes the\n\t// log entries. If not specified, defaults to \"json\",\n\t// unless the output is a terminal, in which case\n\t// it defaults to \"console\".\n\tWrappedRaw json.RawMessage `json:\"wrap,omitempty\" caddy:\"namespace=caddy.logging.encoders inline_key=format\"`\n\n\t// A map of field names to their values. The values\n\t// can be global placeholders (e.g. 
env vars), or constants.\n\t// Note that the encoder does not run as part of an HTTP\n\t// request context, so request placeholders are not available.\n\tFields map[string]any `json:\"fields,omitempty\"`\n\n\twrapped zapcore.Encoder\n\trepl    *caddy.Replacer\n\n\twrappedIsDefault bool\n\tctx              caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (AppendEncoder) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.append\",\n\t\tNew: func() caddy.Module { return new(AppendEncoder) },\n\t}\n}\n\n// Provision sets up the encoder.\nfunc (fe *AppendEncoder) Provision(ctx caddy.Context) error {\n\tfe.ctx = ctx\n\tfe.repl = caddy.NewReplacer()\n\n\tif fe.WrappedRaw == nil {\n\t\t// if wrap is not specified, default to JSON\n\t\tfe.wrapped = &JSONEncoder{}\n\t\tif p, ok := fe.wrapped.(caddy.Provisioner); ok {\n\t\t\tif err := p.Provision(ctx); err != nil {\n\t\t\t\treturn fmt.Errorf(\"provisioning fallback encoder module: %v\", err)\n\t\t\t}\n\t\t}\n\t\tfe.wrappedIsDefault = true\n\t} else {\n\t\t// set up wrapped encoder\n\t\tval, err := ctx.LoadModule(fe, \"WrappedRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading fallback encoder module: %v\", err)\n\t\t}\n\t\tfe.wrapped = val.(zapcore.Encoder)\n\t}\n\n\treturn nil\n}\n\n// ConfigureDefaultFormat will set the default format to \"console\"\n// if the writer is a terminal. 
If already configured, it passes\n// through the writer so a deeply nested encoder can configure\n// its own default format.\nfunc (fe *AppendEncoder) ConfigureDefaultFormat(wo caddy.WriterOpener) error {\n\tif !fe.wrappedIsDefault {\n\t\tif cfd, ok := fe.wrapped.(caddy.ConfiguresFormatterDefault); ok {\n\t\t\treturn cfd.ConfigureDefaultFormat(wo)\n\t\t}\n\t\treturn nil\n\t}\n\n\tif caddy.IsWriterStandardStream(wo) && term.IsTerminal(int(os.Stderr.Fd())) {\n\t\tfe.wrapped = &ConsoleEncoder{}\n\t\tif p, ok := fe.wrapped.(caddy.Provisioner); ok {\n\t\t\tif err := p.Provision(fe.ctx); err != nil {\n\t\t\t\treturn fmt.Errorf(\"provisioning fallback encoder module: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. Syntax:\n//\n//\tappend {\n//\t    wrap <another encoder>\n//\t    fields {\n//\t        <field> <value>\n//\t    }\n//\t    <field> <value>\n//\t}\nfunc (fe *AppendEncoder) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume encoder name\n\n\t// parse a field\n\tparseField := func() error {\n\t\tif fe.Fields == nil {\n\t\t\tfe.Fields = make(map[string]any)\n\t\t}\n\t\tfield := d.Val()\n\t\tif !d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tfe.Fields[field] = d.ScalarVal()\n\t\tif d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\treturn nil\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"wrap\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmoduleName := d.Val()\n\t\t\tmoduleID := \"caddy.logging.encoders.\" + moduleName\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, moduleID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tenc, ok := unm.(zapcore.Encoder)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s (%T) is not a zapcore.Encoder\", moduleID, unm)\n\t\t\t}\n\t\t\tfe.WrappedRaw = caddyconfig.JSONModuleObject(enc, \"format\", moduleName, nil)\n\n\t\tcase \"fields\":\n\t\t\tfor nesting := d.Nesting(); 
d.NextBlock(nesting); {\n\t\t\t\terr := parseField()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\n\t\tdefault:\n\t\t\t// if unknown, assume it's a field so that\n\t\t\t// the config can be flat\n\t\t\terr := parseField()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// AddArray is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddArray(key string, marshaler zapcore.ArrayMarshaler) error {\n\treturn fe.wrapped.AddArray(key, marshaler)\n}\n\n// AddObject is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddObject(key string, marshaler zapcore.ObjectMarshaler) error {\n\treturn fe.wrapped.AddObject(key, marshaler)\n}\n\n// AddBinary is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddBinary(key string, value []byte) {\n\tfe.wrapped.AddBinary(key, value)\n}\n\n// AddByteString is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddByteString(key string, value []byte) {\n\tfe.wrapped.AddByteString(key, value)\n}\n\n// AddBool is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddBool(key string, value bool) {\n\tfe.wrapped.AddBool(key, value)\n}\n\n// AddComplex128 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddComplex128(key string, value complex128) {\n\tfe.wrapped.AddComplex128(key, value)\n}\n\n// AddComplex64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddComplex64(key string, value complex64) {\n\tfe.wrapped.AddComplex64(key, value)\n}\n\n// AddDuration is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddDuration(key string, value time.Duration) {\n\tfe.wrapped.AddDuration(key, value)\n}\n\n// AddFloat64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddFloat64(key string, value float64) {\n\tfe.wrapped.AddFloat64(key, value)\n}\n\n// AddFloat32 is part of the zapcore.ObjectEncoder 
interface.\nfunc (fe AppendEncoder) AddFloat32(key string, value float32) {\n\tfe.wrapped.AddFloat32(key, value)\n}\n\n// AddInt is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddInt(key string, value int) {\n\tfe.wrapped.AddInt(key, value)\n}\n\n// AddInt64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddInt64(key string, value int64) {\n\tfe.wrapped.AddInt64(key, value)\n}\n\n// AddInt32 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddInt32(key string, value int32) {\n\tfe.wrapped.AddInt32(key, value)\n}\n\n// AddInt16 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddInt16(key string, value int16) {\n\tfe.wrapped.AddInt16(key, value)\n}\n\n// AddInt8 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddInt8(key string, value int8) {\n\tfe.wrapped.AddInt8(key, value)\n}\n\n// AddString is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddString(key, value string) {\n\tfe.wrapped.AddString(key, value)\n}\n\n// AddTime is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddTime(key string, value time.Time) {\n\tfe.wrapped.AddTime(key, value)\n}\n\n// AddUint is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddUint(key string, value uint) {\n\tfe.wrapped.AddUint(key, value)\n}\n\n// AddUint64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddUint64(key string, value uint64) {\n\tfe.wrapped.AddUint64(key, value)\n}\n\n// AddUint32 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddUint32(key string, value uint32) {\n\tfe.wrapped.AddUint32(key, value)\n}\n\n// AddUint16 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddUint16(key string, value uint16) {\n\tfe.wrapped.AddUint16(key, value)\n}\n\n// AddUint8 is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddUint8(key 
string, value uint8) {\n\tfe.wrapped.AddUint8(key, value)\n}\n\n// AddUintptr is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddUintptr(key string, value uintptr) {\n\tfe.wrapped.AddUintptr(key, value)\n}\n\n// AddReflected is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) AddReflected(key string, value any) error {\n\treturn fe.wrapped.AddReflected(key, value)\n}\n\n// OpenNamespace is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) OpenNamespace(key string) {\n\tfe.wrapped.OpenNamespace(key)\n}\n\n// Clone is part of the zapcore.ObjectEncoder interface.\nfunc (fe AppendEncoder) Clone() zapcore.Encoder {\n\treturn AppendEncoder{\n\t\tFields:  fe.Fields,\n\t\twrapped: fe.wrapped.Clone(),\n\t\trepl:    fe.repl,\n\t}\n}\n\n// EncodeEntry partially implements the zapcore.Encoder interface.\nfunc (fe AppendEncoder) EncodeEntry(ent zapcore.Entry, fields []zapcore.Field) (*buffer.Buffer, error) {\n\tfe.wrapped = fe.wrapped.Clone()\n\tfor _, field := range fields {\n\t\tfield.AddTo(fe)\n\t}\n\n\t// append fields from config\n\tfor key, value := range fe.Fields {\n\t\t// if the value is a string\n\t\tif str, ok := value.(string); ok {\n\t\t\tisPlaceholder := strings.HasPrefix(str, \"{\") &&\n\t\t\t\tstrings.HasSuffix(str, \"}\") &&\n\t\t\t\tstrings.Count(str, \"{\") == 1\n\t\t\tif isPlaceholder {\n\t\t\t\t// and it looks like a placeholder, evaluate it\n\t\t\t\treplaced, _ := fe.repl.Get(strings.Trim(str, \"{}\"))\n\t\t\t\tzap.Any(key, replaced).AddTo(fe)\n\t\t\t} else {\n\t\t\t\t// just use the string as-is\n\t\t\t\tzap.String(key, str).AddTo(fe)\n\t\t\t}\n\t\t} else {\n\t\t\t// not a string, so use the value as any\n\t\t\tzap.Any(key, value).AddTo(fe)\n\t\t}\n\t}\n\n\treturn fe.wrapped.EncodeEntry(ent, nil)\n}\n\n// Interface guards\nvar (\n\t_ zapcore.Encoder                  = (*AppendEncoder)(nil)\n\t_ caddyfile.Unmarshaler            = (*AppendEncoder)(nil)\n\t_ caddy.ConfiguresFormatterDefault = 
(*AppendEncoder)(nil)\n)\n"
  },
  {
    "path": "modules/logging/cores.go",
    "content": "package logging\n\nimport (\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(MockCore{})\n}\n\n// MockCore is a no-op module, purely for testing\ntype MockCore struct {\n\tzapcore.Core `json:\"-\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (MockCore) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.cores.mock\",\n\t\tNew: func() caddy.Module { return new(MockCore) },\n\t}\n}\n\nfunc (lec *MockCore) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ zapcore.Core          = (*MockCore)(nil)\n\t_ caddy.Module          = (*MockCore)(nil)\n\t_ caddyfile.Unmarshaler = (*MockCore)(nil)\n)\n"
  },
  {
    "path": "modules/logging/encoders.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/buffer\"\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(ConsoleEncoder{})\n\tcaddy.RegisterModule(JSONEncoder{})\n}\n\n// ConsoleEncoder encodes log entries that are mostly human-readable.\ntype ConsoleEncoder struct {\n\tzapcore.Encoder `json:\"-\"`\n\tLogEncoderConfig\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (ConsoleEncoder) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.console\",\n\t\tNew: func() caddy.Module { return new(ConsoleEncoder) },\n\t}\n}\n\n// Provision sets up the encoder.\nfunc (ce *ConsoleEncoder) Provision(_ caddy.Context) error {\n\tif ce.LevelFormat == \"\" {\n\t\tce.LevelFormat = \"color\"\n\t}\n\tif ce.TimeFormat == \"\" {\n\t\tce.TimeFormat = \"wall_milli\"\n\t}\n\tce.Encoder = zapcore.NewConsoleEncoder(ce.ZapcoreEncoderConfig())\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. 
Syntax:\n//\n//\tconsole {\n//\t    <common encoder config subdirectives...>\n//\t}\n//\n// See the godoc on the LogEncoderConfig type for the syntax of\n// subdirectives that are common to most/all encoders.\nfunc (ce *ConsoleEncoder) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume encoder name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\terr := ce.LogEncoderConfig.UnmarshalCaddyfile(d)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// JSONEncoder encodes entries as JSON.\ntype JSONEncoder struct {\n\tzapcore.Encoder `json:\"-\"`\n\tLogEncoderConfig\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (JSONEncoder) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.json\",\n\t\tNew: func() caddy.Module { return new(JSONEncoder) },\n\t}\n}\n\n// Provision sets up the encoder.\nfunc (je *JSONEncoder) Provision(_ caddy.Context) error {\n\tje.Encoder = zapcore.NewJSONEncoder(je.ZapcoreEncoderConfig())\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. 
Syntax:\n//\n//\tjson {\n//\t    <common encoder config subdirectives...>\n//\t}\n//\n// See the godoc on the LogEncoderConfig type for the syntax of\n// subdirectives that are common to most/all encoders.\nfunc (je *JSONEncoder) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume encoder name\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\terr := je.LogEncoderConfig.UnmarshalCaddyfile(d)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// LogEncoderConfig holds configuration common to most encoders.\ntype LogEncoderConfig struct {\n\tMessageKey    *string `json:\"message_key,omitempty\"`\n\tLevelKey      *string `json:\"level_key,omitempty\"`\n\tTimeKey       *string `json:\"time_key,omitempty\"`\n\tNameKey       *string `json:\"name_key,omitempty\"`\n\tCallerKey     *string `json:\"caller_key,omitempty\"`\n\tStacktraceKey *string `json:\"stacktrace_key,omitempty\"`\n\tLineEnding    *string `json:\"line_ending,omitempty\"`\n\n\t// Recognized values are: unix_seconds_float, unix_milli_float, unix_nano, iso8601, rfc3339, rfc3339_nano, wall, wall_milli, wall_nano, common_log.\n\t// The value may also be a custom format per the Go `time` package layout specification, as described [here](https://pkg.go.dev/time#pkg-constants).\n\tTimeFormat string `json:\"time_format,omitempty\"`\n\tTimeLocal  bool   `json:\"time_local,omitempty\"`\n\n\t// Recognized values are: s/second/seconds, ns/nano/nanos, ms/milli/millis, string.\n\t// Empty and unrecognized values default to seconds.\n\tDurationFormat string `json:\"duration_format,omitempty\"`\n\n\t// Recognized values are: lower, upper, color.\n\t// Empty and unrecognized values default to lower.\n\tLevelFormat string `json:\"level_format,omitempty\"`\n}\n\n// UnmarshalCaddyfile populates the struct from Caddyfile tokens. 
Syntax:\n//\n//\t{\n//\t    message_key     <key>\n//\t    level_key       <key>\n//\t    time_key        <key>\n//\t    name_key        <key>\n//\t    caller_key      <key>\n//\t    stacktrace_key  <key>\n//\t    line_ending     <char>\n//\t    time_format     <format>\n//\t    time_local\n//\t    duration_format <format>\n//\t    level_format    <format>\n//\t}\nfunc (lec *LogEncoderConfig) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\tfor d.NextBlock(0) {\n\t\tsubdir := d.Val()\n\t\tswitch subdir {\n\t\tcase \"time_local\":\n\t\t\tlec.TimeLocal = true\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tvar arg string\n\t\tif !d.AllArgs(&arg) {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tswitch subdir {\n\t\tcase \"message_key\":\n\t\t\tlec.MessageKey = &arg\n\t\tcase \"level_key\":\n\t\t\tlec.LevelKey = &arg\n\t\tcase \"time_key\":\n\t\t\tlec.TimeKey = &arg\n\t\tcase \"name_key\":\n\t\t\tlec.NameKey = &arg\n\t\tcase \"caller_key\":\n\t\t\tlec.CallerKey = &arg\n\t\tcase \"stacktrace_key\":\n\t\t\tlec.StacktraceKey = &arg\n\t\tcase \"line_ending\":\n\t\t\tlec.LineEnding = &arg\n\t\tcase \"time_format\":\n\t\t\tlec.TimeFormat = arg\n\t\tcase \"duration_format\":\n\t\t\tlec.DurationFormat = arg\n\t\tcase \"level_format\":\n\t\t\tlec.LevelFormat = arg\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", subdir)\n\t\t}\n\t}\n\treturn nil\n}\n\n// ZapcoreEncoderConfig returns the equivalent zapcore.EncoderConfig.\n// If lec is nil, zap.NewProductionEncoderConfig() is returned.\nfunc (lec *LogEncoderConfig) ZapcoreEncoderConfig() zapcore.EncoderConfig {\n\tcfg := zap.NewProductionEncoderConfig()\n\tif lec == nil {\n\t\tlec = new(LogEncoderConfig)\n\t}\n\tif lec.MessageKey != nil {\n\t\tcfg.MessageKey = *lec.MessageKey\n\t}\n\tif lec.LevelKey != nil {\n\t\tcfg.LevelKey = *lec.LevelKey\n\t}\n\tif lec.TimeKey != nil {\n\t\tcfg.TimeKey = *lec.TimeKey\n\t}\n\tif lec.NameKey != nil {\n\t\tcfg.NameKey = 
*lec.NameKey\n\t}\n\tif lec.CallerKey != nil {\n\t\tcfg.CallerKey = *lec.CallerKey\n\t}\n\tif lec.StacktraceKey != nil {\n\t\tcfg.StacktraceKey = *lec.StacktraceKey\n\t}\n\tif lec.LineEnding != nil {\n\t\tcfg.LineEnding = *lec.LineEnding\n\t}\n\n\t// time format\n\tvar timeFormatter zapcore.TimeEncoder\n\tswitch lec.TimeFormat {\n\tcase \"\", \"unix_seconds_float\":\n\t\ttimeFormatter = zapcore.EpochTimeEncoder\n\tcase \"unix_milli_float\":\n\t\ttimeFormatter = zapcore.EpochMillisTimeEncoder\n\tcase \"unix_nano\":\n\t\ttimeFormatter = zapcore.EpochNanosTimeEncoder\n\tcase \"iso8601\":\n\t\ttimeFormatter = zapcore.ISO8601TimeEncoder\n\tdefault:\n\t\ttimeFormat := lec.TimeFormat\n\t\tswitch lec.TimeFormat {\n\t\tcase \"rfc3339\":\n\t\t\ttimeFormat = time.RFC3339\n\t\tcase \"rfc3339_nano\":\n\t\t\ttimeFormat = time.RFC3339Nano\n\t\tcase \"wall\":\n\t\t\ttimeFormat = \"2006/01/02 15:04:05\"\n\t\tcase \"wall_milli\":\n\t\t\ttimeFormat = \"2006/01/02 15:04:05.000\"\n\t\tcase \"wall_nano\":\n\t\t\ttimeFormat = \"2006/01/02 15:04:05.000000000\"\n\t\tcase \"common_log\":\n\t\t\ttimeFormat = \"02/Jan/2006:15:04:05 -0700\"\n\t\t}\n\t\ttimeFormatter = func(ts time.Time, encoder zapcore.PrimitiveArrayEncoder) {\n\t\t\tt := ts.UTC()\n\t\t\tif lec.TimeLocal {\n\t\t\t\tt = ts.Local()\n\t\t\t}\n\t\t\tencoder.AppendString(t.Format(timeFormat))\n\t\t}\n\t}\n\tcfg.EncodeTime = timeFormatter\n\n\t// duration format\n\tvar durFormatter zapcore.DurationEncoder\n\tswitch lec.DurationFormat {\n\tcase \"s\", \"second\", \"seconds\":\n\t\tdurFormatter = zapcore.SecondsDurationEncoder\n\tcase \"ns\", \"nano\", \"nanos\":\n\t\tdurFormatter = zapcore.NanosDurationEncoder\n\tcase \"ms\", \"milli\", \"millis\":\n\t\tdurFormatter = zapcore.MillisDurationEncoder\n\tcase \"string\":\n\t\tdurFormatter = zapcore.StringDurationEncoder\n\tdefault:\n\t\tdurFormatter = zapcore.SecondsDurationEncoder\n\t}\n\tcfg.EncodeDuration = durFormatter\n\n\t// level 
format\n\tvar levelFormatter zapcore.LevelEncoder\n\tswitch lec.LevelFormat {\n\tcase \"upper\":\n\t\tlevelFormatter = zapcore.CapitalLevelEncoder\n\tcase \"color\":\n\t\tlevelFormatter = zapcore.CapitalColorLevelEncoder\n\tdefault:\n\t\t// empty and unrecognized values fall back to lowercase, as\n\t\t// documented on LevelFormat; leaving levelFormatter nil here\n\t\t// would cause a panic when encoding entries\n\t\tlevelFormatter = zapcore.LowercaseLevelEncoder\n\t}\n\tcfg.EncodeLevel = levelFormatter\n\n\treturn cfg\n}\n\nvar bufferpool = buffer.NewPool()\n\n// Interface guards\nvar (\n\t_ zapcore.Encoder = (*ConsoleEncoder)(nil)\n\t_ zapcore.Encoder = (*JSONEncoder)(nil)\n\n\t_ caddyfile.Unmarshaler = (*ConsoleEncoder)(nil)\n\t_ caddyfile.Unmarshaler = (*JSONEncoder)(nil)\n)\n"
  },
  {
    "path": "modules/logging/filewriter.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"math\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/DeRuina/timberjack\"\n\t\"github.com/dustin/go-humanize\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(FileWriter{})\n}\n\n// fileMode is a string made of 1 to 4 octal digits representing\n// a numeric mode as specified with the `chmod` unix command.\n// `\"0777\"` and `\"777\"` are thus equivalent values.\ntype fileMode os.FileMode\n\n// UnmarshalJSON satisfies json.Unmarshaler.\nfunc (m *fileMode) UnmarshalJSON(b []byte) error {\n\tif len(b) == 0 {\n\t\treturn io.EOF\n\t}\n\n\tvar s string\n\tif err := json.Unmarshal(b, &s); err != nil {\n\t\treturn err\n\t}\n\n\tmode, err := parseFileMode(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*m = fileMode(mode)\n\treturn err\n}\n\n// MarshalJSON satisfies json.Marshaler.\nfunc (m *fileMode) MarshalJSON() ([]byte, error) {\n\treturn fmt.Appendf(nil, \"\\\"%04o\\\"\", *m), nil\n}\n\n// parseFileMode parses a file mode string,\n// adding support for `chmod` unix command like\n// 1 to 4 digital octal values.\nfunc parseFileMode(s string) (os.FileMode, error) {\n\tmodeStr := fmt.Sprintf(\"%04s\", s)\n\tmode, err := 
strconv.ParseUint(modeStr, 8, 32)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn os.FileMode(mode), nil\n}\n\n// FileWriter can write logs to files. By default, log files\n// are rotated (\"rolled\") when they get large, and old log\n// files get deleted, to ensure that the process does not\n// exhaust disk space.\ntype FileWriter struct {\n\t// Filename is the name of the file to write.\n\tFilename string `json:\"filename,omitempty\"`\n\n\t// The file permissions mode.\n\t// 0600 by default.\n\tMode fileMode `json:\"mode,omitempty\"`\n\n\t// DirMode controls permissions for any directories created to reach Filename.\n\t// Default: 0700.\n\t//\n\t// Special values:\n\t//   - \"inherit\"   → copy the nearest existing parent directory's perms (with r→x normalization)\n\t//   - \"from_file\" → derive from the file Mode (with r→x), e.g. 0644 → 0755, 0600 → 0700\n\t// Numeric octal strings (e.g. \"0755\") are also accepted. Subject to the process umask.\n\tDirMode string `json:\"dir_mode,omitempty\"`\n\n\t// Roll toggles log rolling or rotation, which is\n\t// enabled by default.\n\tRoll *bool `json:\"roll,omitempty\"`\n\n\t// When a log file reaches approximately this size,\n\t// it will be rotated.\n\tRollSizeMB int `json:\"roll_size_mb,omitempty\"`\n\n\t// Roll the log file after this much time has elapsed.\n\tRollInterval time.Duration `json:\"roll_interval,omitempty\"`\n\n\t// Roll the log file at fixed minutes past each hour.\n\t// For example, []int{0, 30} will roll the file at xx:00 and xx:30 each hour.\n\t// Invalid values are ignored with a warning on stderr.\n\t// See https://github.com/DeRuina/timberjack#%EF%B8%8F-rotation-notes--warnings for caveats\n\tRollAtMinutes []int `json:\"roll_minutes,omitempty\"`\n\n\t// Roll the log file at fixed times of day.\n\t// For example, []string{\"00:00\", \"12:00\"} will roll the file at 00:00 and 12:00 each day.\n\t// Invalid values are ignored with a warning on stderr.\n\t// See https://github.com/DeRuina/timberjack#%EF%B8%8F-rotation-notes--warnings for caveats\n\tRollAt 
[]string `json:\"roll_at,omitempty\"`\n\n\t// Whether to compress rolled files.\n\t// Default: true.\n\t// Deprecated: Use RollCompression instead, setting it to \"none\".\n\tRollCompress *bool `json:\"roll_gzip,omitempty\"`\n\n\t// RollCompression selects the compression algorithm for rolled files.\n\t// Accepted values: \"none\", \"gzip\", \"zstd\".\n\t// Default: gzip\n\tRollCompression string `json:\"roll_compression,omitempty\"`\n\n\t// Whether to use local timestamps in rolled filenames.\n\t// Default: false\n\tRollLocalTime bool `json:\"roll_local_time,omitempty\"`\n\n\t// The maximum number of rolled log files to keep.\n\t// Default: 10\n\tRollKeep int `json:\"roll_keep,omitempty\"`\n\n\t// How many days to keep rolled log files. Default: 90\n\tRollKeepDays int `json:\"roll_keep_days,omitempty\"`\n\n\t// Rotated file will have format <logfilename>-<format>-<criterion>.log\n\t// Optional. If unset or invalid, defaults to 2006-01-02T15-04-05.000 (with fallback warning)\n\t// <format> must be a Go time compatible format, see https://pkg.go.dev/time#pkg-constants\n\tBackupTimeFormat string `json:\"backup_time_format,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (FileWriter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.writers.file\",\n\t\tNew: func() caddy.Module { return new(FileWriter) },\n\t}\n}\n\n// Provision sets up the module\nfunc (fw *FileWriter) Provision(ctx caddy.Context) error {\n\t// Replace placeholder in filename\n\trepl := caddy.NewReplacer()\n\tfilename, err := repl.ReplaceOrErr(fw.Filename, true, true)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"invalid filename for log file: %v\", err)\n\t}\n\n\tfw.Filename = filename\n\treturn nil\n}\n\nfunc (fw FileWriter) String() string {\n\tfpath, err := caddy.FastAbs(fw.Filename)\n\tif err == nil {\n\t\treturn fpath\n\t}\n\treturn fw.Filename\n}\n\n// WriterKey returns a unique key representing this fw.\nfunc (fw FileWriter) 
WriterKey() string {\n\treturn \"file:\" + fw.Filename\n}\n\n// OpenWriter opens a new file writer.\nfunc (fw FileWriter) OpenWriter() (io.WriteCloser, error) {\n\tmodeIfCreating := os.FileMode(fw.Mode)\n\tif modeIfCreating == 0 {\n\t\tmodeIfCreating = 0o600\n\t}\n\n\t// roll log files as a sensible default to avoid disk space exhaustion\n\troll := fw.Roll == nil || *fw.Roll\n\n\t// Ensure directory exists before opening the file.\n\tdirPath := filepath.Dir(fw.Filename)\n\tswitch strings.ToLower(strings.TrimSpace(fw.DirMode)) {\n\tcase \"\", \"0\":\n\t\t// Preserve current behavior: locked-down directories by default.\n\t\tif err := os.MkdirAll(dirPath, 0o700); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\tcase \"inherit\":\n\t\tif err := mkdirAllInherit(dirPath); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\tcase \"from_file\":\n\t\tif err := mkdirAllFromFile(dirPath, os.FileMode(fw.Mode)); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\tdefault:\n\t\tdm, err := parseFileMode(fw.DirMode)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"dir_mode: %w\", err)\n\t\t}\n\t\tif err := os.MkdirAll(dirPath, dm); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\t// create/open the file\n\tfile, err := os.OpenFile(fw.Filename, os.O_WRONLY|os.O_APPEND|os.O_CREATE, modeIfCreating)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tinfo, err := file.Stat()\n\tif roll {\n\t\tfile.Close() // timberjack will reopen it on its own\n\t}\n\n\t// Ensure already existing files have the right mode, since OpenFile will not set the mode in such case.\n\tif configuredMode := os.FileMode(fw.Mode); configuredMode != 0 {\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to stat log file to see if we need to set permissions: %v\", err)\n\t\t}\n\t\t// only chmod if the configured mode is different\n\t\tif info.Mode()&os.ModePerm != configuredMode&os.ModePerm {\n\t\t\tif err = os.Chmod(fw.Filename, configuredMode); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\t// if 
not rolling, then the plain file handle is all we need\n\tif !roll {\n\t\treturn file, nil\n\t}\n\n\t// otherwise, return a rolling log\n\tif fw.RollSizeMB == 0 {\n\t\tfw.RollSizeMB = 100\n\t}\n\tif fw.RollCompress == nil {\n\t\tcompress := true\n\t\tfw.RollCompress = &compress\n\t}\n\tif fw.RollKeep == 0 {\n\t\tfw.RollKeep = 10\n\t}\n\tif fw.RollKeepDays == 0 {\n\t\tfw.RollKeepDays = 90\n\t}\n\n\t// Determine compression algorithm to use. Priority:\n\t// 1) explicit RollCompression (none|gzip|zstd)\n\t// 2) if RollCompress is unset or true -> gzip\n\t// 3) if RollCompress is false -> none\n\tvar compression string\n\tif fw.RollCompression != \"\" {\n\t\tcompression = strings.ToLower(strings.TrimSpace(fw.RollCompression))\n\t\tif compression != \"none\" && compression != \"gzip\" && compression != \"zstd\" {\n\t\t\treturn nil, fmt.Errorf(\"invalid roll_compression: %s\", fw.RollCompression)\n\t\t}\n\t} else {\n\t\tif fw.RollCompress == nil || *fw.RollCompress {\n\t\t\tcompression = \"gzip\"\n\t\t} else {\n\t\t\tcompression = \"none\"\n\t\t}\n\t}\n\n\treturn &timberjack.Logger{\n\t\tFilename:         fw.Filename,\n\t\tMaxSize:          fw.RollSizeMB,\n\t\tMaxAge:           fw.RollKeepDays,\n\t\tMaxBackups:       fw.RollKeep,\n\t\tLocalTime:        fw.RollLocalTime,\n\t\tCompression:      compression,\n\t\tRotationInterval: fw.RollInterval,\n\t\tRotateAtMinutes:  fw.RollAtMinutes,\n\t\tRotateAt:         fw.RollAt,\n\t\tBackupTimeFormat: fw.BackupTimeFormat,\n\t\tFileMode:         os.FileMode(fw.Mode),\n\t}, nil\n}\n\n// normalizeDirPerm ensures that read bits also have execute bits set.\nfunc normalizeDirPerm(p os.FileMode) os.FileMode {\n\tif p&0o400 != 0 {\n\t\tp |= 0o100\n\t}\n\tif p&0o040 != 0 {\n\t\tp |= 0o010\n\t}\n\tif p&0o004 != 0 {\n\t\tp |= 0o001\n\t}\n\treturn p\n}\n\n// mkdirAllInherit creates missing dirs using the nearest existing parent's\n// permissions, normalized with r→x.\nfunc mkdirAllInherit(dir string) error {\n\tif fi, err := os.Stat(dir); err 
== nil && fi.IsDir() {\n\t\treturn nil\n\t}\n\tcur := dir\n\tvar parent string\n\tfor {\n\t\tnext := filepath.Dir(cur)\n\t\tif next == cur {\n\t\t\tparent = next\n\t\t\tbreak\n\t\t}\n\t\tif fi, err := os.Stat(next); err == nil {\n\t\t\tif !fi.IsDir() {\n\t\t\t\treturn fmt.Errorf(\"path component %s exists and is not a directory\", next)\n\t\t\t}\n\t\t\tparent = next\n\t\t\tbreak\n\t\t}\n\t\tcur = next\n\t}\n\tperm := os.FileMode(0o700)\n\tif fi, err := os.Stat(parent); err == nil && fi.IsDir() {\n\t\tperm = fi.Mode().Perm()\n\t}\n\tperm = normalizeDirPerm(perm)\n\treturn os.MkdirAll(dir, perm)\n}\n\n// mkdirAllFromFile creates missing dirs using the file's mode (with r→x) so\n// 0644 → 0755, 0600 → 0700, etc.\nfunc mkdirAllFromFile(dir string, fileMode os.FileMode) error {\n\tif fi, err := os.Stat(dir); err == nil && fi.IsDir() {\n\t\treturn nil\n\t}\n\tperm := normalizeDirPerm(fileMode.Perm()) | 0o200 // ensure owner write on dir so files can be created\n\treturn os.MkdirAll(dir, perm)\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. 
Syntax:\n//\n//\tfile <filename> {\n//\t    mode          <mode>\n//\t    dir_mode      <mode|inherit|from_file>\n//\t    roll_disabled\n//\t    roll_size     <size>\n//\t    roll_uncompressed\n//\t    roll_compression <none|gzip|zstd>\n//\t    roll_local_time\n//\t    roll_keep     <num>\n//\t    roll_keep_for <days>\n//\t    roll_interval <duration>\n//\t    roll_minutes  <minutes...>\n//\t    roll_at       <times...>\n//\t    backup_time_format <format>\n//\t}\n//\n// The roll_size value has megabyte resolution.\n// Fractional values are rounded up to the next whole megabyte (MiB).\n//\n// By default, compression is enabled, but can be turned off by setting\n// the roll_uncompressed option.\n//\n// The roll_keep_for duration has day resolution.\n// Fractional values are rounded up to the next whole number of days.\n//\n// If any of the mode, roll_size, roll_keep, or roll_keep_for subdirectives is\n// omitted or set to a zero value, then Caddy's default value for that\n// subdirective is used.\nfunc (fw *FileWriter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume writer name\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tfw.Filename = d.Val()\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"mode\":\n\t\t\tvar modeStr string\n\t\t\tif !d.AllArgs(&modeStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmode, err := parseFileMode(modeStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing mode: %v\", err)\n\t\t\t}\n\t\t\tfw.Mode = fileMode(mode)\n\n\t\tcase \"dir_mode\":\n\t\t\tvar val string\n\t\t\tif !d.AllArgs(&val) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tval = strings.TrimSpace(val)\n\t\t\tswitch strings.ToLower(val) {\n\t\t\tcase \"inherit\", \"from_file\":\n\t\t\t\tfw.DirMode = val\n\t\t\tdefault:\n\t\t\t\tif _, err := parseFileMode(val); err != nil {\n\t\t\t\t\treturn d.Errf(\"parsing dir_mode: %v\", err)\n\t\t\t\t}\n\t\t\t\tfw.DirMode = val\n\t\t\t}\n\n\t\tcase \"roll_disabled\":\n\t\t\tvar f bool\n\t\t\tfw.Roll = &f\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase 
\"roll_size\":\n\t\t\tvar sizeStr string\n\t\t\tif !d.AllArgs(&sizeStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tsize, err := humanize.ParseBytes(sizeStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing size: %v\", err)\n\t\t\t}\n\t\t\tfw.RollSizeMB = int(math.Ceil(float64(size) / humanize.MiByte))\n\n\t\tcase \"roll_uncompressed\":\n\t\t\tvar f bool\n\t\t\tfw.RollCompress = &f\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"roll_compression\":\n\t\t\tvar comp string\n\t\t\tif !d.AllArgs(&comp) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tcomp = strings.ToLower(strings.TrimSpace(comp))\n\t\t\tswitch comp {\n\t\t\tcase \"none\", \"gzip\", \"zstd\":\n\t\t\t\tfw.RollCompression = comp\n\t\t\tdefault:\n\t\t\t\treturn d.Errf(\"parsing roll_compression: must be 'none', 'gzip' or 'zstd'\")\n\t\t\t}\n\n\t\tcase \"roll_local_time\":\n\t\t\tfw.RollLocalTime = true\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\tcase \"roll_keep\":\n\t\t\tvar keepStr string\n\t\t\tif !d.AllArgs(&keepStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tkeep, err := strconv.Atoi(keepStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing roll_keep number: %v\", err)\n\t\t\t}\n\t\t\tfw.RollKeep = keep\n\n\t\tcase \"roll_keep_for\":\n\t\t\tvar keepForStr string\n\t\t\tif !d.AllArgs(&keepForStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tkeepFor, err := caddy.ParseDuration(keepForStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing roll_keep_for duration: %v\", err)\n\t\t\t}\n\t\t\tif keepFor < 0 {\n\t\t\t\treturn d.Errf(\"negative roll_keep_for duration: %v\", keepFor)\n\t\t\t}\n\t\t\tfw.RollKeepDays = int(math.Ceil(keepFor.Hours() / 24))\n\n\t\tcase \"roll_interval\":\n\t\t\tvar durationStr string\n\t\t\tif !d.AllArgs(&durationStr) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tduration, err := time.ParseDuration(durationStr)\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"parsing roll_interval duration: %v\", 
err)\n\t\t\t}\n\t\t\tfw.RollInterval = duration\n\n\t\tcase \"roll_minutes\":\n\t\t\t// Accept either a single comma-separated argument or\n\t\t\t// multiple space-separated arguments. Collect all\n\t\t\t// remaining args on the line and split on commas.\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tvar minutes []int\n\t\t\tfor _, arg := range args {\n\t\t\t\tparts := strings.SplitSeq(arg, \",\")\n\t\t\t\tfor p := range parts {\n\t\t\t\t\tms := strings.TrimSpace(p)\n\t\t\t\t\tif ms == \"\" {\n\t\t\t\t\t\treturn d.Errf(\"parsing roll_minutes: empty value\")\n\t\t\t\t\t}\n\t\t\t\t\tm, err := strconv.Atoi(ms)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn d.Errf(\"parsing roll_minutes number: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tminutes = append(minutes, m)\n\t\t\t\t}\n\t\t\t}\n\t\t\tfw.RollAtMinutes = minutes\n\n\t\tcase \"roll_at\":\n\t\t\t// Accept either a single comma-separated argument or\n\t\t\t// multiple space-separated arguments. Collect all\n\t\t\t// remaining args on the line and split on commas.\n\t\t\targs := d.RemainingArgs()\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tvar times []string\n\t\t\tfor _, arg := range args {\n\t\t\t\tparts := strings.SplitSeq(arg, \",\")\n\t\t\t\tfor p := range parts {\n\t\t\t\t\tts := strings.TrimSpace(p)\n\t\t\t\t\tif ts == \"\" {\n\t\t\t\t\t\treturn d.Errf(\"parsing roll_at: empty value\")\n\t\t\t\t\t}\n\t\t\t\t\ttimes = append(times, ts)\n\t\t\t\t}\n\t\t\t}\n\t\t\tfw.RollAt = times\n\n\t\tcase \"backup_time_format\":\n\t\t\tvar format string\n\t\t\tif !d.AllArgs(&format) {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tfw.BackupTimeFormat = format\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner     = (*FileWriter)(nil)\n\t_ caddy.WriterOpener    = (*FileWriter)(nil)\n\t_ caddyfile.Unmarshaler = (*FileWriter)(nil)\n)\n"
  },
  {
    "path": "modules/logging/filewriter_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build !windows\n\npackage logging\n\nimport (\n\t\"encoding/json\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"syscall\"\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc TestFileCreationMode(t *testing.T) {\n\ton := true\n\toff := false\n\n\ttests := []struct {\n\t\tname     string\n\t\tfw       FileWriter\n\t\twantMode os.FileMode\n\t}{\n\t\t{\n\t\t\tname: \"default mode no roll\",\n\t\t\tfw: FileWriter{\n\t\t\t\tRoll: &off,\n\t\t\t},\n\t\t\twantMode: 0o600,\n\t\t},\n\t\t{\n\t\t\tname: \"default mode roll\",\n\t\t\tfw: FileWriter{\n\t\t\t\tRoll: &on,\n\t\t\t},\n\t\t\twantMode: 0o600,\n\t\t},\n\t\t{\n\t\t\tname: \"custom mode no roll\",\n\t\t\tfw: FileWriter{\n\t\t\t\tRoll: &off,\n\t\t\t\tMode: 0o666,\n\t\t\t},\n\t\t\twantMode: 0o666,\n\t\t},\n\t\t{\n\t\t\tname: \"custom mode roll\",\n\t\t\tfw: FileWriter{\n\t\t\t\tRoll: &on,\n\t\t\t\tMode: 0o666,\n\t\t\t},\n\t\t\twantMode: 0o666,\n\t\t},\n\t}\n\n\tm := syscall.Umask(0o000)\n\tdefer syscall.Umask(m)\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tdir, err := os.MkdirTemp(\"\", \"caddytest\")\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to create tempdir: %v\", err)\n\t\t\t}\n\t\t\tdefer os.RemoveAll(dir)\n\t\t\tfpath := filepath.Join(dir, \"test.log\")\n\t\t\ttt.fw.Filename = 
fpath\n\n\t\t\tlogger, err := tt.fw.OpenWriter()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to create file: %v\", err)\n\t\t\t}\n\t\t\tdefer logger.Close()\n\n\t\t\tst, err := os.Stat(fpath)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to check file permissions: %v\", err)\n\t\t\t}\n\n\t\t\tif st.Mode() != tt.wantMode {\n\t\t\t\tt.Errorf(\"%s: file mode is %v, want %v\", tt.name, st.Mode(), tt.wantMode)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestFileRotationPreserveMode(t *testing.T) {\n\tm := syscall.Umask(0o000)\n\tdefer syscall.Umask(m)\n\n\tdir, err := os.MkdirTemp(\"\", \"caddytest\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create tempdir: %v\", err)\n\t}\n\tdefer os.RemoveAll(dir)\n\n\tfpath := path.Join(dir, \"test.log\")\n\n\troll := true\n\tmode := fileMode(0o640)\n\tfw := FileWriter{\n\t\tFilename:   fpath,\n\t\tMode:       mode,\n\t\tRoll:       &roll,\n\t\tRollSizeMB: 1,\n\t}\n\n\tlogger, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create file: %v\", err)\n\t}\n\tdefer logger.Close()\n\n\tb := make([]byte, 1024*1024-1000)\n\tlogger.Write(b)\n\tlogger.Write(b[0:2000])\n\n\tfiles, err := os.ReadDir(dir)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to read temporary log dir: %v\", err)\n\t}\n\n\t// We might get 2 or 3 files depending\n\t// on the race between compressed log file generation,\n\t// removal of the non-compressed file, and reading the directory.\n\t// Ordering of the files is [ test-*.log test-*.log.gz test.log ]\n\tif len(files) < 2 || len(files) > 3 {\n\t\tt.Log(\"got files: \", files)\n\t\tt.Fatalf(\"got %d files, want 2 or 3\", len(files))\n\t}\n\n\twantPattern := \"test-*-*-*-*-*.*.log\"\n\ttest_date_log := files[0]\n\tif matched, _ := path.Match(wantPattern, test_date_log.Name()); !matched {\n\t\tt.Fatalf(\"got %v filename want %v\", test_date_log.Name(), wantPattern)\n\t}\n\n\tst, err := os.Stat(path.Join(dir, test_date_log.Name()))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to check file permissions: %v\", 
err)\n\t}\n\n\tif st.Mode() != os.FileMode(mode) {\n\t\tt.Errorf(\"file mode is %v, want %v\", st.Mode(), mode)\n\t}\n\n\ttest_dot_log := files[len(files)-1]\n\tif test_dot_log.Name() != \"test.log\" {\n\t\tt.Fatalf(\"got %v filename want test.log\", test_dot_log.Name())\n\t}\n\n\tst, err = os.Stat(path.Join(dir, test_dot_log.Name()))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to check file permissions: %v\", err)\n\t}\n\n\tif st.Mode() != os.FileMode(mode) {\n\t\tt.Errorf(\"file mode is %v, want %v\", st.Mode(), mode)\n\t}\n}\n\nfunc TestFileModeConfig(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\td       *caddyfile.Dispenser\n\t\tfw      FileWriter\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"set mode\",\n\t\t\td: caddyfile.NewTestDispenser(`\nfile test.log {\n\tmode 0666\n}\n`),\n\t\t\tfw: FileWriter{\n\t\t\t\tMode: 0o666,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"set mode 3 digits\",\n\t\t\td: caddyfile.NewTestDispenser(`\nfile test.log {\n\tmode 666\n}\n`),\n\t\t\tfw: FileWriter{\n\t\t\t\tMode: 0o666,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"set mode 2 digits\",\n\t\t\td: caddyfile.NewTestDispenser(`\nfile test.log {\n\tmode 66\n}\n`),\n\t\t\tfw: FileWriter{\n\t\t\t\tMode: 0o066,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"set mode 1 digits\",\n\t\t\td: caddyfile.NewTestDispenser(`\nfile test.log {\n\tmode 6\n}\n`),\n\t\t\tfw: FileWriter{\n\t\t\t\tMode: 0o006,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"invalid mode\",\n\t\t\td: caddyfile.NewTestDispenser(`\nfile test.log {\n\tmode foobar\n}\n`),\n\t\t\tfw:      FileWriter{},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tfw := &FileWriter{}\n\t\t\tif err := fw.UnmarshalCaddyfile(tt.d); (err != nil) != tt.wantErr {\n\t\t\t\tt.Fatalf(\"UnmarshalCaddyfile() error = %v, want %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif fw.Mode != tt.fw.Mode 
{\n\t\t\t\tt.Errorf(\"got mode %v, want %v\", fw.Mode, tt.fw.Mode)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestFileModeJSON(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\tconfig  string\n\t\tfw      FileWriter\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"set mode\",\n\t\t\tconfig: `\n{\n\t\"mode\": \"0666\"\n}\n`,\n\t\t\tfw: FileWriter{\n\t\t\t\tMode: 0o666,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"set mode invalid value\",\n\t\t\tconfig: `\n{\n\t\"mode\": \"0x666\"\n}\n`,\n\t\t\tfw:      FileWriter{},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"set mode invalid string\",\n\t\t\tconfig: `\n{\n\t\"mode\": 777\n}\n`,\n\t\t\tfw:      FileWriter{},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tfw := &FileWriter{}\n\t\t\tif err := json.Unmarshal([]byte(tt.config), fw); (err != nil) != tt.wantErr {\n\t\t\t\tt.Fatalf(\"UnmarshalJSON() error = %v, want %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif fw.Mode != tt.fw.Mode {\n\t\t\t\tt.Errorf(\"got mode %v, want %v\", fw.Mode, tt.fw.Mode)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestFileModeToJSON(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\tmode    fileMode\n\t\twant    string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:    \"none zero\",\n\t\t\tmode:    0o644,\n\t\t\twant:    `\"0644\"`,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"zero mode\",\n\t\t\tmode:    0,\n\t\t\twant:    `\"0000\"`,\n\t\t\twantErr: false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tvar b []byte\n\t\t\tvar err error\n\n\t\t\tif b, err = json.Marshal(&tt.mode); (err != nil) != tt.wantErr {\n\t\t\t\tt.Fatalf(\"MarshalJSON() error = %v, want %v\", err, tt.wantErr)\n\t\t\t}\n\n\t\t\tgot := string(b[:])\n\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"got mode %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestFileModeModification(t *testing.T) {\n\tm := 
syscall.Umask(0o000)\n\tdefer syscall.Umask(m)\n\n\tdir, err := os.MkdirTemp(\"\", \"caddytest\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create tempdir: %v\", err)\n\t}\n\tdefer os.RemoveAll(dir)\n\n\tfpath := path.Join(dir, \"test.log\")\n\tf_tmp, err := os.OpenFile(fpath, os.O_WRONLY|os.O_APPEND|os.O_CREATE, os.FileMode(0o600))\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create test file: %v\", err)\n\t}\n\tf_tmp.Close()\n\n\tfw := FileWriter{\n\t\tMode:     0o666,\n\t\tFilename: fpath,\n\t}\n\n\tlogger, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create file: %v\", err)\n\t}\n\tdefer logger.Close()\n\n\tst, err := os.Stat(fpath)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to check file permissions: %v\", err)\n\t}\n\n\twant := os.FileMode(fw.Mode)\n\tif st.Mode() != want {\n\t\tt.Errorf(\"file mode is %v, want %v\", st.Mode(), want)\n\t}\n}\n\nfunc TestDirMode_Inherit(t *testing.T) {\n\tm := syscall.Umask(0)\n\tdefer syscall.Umask(m)\n\n\tparent := t.TempDir()\n\tif err := os.Chmod(parent, 0o755); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\ttargetDir := filepath.Join(parent, \"a\", \"b\")\n\tfw := &FileWriter{\n\t\tFilename: filepath.Join(targetDir, \"test.log\"),\n\t\tDirMode:  \"inherit\",\n\t\tMode:     0o640,\n\t\tRoll:     func() *bool { f := false; return &f }(),\n\t}\n\tw, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = w.Close()\n\n\tst, err := os.Stat(targetDir)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif got := st.Mode().Perm(); got != 0o755 {\n\t\tt.Fatalf(\"dir perm = %o, want 0755\", got)\n\t}\n}\n\nfunc TestDirMode_FromFile(t *testing.T) {\n\tm := syscall.Umask(0)\n\tdefer syscall.Umask(m)\n\n\tbase := t.TempDir()\n\n\tdir1 := filepath.Join(base, \"logs1\")\n\tfw1 := &FileWriter{\n\t\tFilename: filepath.Join(dir1, \"app.log\"),\n\t\tDirMode:  \"from_file\",\n\t\tMode:     0o644, // => dir 0755\n\t\tRoll:     func() *bool { f := false; return &f }(),\n\t}\n\tw1, err := fw1.OpenWriter()\n\tif err 
!= nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = w1.Close()\n\n\tst1, err := os.Stat(dir1)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif got := st1.Mode().Perm(); got != 0o755 {\n\t\tt.Fatalf(\"dir perm = %o, want 0755\", got)\n\t}\n\n\tdir2 := filepath.Join(base, \"logs2\")\n\tfw2 := &FileWriter{\n\t\tFilename: filepath.Join(dir2, \"app.log\"),\n\t\tDirMode:  \"from_file\",\n\t\tMode:     0o600, // => dir 0700\n\t\tRoll:     func() *bool { f := false; return &f }(),\n\t}\n\tw2, err := fw2.OpenWriter()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = w2.Close()\n\n\tst2, err := os.Stat(dir2)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif got := st2.Mode().Perm(); got != 0o700 {\n\t\tt.Fatalf(\"dir perm = %o, want 0700\", got)\n\t}\n}\n\nfunc TestDirMode_ExplicitOctal(t *testing.T) {\n\tm := syscall.Umask(0)\n\tdefer syscall.Umask(m)\n\n\tbase := t.TempDir()\n\tdest := filepath.Join(base, \"logs3\")\n\tfw := &FileWriter{\n\t\tFilename: filepath.Join(dest, \"app.log\"),\n\t\tDirMode:  \"0750\",\n\t\tMode:     0o640,\n\t\tRoll:     func() *bool { f := false; return &f }(),\n\t}\n\tw, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = w.Close()\n\n\tst, err := os.Stat(dest)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif got := st.Mode().Perm(); got != 0o750 {\n\t\tt.Fatalf(\"dir perm = %o, want 0750\", got)\n\t}\n}\n\nfunc TestDirMode_Default0700(t *testing.T) {\n\tm := syscall.Umask(0)\n\tdefer syscall.Umask(m)\n\n\tbase := t.TempDir()\n\tdest := filepath.Join(base, \"logs4\")\n\tfw := &FileWriter{\n\t\tFilename: filepath.Join(dest, \"app.log\"),\n\t\tMode:     0o640,\n\t\tRoll:     func() *bool { f := false; return &f }(),\n\t}\n\tw, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = w.Close()\n\n\tst, err := os.Stat(dest)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif got := st.Mode().Perm(); got != 0o700 {\n\t\tt.Fatalf(\"dir perm = %o, want 0700\", got)\n\t}\n}\n\nfunc TestDirMode_UmaskInteraction(t *testing.T) {\n\tm := 
syscall.Umask(0o022) // typical umask; restore after\n\tdefer syscall.Umask(m)\n\n\tbase := t.TempDir()\n\tdest := filepath.Join(base, \"logs5\")\n\tfw := &FileWriter{\n\t\tFilename: filepath.Join(dest, \"app.log\"),\n\t\tDirMode:  \"0755\",\n\t\tMode:     0o644,\n\t\tRoll:     func() *bool { f := false; return &f }(),\n\t}\n\tw, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = w.Close()\n\n\tst, err := os.Stat(dest)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t// 0755 &^ 0022 is still 0755 for dirs; this just sanity-checks that we didn't get stricter unexpectedly\n\tif got := st.Mode().Perm(); got != 0o755 {\n\t\tt.Fatalf(\"dir perm = %o, want 0755 (considering umask)\", got)\n\t}\n}\n\nfunc TestCaddyfile_DirMode_Inherit(t *testing.T) {\n\td := caddyfile.NewTestDispenser(`\nfile /var/log/app.log {\n    dir_mode inherit\n    mode 0640\n}`)\n\tvar fw FileWriter\n\tif err := fw.UnmarshalCaddyfile(d); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif fw.DirMode != \"inherit\" {\n\t\tt.Fatalf(\"got %q\", fw.DirMode)\n\t}\n\tif fw.Mode != 0o640 {\n\t\tt.Fatalf(\"mode = %o\", fw.Mode)\n\t}\n}\n\nfunc TestCaddyfile_DirMode_FromFile(t *testing.T) {\n\td := caddyfile.NewTestDispenser(`\nfile /var/log/app.log {\n    dir_mode from_file\n    mode 0600\n}`)\n\tvar fw FileWriter\n\tif err := fw.UnmarshalCaddyfile(d); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif fw.DirMode != \"from_file\" {\n\t\tt.Fatalf(\"got %q\", fw.DirMode)\n\t}\n\tif fw.Mode != 0o600 {\n\t\tt.Fatalf(\"mode = %o\", fw.Mode)\n\t}\n}\n\nfunc TestCaddyfile_DirMode_Octal(t *testing.T) {\n\td := caddyfile.NewTestDispenser(`\nfile /var/log/app.log {\n    dir_mode 0755\n}`)\n\tvar fw FileWriter\n\tif err := fw.UnmarshalCaddyfile(d); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif fw.DirMode != \"0755\" {\n\t\tt.Fatalf(\"got %q\", fw.DirMode)\n\t}\n}\n\nfunc TestCaddyfile_DirMode_Invalid(t *testing.T) {\n\td := caddyfile.NewTestDispenser(`\nfile /var/log/app.log {\n    dir_mode nope\n}`)\n\tvar fw FileWriter\n\tif err 
:= fw.UnmarshalCaddyfile(d); err == nil {\n\t\tt.Fatal(\"expected error for invalid dir_mode\")\n\t}\n}\n"
  },
  {
    "path": "modules/logging/filewriter_test_windows.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build windows\n\npackage logging\n\nimport (\n\t\"os\"\n\t\"path\"\n\t\"testing\"\n)\n\n// Windows relies on ACLs instead of the Unix permission model.\n// Go allows opening files with a particular mode, but on Windows this is limited to read-only or read/write.\n// See https://cs.opensource.google/go/go/+/refs/tags/go1.22.3:src/syscall/syscall_windows.go;l=708.\n// This is quite restrictive and of little use for log files, so we just test that log files are\n// opened with R/W permissions by default on Windows too.\nfunc TestFileCreationMode(t *testing.T) {\n\tdir, err := os.MkdirTemp(\"\", \"caddytest\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create tempdir: %v\", err)\n\t}\n\tdefer os.RemoveAll(dir)\n\n\tfw := &FileWriter{\n\t\tFilename: path.Join(dir, \"test.log\"),\n\t}\n\n\tlogger, err := fw.OpenWriter()\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create file: %v\", err)\n\t}\n\tdefer logger.Close()\n\n\tst, err := os.Stat(fw.Filename)\n\tif err != nil {\n\t\tt.Fatalf(\"failed to check file permissions: %v\", err)\n\t}\n\n\tif st.Mode().Perm()&0o600 != 0o600 {\n\t\tt.Fatalf(\"file mode is %v, want rw for user\", st.Mode().Perm())\n\t}\n}\n\nfunc TestDirMode_Windows_CreateSucceeds(t *testing.T) {\n\tdir, err := os.MkdirTemp(\"\", \"caddytest\")\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create tempdir: %v\", err)\n\t}\n\tdefer 
os.RemoveAll(dir)\n\n\ttests := []struct {\n\t\tname    string\n\t\tdirMode string\n\t}{\n\t\t{\"inherit\", \"inherit\"},\n\t\t{\"from_file\", \"from_file\"},\n\t\t{\"octal\", \"0755\"},\n\t\t{\"default\", \"\"},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tsubdir := path.Join(dir, \"logs-\"+tt.name)\n\t\t\tfw := &FileWriter{\n\t\t\t\tFilename: path.Join(subdir, \"test.log\"),\n\t\t\t\tDirMode:  tt.dirMode,\n\t\t\t\tMode:     0o600,\n\t\t\t}\n\t\t\tw, err := fw.OpenWriter()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"failed to open writer: %v\", err)\n\t\t\t}\n\t\t\tdefer w.Close()\n\n\t\t\tif _, err := os.Stat(fw.Filename); err != nil {\n\t\t\t\tt.Fatalf(\"expected file to exist: %v\", err)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "modules/logging/filterencoder.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/buffer\"\n\t\"go.uber.org/zap/zapcore\"\n\t\"golang.org/x/term\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(FilterEncoder{})\n}\n\n// FilterEncoder can filter (manipulate) fields on\n// log entries before they are actually encoded by\n// an underlying encoder.\ntype FilterEncoder struct {\n\t// The underlying encoder that actually encodes the\n\t// log entries. If not specified, defaults to \"json\",\n\t// unless the output is a terminal, in which case\n\t// it defaults to \"console\".\n\tWrappedRaw json.RawMessage `json:\"wrap,omitempty\" caddy:\"namespace=caddy.logging.encoders inline_key=format\"`\n\n\t// A map of field names to their filters. Note that this\n\t// is not a module map; the keys are field names.\n\t//\n\t// Nested fields can be referenced by representing a\n\t// layer of nesting with `>`. 
In other words, for an\n\t// object like `{\"a\":{\"b\":0}}`, the inner field can\n\t// be referenced as `a>b`.\n\t//\n\t// The following fields are fundamental to the log and\n\t// cannot be filtered because they are added by the\n\t// underlying logging library as special cases: ts,\n\t// level, logger, and msg.\n\tFieldsRaw map[string]json.RawMessage `json:\"fields,omitempty\" caddy:\"namespace=caddy.logging.encoders.filter inline_key=filter\"`\n\n\twrapped zapcore.Encoder\n\tFields  map[string]LogFieldFilter `json:\"-\"`\n\n\t// used to keep keys unique across nested objects\n\tkeyPrefix string\n\n\twrappedIsDefault bool\n\tctx              caddy.Context\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (FilterEncoder) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter\",\n\t\tNew: func() caddy.Module { return new(FilterEncoder) },\n\t}\n}\n\n// Provision sets up the encoder.\nfunc (fe *FilterEncoder) Provision(ctx caddy.Context) error {\n\tfe.ctx = ctx\n\n\tif fe.WrappedRaw == nil {\n\t\t// if wrap is not specified, default to JSON\n\t\tfe.wrapped = &JSONEncoder{}\n\t\tif p, ok := fe.wrapped.(caddy.Provisioner); ok {\n\t\t\tif err := p.Provision(ctx); err != nil {\n\t\t\t\treturn fmt.Errorf(\"provisioning fallback encoder module: %v\", err)\n\t\t\t}\n\t\t}\n\t\tfe.wrappedIsDefault = true\n\t} else {\n\t\t// set up wrapped encoder\n\t\tval, err := ctx.LoadModule(fe, \"WrappedRaw\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"loading fallback encoder module: %v\", err)\n\t\t}\n\t\tfe.wrapped = val.(zapcore.Encoder)\n\t}\n\n\t// set up each field filter\n\tif fe.Fields == nil {\n\t\tfe.Fields = make(map[string]LogFieldFilter)\n\t}\n\tvals, err := ctx.LoadModule(fe, \"FieldsRaw\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"loading log filter modules: %v\", err)\n\t}\n\tfor fieldName, modIface := range vals.(map[string]any) {\n\t\tfe.Fields[fieldName] = 
modIface.(LogFieldFilter)\n\t}\n\n\treturn nil\n}\n\n// ConfigureDefaultFormat will set the default format to \"console\"\n// if the writer is a terminal. If already configured as a filter\n// encoder, it passes through the writer so a deeply nested filter\n// encoder can configure its own default format.\nfunc (fe *FilterEncoder) ConfigureDefaultFormat(wo caddy.WriterOpener) error {\n\tif !fe.wrappedIsDefault {\n\t\tif cfd, ok := fe.wrapped.(caddy.ConfiguresFormatterDefault); ok {\n\t\t\treturn cfd.ConfigureDefaultFormat(wo)\n\t\t}\n\t\treturn nil\n\t}\n\n\tif caddy.IsWriterStandardStream(wo) && term.IsTerminal(int(os.Stderr.Fd())) {\n\t\tfe.wrapped = &ConsoleEncoder{}\n\t\tif p, ok := fe.wrapped.(caddy.Provisioner); ok {\n\t\t\tif err := p.Provision(fe.ctx); err != nil {\n\t\t\t\treturn fmt.Errorf(\"provisioning fallback encoder module: %v\", err)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens. Syntax:\n//\n//\tfilter {\n//\t    wrap <another encoder>\n//\t    fields {\n//\t        <field> <filter> {\n//\t            <filter options>\n//\t        }\n//\t    }\n//\t    <field> <filter> {\n//\t        <filter options>\n//\t    }\n//\t}\nfunc (fe *FilterEncoder) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume encoder name\n\n\t// Track regexp filters for automatic merging\n\tregexpFilters := make(map[string][]*RegexpFilter)\n\n\t// parse a field\n\tparseField := func() error {\n\t\tif fe.FieldsRaw == nil {\n\t\t\tfe.FieldsRaw = make(map[string]json.RawMessage)\n\t\t}\n\t\tfield := d.Val()\n\t\tif !d.NextArg() {\n\t\t\treturn d.ArgErr()\n\t\t}\n\t\tfilterName := d.Val()\n\t\tmoduleID := \"caddy.logging.encoders.filter.\" + filterName\n\t\tunm, err := caddyfile.UnmarshalModule(d, moduleID)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfilter, ok := unm.(LogFieldFilter)\n\t\tif !ok {\n\t\t\treturn d.Errf(\"module %s (%T) is not a logging.LogFieldFilter\", moduleID, 
unm)\n\t\t}\n\n\t\t// Special handling for regexp filters to support multiple instances\n\t\tif regexpFilter, isRegexp := filter.(*RegexpFilter); isRegexp {\n\t\t\tregexpFilters[field] = append(regexpFilters[field], regexpFilter)\n\t\t\treturn nil // Don't set FieldsRaw yet, we'll merge them later\n\t\t}\n\n\t\t// Check if we're trying to add a non-regexp filter to a field that already has regexp filters\n\t\tif _, hasRegexpFilters := regexpFilters[field]; hasRegexpFilters {\n\t\t\treturn d.Errf(\"cannot mix regexp filters with other filter types for field %s\", field)\n\t\t}\n\n\t\t// Check if field already has a filter and it's not regexp-related\n\t\tif _, exists := fe.FieldsRaw[field]; exists {\n\t\t\treturn d.Errf(\"field %s already has a filter; multiple non-regexp filters per field are not supported\", field)\n\t\t}\n\n\t\tfe.FieldsRaw[field] = caddyconfig.JSONModuleObject(filter, \"filter\", filterName, nil)\n\t\treturn nil\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"wrap\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tmoduleName := d.Val()\n\t\t\tmoduleID := \"caddy.logging.encoders.\" + moduleName\n\t\t\tunm, err := caddyfile.UnmarshalModule(d, moduleID)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tenc, ok := unm.(zapcore.Encoder)\n\t\t\tif !ok {\n\t\t\t\treturn d.Errf(\"module %s (%T) is not a zapcore.Encoder\", moduleID, unm)\n\t\t\t}\n\t\t\tfe.WrappedRaw = caddyconfig.JSONModuleObject(enc, \"format\", moduleName, nil)\n\n\t\tcase \"fields\":\n\t\t\tfor nesting := d.Nesting(); d.NextBlock(nesting); {\n\t\t\t\terr := parseField()\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\n\t\tdefault:\n\t\t\t// if unknown, assume it's a field so that\n\t\t\t// the config can be flat\n\t\t\terr := parseField()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\n\t// After parsing all fields, merge multiple regexp filters into MultiRegexpFilter\n\tfor field, filters := range 
regexpFilters {\n\t\tif len(filters) == 1 {\n\t\t\t// Single regexp filter, use the original RegexpFilter\n\t\t\tfe.FieldsRaw[field] = caddyconfig.JSONModuleObject(filters[0], \"filter\", \"regexp\", nil)\n\t\t} else {\n\t\t\t// Multiple regexp filters, merge into MultiRegexpFilter\n\t\t\tmultiFilter := &MultiRegexpFilter{}\n\t\t\tfor _, regexpFilter := range filters {\n\t\t\t\terr := multiFilter.AddOperation(regexpFilter.RawRegexp, regexpFilter.Value)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"adding regexp operation for field %s: %v\", field, err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tfe.FieldsRaw[field] = caddyconfig.JSONModuleObject(multiFilter, \"filter\", \"multi_regexp\", nil)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// AddArray is part of the zapcore.ObjectEncoder interface.\n// Array elements do not get filtered.\nfunc (fe FilterEncoder) AddArray(key string, marshaler zapcore.ArrayMarshaler) error {\n\tif filter, ok := fe.Fields[fe.keyPrefix+key]; ok {\n\t\tfilter.Filter(zap.Array(key, marshaler)).AddTo(fe.wrapped)\n\t\treturn nil\n\t}\n\treturn fe.wrapped.AddArray(key, marshaler)\n}\n\n// AddObject is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddObject(key string, marshaler zapcore.ObjectMarshaler) error {\n\tif fe.filtered(key, marshaler) {\n\t\treturn nil\n\t}\n\tfe.keyPrefix += key + \">\"\n\treturn fe.wrapped.AddObject(key, logObjectMarshalerWrapper{\n\t\tenc:   fe,\n\t\tmarsh: marshaler,\n\t})\n}\n\n// AddBinary is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddBinary(key string, value []byte) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddBinary(key, value)\n\t}\n}\n\n// AddByteString is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddByteString(key string, value []byte) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddByteString(key, value)\n\t}\n}\n\n// AddBool is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddBool(key string, value 
bool) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddBool(key, value)\n\t}\n}\n\n// AddComplex128 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddComplex128(key string, value complex128) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddComplex128(key, value)\n\t}\n}\n\n// AddComplex64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddComplex64(key string, value complex64) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddComplex64(key, value)\n\t}\n}\n\n// AddDuration is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddDuration(key string, value time.Duration) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddDuration(key, value)\n\t}\n}\n\n// AddFloat64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddFloat64(key string, value float64) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddFloat64(key, value)\n\t}\n}\n\n// AddFloat32 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddFloat32(key string, value float32) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddFloat32(key, value)\n\t}\n}\n\n// AddInt is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddInt(key string, value int) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddInt(key, value)\n\t}\n}\n\n// AddInt64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddInt64(key string, value int64) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddInt64(key, value)\n\t}\n}\n\n// AddInt32 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddInt32(key string, value int32) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddInt32(key, value)\n\t}\n}\n\n// AddInt16 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddInt16(key string, value int16) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddInt16(key, value)\n\t}\n}\n\n// AddInt8 is part of the 
zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddInt8(key string, value int8) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddInt8(key, value)\n\t}\n}\n\n// AddString is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddString(key, value string) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddString(key, value)\n\t}\n}\n\n// AddTime is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddTime(key string, value time.Time) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddTime(key, value)\n\t}\n}\n\n// AddUint is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddUint(key string, value uint) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddUint(key, value)\n\t}\n}\n\n// AddUint64 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddUint64(key string, value uint64) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddUint64(key, value)\n\t}\n}\n\n// AddUint32 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddUint32(key string, value uint32) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddUint32(key, value)\n\t}\n}\n\n// AddUint16 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddUint16(key string, value uint16) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddUint16(key, value)\n\t}\n}\n\n// AddUint8 is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddUint8(key string, value uint8) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddUint8(key, value)\n\t}\n}\n\n// AddUintptr is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddUintptr(key string, value uintptr) {\n\tif !fe.filtered(key, value) {\n\t\tfe.wrapped.AddUintptr(key, value)\n\t}\n}\n\n// AddReflected is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) AddReflected(key string, value any) error {\n\tif !fe.filtered(key, value) {\n\t\treturn 
fe.wrapped.AddReflected(key, value)\n\t}\n\treturn nil\n}\n\n// OpenNamespace is part of the zapcore.ObjectEncoder interface.\nfunc (fe FilterEncoder) OpenNamespace(key string) {\n\tfe.wrapped.OpenNamespace(key)\n}\n\n// Clone is part of the zapcore.ObjectEncoder interface.\n// We don't use it as of Oct 2019 (v2 beta 7); I'm not\n// really sure what it'd be useful for in our case.\nfunc (fe FilterEncoder) Clone() zapcore.Encoder {\n\treturn FilterEncoder{\n\t\tFields:    fe.Fields,\n\t\twrapped:   fe.wrapped.Clone(),\n\t\tkeyPrefix: fe.keyPrefix,\n\t}\n}\n\n// EncodeEntry partially implements the zapcore.Encoder interface.\nfunc (fe FilterEncoder) EncodeEntry(ent zapcore.Entry, fields []zapcore.Field) (*buffer.Buffer, error) {\n\t// without this clone and storing it to fe.wrapped, fields\n\t// from subsequent log entries get appended to previous\n\t// ones, and I'm not 100% sure why; see end of\n\t// https://github.com/uber-go/zap/issues/750\n\tfe.wrapped = fe.wrapped.Clone()\n\tfor _, field := range fields {\n\t\tfield.AddTo(fe)\n\t}\n\treturn fe.wrapped.EncodeEntry(ent, nil)\n}\n\n// filtered returns true if the field was filtered.\n// If true is returned, the field was filtered and\n// added to the underlying encoder (so do not do\n// that again). 
If false is returned, the field has\n// not yet been added to the underlying encoder.\nfunc (fe FilterEncoder) filtered(key string, value any) bool {\n\tfilter, ok := fe.Fields[fe.keyPrefix+key]\n\tif !ok {\n\t\treturn false\n\t}\n\tfilter.Filter(zap.Any(key, value)).AddTo(fe.wrapped)\n\treturn true\n}\n\n// logObjectMarshalerWrapper allows us to recursively\n// filter fields of objects as they get encoded.\ntype logObjectMarshalerWrapper struct {\n\tenc   FilterEncoder\n\tmarsh zapcore.ObjectMarshaler\n}\n\n// MarshalLogObject implements the zapcore.ObjectMarshaler interface.\nfunc (mom logObjectMarshalerWrapper) MarshalLogObject(_ zapcore.ObjectEncoder) error {\n\treturn mom.marsh.MarshalLogObject(mom.enc)\n}\n\n// Interface guards\nvar (\n\t_ zapcore.Encoder                  = (*FilterEncoder)(nil)\n\t_ zapcore.ObjectMarshaler          = (*logObjectMarshalerWrapper)(nil)\n\t_ caddyfile.Unmarshaler            = (*FilterEncoder)(nil)\n\t_ caddy.ConfiguresFormatterDefault = (*FilterEncoder)(nil)\n)\n"
  },
  {
    "path": "modules/logging/filters.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"crypto/sha256\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(DeleteFilter{})\n\tcaddy.RegisterModule(HashFilter{})\n\tcaddy.RegisterModule(ReplaceFilter{})\n\tcaddy.RegisterModule(IPMaskFilter{})\n\tcaddy.RegisterModule(QueryFilter{})\n\tcaddy.RegisterModule(CookieFilter{})\n\tcaddy.RegisterModule(RegexpFilter{})\n\tcaddy.RegisterModule(RenameFilter{})\n\tcaddy.RegisterModule(MultiRegexpFilter{})\n}\n\n// LogFieldFilter can filter (or manipulate)\n// a field in a log entry.\ntype LogFieldFilter interface {\n\tFilter(zapcore.Field) zapcore.Field\n}\n\n// DeleteFilter is a Caddy log field filter that\n// deletes the field.\ntype DeleteFilter struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (DeleteFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.delete\",\n\t\tNew: func() caddy.Module { return new(DeleteFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (DeleteFilter) 
UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\treturn nil\n}\n\n// Filter filters the input field.\nfunc (DeleteFilter) Filter(in zapcore.Field) zapcore.Field {\n\tin.Type = zapcore.SkipType\n\treturn in\n}\n\n// hash returns the first 4 bytes of the SHA-256 hash of the given data as hexadecimal\nfunc hash(s string) string {\n\treturn fmt.Sprintf(\"%.4x\", sha256.Sum256([]byte(s)))\n}\n\n// HashFilter is a Caddy log field filter that\n// replaces the field with the initial 4 bytes\n// of the SHA-256 hash of the content. Operates\n// on string fields, or on arrays of strings\n// where each string is hashed.\ntype HashFilter struct{}\n\n// CaddyModule returns the Caddy module information.\nfunc (HashFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.hash\",\n\t\tNew: func() caddy.Module { return new(HashFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (f *HashFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\treturn nil\n}\n\n// Filter filters the input field with the replacement value.\nfunc (f *HashFilter) Filter(in zapcore.Field) zapcore.Field {\n\tif array, ok := in.Interface.(caddyhttp.LoggableStringArray); ok {\n\t\tnewArray := make(caddyhttp.LoggableStringArray, len(array))\n\t\tfor i, s := range array {\n\t\t\tnewArray[i] = hash(s)\n\t\t}\n\t\tin.Interface = newArray\n\t} else {\n\t\tin.String = hash(in.String)\n\t}\n\n\treturn in\n}\n\n// ReplaceFilter is a Caddy log field filter that\n// replaces the field with the indicated string.\ntype ReplaceFilter struct {\n\tValue string `json:\"value,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (ReplaceFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.replace\",\n\t\tNew: func() caddy.Module { return new(ReplaceFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (f 
*ReplaceFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\tif d.NextArg() {\n\t\tf.Value = d.Val()\n\t}\n\treturn nil\n}\n\n// Filter filters the input field with the replacement value.\nfunc (f *ReplaceFilter) Filter(in zapcore.Field) zapcore.Field {\n\tin.Type = zapcore.StringType\n\tin.String = f.Value\n\treturn in\n}\n\n// IPMaskFilter is a Caddy log field filter that\n// masks IP addresses in a string, or in an array\n// of strings. The string may be a comma-separated\n// list of IP addresses, where all of the values\n// will be masked.\ntype IPMaskFilter struct {\n\t// The IPv4 mask, as a subnet size CIDR.\n\tIPv4MaskRaw int `json:\"ipv4_cidr,omitempty\"`\n\n\t// The IPv6 mask, as a subnet size CIDR.\n\tIPv6MaskRaw int `json:\"ipv6_cidr,omitempty\"`\n\n\tv4Mask net.IPMask\n\tv6Mask net.IPMask\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (IPMaskFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.ip_mask\",\n\t\tNew: func() caddy.Module { return new(IPMaskFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (m *IPMaskFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\n\targs := d.RemainingArgs()\n\tif len(args) > 2 {\n\t\treturn d.Errf(\"too many arguments\")\n\t}\n\tif len(args) > 0 {\n\t\tval, err := strconv.Atoi(args[0])\n\t\tif err != nil {\n\t\t\treturn d.Errf(\"error parsing %s: %v\", args[0], err)\n\t\t}\n\t\tm.IPv4MaskRaw = val\n\n\t\tif len(args) > 1 {\n\t\t\tval, err := strconv.Atoi(args[1])\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"error parsing %s: %v\", args[1], err)\n\t\t\t}\n\t\t\tm.IPv6MaskRaw = val\n\t\t}\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"ipv4\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tval, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"error 
parsing %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tm.IPv4MaskRaw = val\n\n\t\tcase \"ipv6\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tval, err := strconv.Atoi(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"error parsing %s: %v\", d.Val(), err)\n\t\t\t}\n\t\t\tm.IPv6MaskRaw = val\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// Provision parses m's IP masks, from integers.\nfunc (m *IPMaskFilter) Provision(ctx caddy.Context) error {\n\tparseRawToMask := func(rawField int, bitLen int) net.IPMask {\n\t\tif rawField == 0 {\n\t\t\treturn nil\n\t\t}\n\n\t\t// we assume the int is a subnet size CIDR\n\t\t// e.g. \"16\" being equivalent to masking the last\n\t\t// two bytes of an ipv4 address, like \"255.255.0.0\"\n\t\treturn net.CIDRMask(rawField, bitLen)\n\t}\n\n\tm.v4Mask = parseRawToMask(m.IPv4MaskRaw, 32)\n\tm.v6Mask = parseRawToMask(m.IPv6MaskRaw, 128)\n\n\treturn nil\n}\n\n// Filter filters the input field.\nfunc (m IPMaskFilter) Filter(in zapcore.Field) zapcore.Field {\n\tif array, ok := in.Interface.(caddyhttp.LoggableStringArray); ok {\n\t\tnewArray := make(caddyhttp.LoggableStringArray, len(array))\n\t\tfor i, s := range array {\n\t\t\tnewArray[i] = m.mask(s)\n\t\t}\n\t\tin.Interface = newArray\n\t} else {\n\t\tin.String = m.mask(in.String)\n\t}\n\n\treturn in\n}\n\nfunc (m IPMaskFilter) mask(s string) string {\n\tparts := make([]string, 0)\n\tfor value := range strings.SplitSeq(s, \",\") {\n\t\tvalue = strings.TrimSpace(value)\n\t\thost, port, err := net.SplitHostPort(value)\n\t\tif err != nil {\n\t\t\thost = value // assume whole thing was IP address\n\t\t}\n\t\tipAddr := net.ParseIP(host)\n\t\tif ipAddr == nil {\n\t\t\tparts = append(parts, value)\n\t\t\tcontinue\n\t\t}\n\t\tmask := m.v4Mask\n\t\tif ipAddr.To4() == nil {\n\t\t\tmask = m.v6Mask\n\t\t}\n\t\tmasked := ipAddr.Mask(mask)\n\t\tif port == \"\" {\n\t\t\tparts = append(parts, 
masked.String())\n\t\t\tcontinue\n\t\t}\n\n\t\tparts = append(parts, net.JoinHostPort(masked.String(), port))\n\t}\n\treturn strings.Join(parts, \", \")\n}\n\ntype filterAction string\n\nconst (\n\t// Replace value(s).\n\treplaceAction filterAction = \"replace\"\n\n\t// Hash value(s).\n\thashAction filterAction = \"hash\"\n\n\t// Delete.\n\tdeleteAction filterAction = \"delete\"\n)\n\nfunc (a filterAction) IsValid() error {\n\tswitch a {\n\tcase replaceAction, deleteAction, hashAction:\n\t\treturn nil\n\t}\n\n\treturn errors.New(\"invalid action type\")\n}\n\ntype queryFilterAction struct {\n\t// `replace` to replace the value(s) associated with the parameter(s), `hash` to replace them with the 4 initial bytes of the SHA-256 of their content or `delete` to remove them entirely.\n\tType filterAction `json:\"type\"`\n\n\t// The name of the query parameter.\n\tParameter string `json:\"parameter\"`\n\n\t// The value to use as replacement if the action is `replace`.\n\tValue string `json:\"value,omitempty\"`\n}\n\n// QueryFilter is a Caddy log field filter that filters\n// query parameters from a URL.\n//\n// This filter updates the logged URL string to remove, replace or hash\n// query parameters containing sensitive data. 
For instance, it can be\n// used to redact any kind of secrets which were passed as query parameters,\n// such as OAuth access tokens, session IDs, magic link tokens, etc.\ntype QueryFilter struct {\n\t// A list of actions to apply to the query parameters of the URL.\n\tActions []queryFilterAction `json:\"actions\"`\n}\n\n// Validate checks that action types are correct.\nfunc (f *QueryFilter) Validate() error {\n\tfor _, a := range f.Actions {\n\t\tif err := a.Type.IsValid(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (QueryFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.query\",\n\t\tNew: func() caddy.Module { return new(QueryFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (m *QueryFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\tfor d.NextBlock(0) {\n\t\tqfa := queryFilterAction{}\n\t\tswitch d.Val() {\n\t\tcase \"replace\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tqfa.Type = replaceAction\n\t\t\tqfa.Parameter = d.Val()\n\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tqfa.Value = d.Val()\n\n\t\tcase \"hash\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tqfa.Type = hashAction\n\t\t\tqfa.Parameter = d.Val()\n\n\t\tcase \"delete\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tqfa.Type = deleteAction\n\t\t\tqfa.Parameter = d.Val()\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\n\t\tm.Actions = append(m.Actions, qfa)\n\t}\n\treturn nil\n}\n\n// Filter filters the input field.\nfunc (m QueryFilter) Filter(in zapcore.Field) zapcore.Field {\n\tif array, ok := in.Interface.(caddyhttp.LoggableStringArray); ok {\n\t\tnewArray := make(caddyhttp.LoggableStringArray, len(array))\n\t\tfor i, s := range 
array {\n\t\t\tnewArray[i] = m.processQueryString(s)\n\t\t}\n\t\tin.Interface = newArray\n\t} else {\n\t\tin.String = m.processQueryString(in.String)\n\t}\n\n\treturn in\n}\n\nfunc (m QueryFilter) processQueryString(s string) string {\n\tu, err := url.Parse(s)\n\tif err != nil {\n\t\treturn s\n\t}\n\n\tq := u.Query()\n\tfor _, a := range m.Actions {\n\t\tswitch a.Type {\n\t\tcase replaceAction:\n\t\t\tfor i := range q[a.Parameter] {\n\t\t\t\tq[a.Parameter][i] = a.Value\n\t\t\t}\n\n\t\tcase hashAction:\n\t\t\tfor i := range q[a.Parameter] {\n\t\t\t\tq[a.Parameter][i] = hash(a.Value)\n\t\t\t}\n\n\t\tcase deleteAction:\n\t\t\tq.Del(a.Parameter)\n\t\t}\n\t}\n\n\tu.RawQuery = q.Encode()\n\treturn u.String()\n}\n\ntype cookieFilterAction struct {\n\t// `replace` to replace the value of the cookie, `hash` to replace it with the 4 initial bytes of the SHA-256 of its content or `delete` to remove it entirely.\n\tType filterAction `json:\"type\"`\n\n\t// The name of the cookie.\n\tName string `json:\"name\"`\n\n\t// The value to use as replacement if the action is `replace`.\n\tValue string `json:\"value,omitempty\"`\n}\n\n// CookieFilter is a Caddy log field filter that filters\n// cookies.\n//\n// This filter updates the logged HTTP header string\n// to remove, replace or hash cookies containing sensitive data. 
For instance,\n// it can be used to redact any kind of secrets, such as session IDs.\n//\n// If several actions are configured for the same cookie name, only the first\n// will be applied.\ntype CookieFilter struct {\n\t// A list of actions to apply to the cookies.\n\tActions []cookieFilterAction `json:\"actions\"`\n}\n\n// Validate checks that action types are correct.\nfunc (f *CookieFilter) Validate() error {\n\tfor _, a := range f.Actions {\n\t\tif err := a.Type.IsValid(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (CookieFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.cookie\",\n\t\tNew: func() caddy.Module { return new(CookieFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (m *CookieFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\tfor d.NextBlock(0) {\n\t\tcfa := cookieFilterAction{}\n\t\tswitch d.Val() {\n\t\tcase \"replace\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tcfa.Type = replaceAction\n\t\t\tcfa.Name = d.Val()\n\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tcfa.Value = d.Val()\n\n\t\tcase \"hash\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tcfa.Type = hashAction\n\t\t\tcfa.Name = d.Val()\n\n\t\tcase \"delete\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\n\t\t\tcfa.Type = deleteAction\n\t\t\tcfa.Name = d.Val()\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\n\t\tm.Actions = append(m.Actions, cfa)\n\t}\n\treturn nil\n}\n\n// Filter filters the input field.\nfunc (m CookieFilter) Filter(in zapcore.Field) zapcore.Field {\n\tcookiesSlice, ok := in.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok {\n\t\treturn in\n\t}\n\n\t// using a dummy Request to make use of the Cookies() function to 
parse it\n\toriginRequest := http.Request{Header: http.Header{\"Cookie\": cookiesSlice}}\n\tcookies := originRequest.Cookies()\n\ttransformedRequest := http.Request{Header: make(http.Header)}\n\nOUTER:\n\tfor _, c := range cookies {\n\t\tfor _, a := range m.Actions {\n\t\t\tif c.Name != a.Name {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tswitch a.Type {\n\t\t\tcase replaceAction:\n\t\t\t\tc.Value = a.Value\n\t\t\t\ttransformedRequest.AddCookie(c)\n\t\t\t\tcontinue OUTER\n\n\t\t\tcase hashAction:\n\t\t\t\tc.Value = hash(c.Value)\n\t\t\t\ttransformedRequest.AddCookie(c)\n\t\t\t\tcontinue OUTER\n\n\t\t\tcase deleteAction:\n\t\t\t\tcontinue OUTER\n\t\t\t}\n\t\t}\n\n\t\ttransformedRequest.AddCookie(c)\n\t}\n\n\tin.Interface = caddyhttp.LoggableStringArray(transformedRequest.Header[\"Cookie\"])\n\n\treturn in\n}\n\n// RegexpFilter is a Caddy log field filter that\n// replaces the field matching the provided regexp\n// with the indicated string. If the field is an\n// array of strings, each of them will have the\n// regexp replacement applied.\ntype RegexpFilter struct {\n\t// The regular expression pattern defining what to replace.\n\tRawRegexp string `json:\"regexp,omitempty\"`\n\n\t// The value to use as replacement\n\tValue string `json:\"value,omitempty\"`\n\n\tregexp *regexp.Regexp\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (RegexpFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.regexp\",\n\t\tNew: func() caddy.Module { return new(RegexpFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (f *RegexpFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\tif d.NextArg() {\n\t\tf.RawRegexp = d.Val()\n\t}\n\tif d.NextArg() {\n\t\tf.Value = d.Val()\n\t}\n\treturn nil\n}\n\n// Provision compiles m's regexp.\nfunc (m *RegexpFilter) Provision(ctx caddy.Context) error {\n\tr, err := regexp.Compile(m.RawRegexp)\n\tif err 
!= nil {\n\t\treturn err\n\t}\n\n\tm.regexp = r\n\n\treturn nil\n}\n\n// Filter filters the input field with the replacement value if it matches the regexp.\nfunc (f *RegexpFilter) Filter(in zapcore.Field) zapcore.Field {\n\tif array, ok := in.Interface.(caddyhttp.LoggableStringArray); ok {\n\t\tnewArray := make(caddyhttp.LoggableStringArray, len(array))\n\t\tfor i, s := range array {\n\t\t\tnewArray[i] = f.regexp.ReplaceAllString(s, f.Value)\n\t\t}\n\t\tin.Interface = newArray\n\t} else {\n\t\tin.String = f.regexp.ReplaceAllString(in.String, f.Value)\n\t}\n\n\treturn in\n}\n\n// regexpFilterOperation represents a single regexp operation\n// within a MultiRegexpFilter.\ntype regexpFilterOperation struct {\n\t// The regular expression pattern defining what to replace.\n\tRawRegexp string `json:\"regexp,omitempty\"`\n\n\t// The value to use as replacement\n\tValue string `json:\"value,omitempty\"`\n\n\tregexp *regexp.Regexp\n}\n\n// MultiRegexpFilter is a Caddy log field filter that\n// can apply multiple regular expression replacements to\n// the same field. 
This filter processes operations in the\n// order they are defined, applying each regexp replacement\n// sequentially to the result of the previous operation.\n//\n// This allows users to define multiple regexp filters for\n// the same field without them overwriting each other.\n//\n// Security considerations:\n// - Uses Go's regexp package (RE2 engine) which is safe from ReDoS attacks\n// - Validates all patterns during provisioning\n// - Limits the maximum number of operations to prevent resource exhaustion\n// - Truncates oversized input to bound memory and CPU usage\ntype MultiRegexpFilter struct {\n\t// A list of regexp operations to apply in sequence.\n\t// Maximum of 50 operations allowed for security and performance.\n\tOperations []regexpFilterOperation `json:\"operations\"`\n}\n\n// Security constants\nconst (\n\tmaxRegexpOperations = 50   // Maximum operations to prevent resource exhaustion\n\tmaxPatternLength    = 1000 // Maximum pattern length to prevent abuse\n)\n\n// CaddyModule returns the Caddy module information.\nfunc (MultiRegexpFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.multi_regexp\",\n\t\tNew: func() caddy.Module { return new(MultiRegexpFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\n// Syntax:\n//\n//\tmulti_regexp {\n//\t    regexp <pattern> <replacement>\n//\t    regexp <pattern> <replacement>\n//\t    ...\n//\t}\nfunc (f *MultiRegexpFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"regexp\":\n\t\t\t// Security check: limit number of operations\n\t\t\tif len(f.Operations) >= maxRegexpOperations {\n\t\t\t\treturn d.Errf(\"too many regexp operations (maximum %d allowed)\", maxRegexpOperations)\n\t\t\t}\n\n\t\t\top := regexpFilterOperation{}\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\top.RawRegexp = d.Val()\n\n\t\t\t// 
Security validation: check pattern length\n\t\t\tif len(op.RawRegexp) > maxPatternLength {\n\t\t\t\treturn d.Errf(\"regexp pattern too long (maximum %d characters)\", maxPatternLength)\n\t\t\t}\n\n\t\t\t// Security validation: basic pattern validation\n\t\t\tif op.RawRegexp == \"\" {\n\t\t\t\treturn d.Errf(\"regexp pattern cannot be empty\")\n\t\t\t}\n\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\top.Value = d.Val()\n\t\t\tf.Operations = append(f.Operations, op)\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %s\", d.Val())\n\t\t}\n\t}\n\n\t// Security check: ensure at least one operation is defined\n\tif len(f.Operations) == 0 {\n\t\treturn d.Err(\"multi_regexp filter requires at least one regexp operation\")\n\t}\n\n\treturn nil\n}\n\n// Provision compiles all regexp patterns with security validation.\nfunc (f *MultiRegexpFilter) Provision(ctx caddy.Context) error {\n\t// Security check: validate operation count\n\tif len(f.Operations) > maxRegexpOperations {\n\t\treturn fmt.Errorf(\"too many regexp operations: %d (maximum %d allowed)\", len(f.Operations), maxRegexpOperations)\n\t}\n\n\tif len(f.Operations) == 0 {\n\t\treturn fmt.Errorf(\"multi_regexp filter requires at least one operation\")\n\t}\n\n\tfor i := range f.Operations {\n\t\t// Security validation: pattern length check\n\t\tif len(f.Operations[i].RawRegexp) > maxPatternLength {\n\t\t\treturn fmt.Errorf(\"regexp pattern %d too long: %d characters (maximum %d)\", i, len(f.Operations[i].RawRegexp), maxPatternLength)\n\t\t}\n\n\t\t// Security validation: empty pattern check\n\t\tif f.Operations[i].RawRegexp == \"\" {\n\t\t\treturn fmt.Errorf(\"regexp pattern %d cannot be empty\", i)\n\t\t}\n\n\t\t// Compile and validate the pattern (uses RE2 engine - safe from ReDoS)\n\t\tr, err := regexp.Compile(f.Operations[i].RawRegexp)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"compiling regexp pattern %d (%s): %v\", i, f.Operations[i].RawRegexp, 
err)\n\t\t}\n\t\tf.Operations[i].regexp = r\n\t}\n\treturn nil\n}\n\n// Validate ensures the filter is properly configured with security checks.\nfunc (f *MultiRegexpFilter) Validate() error {\n\tif len(f.Operations) == 0 {\n\t\treturn fmt.Errorf(\"multi_regexp filter requires at least one operation\")\n\t}\n\n\tif len(f.Operations) > maxRegexpOperations {\n\t\treturn fmt.Errorf(\"too many regexp operations: %d (maximum %d allowed)\", len(f.Operations), maxRegexpOperations)\n\t}\n\n\tfor i, op := range f.Operations {\n\t\tif op.RawRegexp == \"\" {\n\t\t\treturn fmt.Errorf(\"regexp pattern %d cannot be empty\", i)\n\t\t}\n\t\tif len(op.RawRegexp) > maxPatternLength {\n\t\t\treturn fmt.Errorf(\"regexp pattern %d too long: %d characters (maximum %d)\", i, len(op.RawRegexp), maxPatternLength)\n\t\t}\n\t\tif op.regexp == nil {\n\t\t\treturn fmt.Errorf(\"regexp pattern %d not compiled (call Provision first)\", i)\n\t\t}\n\t}\n\treturn nil\n}\n\n// Filter applies all regexp operations sequentially to the input field.\n// Oversized input is truncated before processing.\nfunc (f *MultiRegexpFilter) Filter(in zapcore.Field) zapcore.Field {\n\tif array, ok := in.Interface.(caddyhttp.LoggableStringArray); ok {\n\t\tnewArray := make(caddyhttp.LoggableStringArray, len(array))\n\t\tfor i, s := range array {\n\t\t\tnewArray[i] = f.processString(s)\n\t\t}\n\t\tin.Interface = newArray\n\t} else {\n\t\tin.String = f.processString(in.String)\n\t}\n\n\treturn in\n}\n\n// processString applies all regexp operations to a single string with input validation.\nfunc (f *MultiRegexpFilter) processString(s string) string {\n\t// Security: limit input string length to prevent resource exhaustion\n\tconst maxInputLength = 1000000 // 1MB max input size\n\tif len(s) > maxInputLength {\n\t\t// Truncate the input and continue processing\n\t\ts = s[:maxInputLength]\n\t}\n\n\tresult := s\n\tfor _, op := range f.Operations {\n\t\t// Each regexp operation is applied sequentially\n\t\t// Using 
RE2 engine which is safe from ReDoS attacks\n\t\tresult = op.regexp.ReplaceAllString(result, op.Value)\n\n\t\t// Ensure result doesn't exceed max length after each operation\n\t\tif len(result) > maxInputLength {\n\t\t\tresult = result[:maxInputLength]\n\t\t}\n\t}\n\treturn result\n}\n\n// AddOperation adds a single regexp operation to the filter with validation.\n// This is used when merging multiple RegexpFilter instances.\nfunc (f *MultiRegexpFilter) AddOperation(rawRegexp, value string) error {\n\t// Security checks\n\tif len(f.Operations) >= maxRegexpOperations {\n\t\treturn fmt.Errorf(\"cannot add operation: maximum %d operations allowed\", maxRegexpOperations)\n\t}\n\n\tif rawRegexp == \"\" {\n\t\treturn fmt.Errorf(\"regexp pattern cannot be empty\")\n\t}\n\n\tif len(rawRegexp) > maxPatternLength {\n\t\treturn fmt.Errorf(\"regexp pattern too long: %d characters (maximum %d)\", len(rawRegexp), maxPatternLength)\n\t}\n\n\tf.Operations = append(f.Operations, regexpFilterOperation{\n\t\tRawRegexp: rawRegexp,\n\t\tValue:     value,\n\t})\n\treturn nil\n}\n\n// RenameFilter is a Caddy log field filter that\n// renames the field's key with the indicated name.\ntype RenameFilter struct {\n\tName string `json:\"name,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (RenameFilter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.encoders.filter.rename\",\n\t\tNew: func() caddy.Module { return new(RenameFilter) },\n\t}\n}\n\n// UnmarshalCaddyfile sets up the module from Caddyfile tokens.\nfunc (f *RenameFilter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume filter name\n\tif d.NextArg() {\n\t\tf.Name = d.Val()\n\t}\n\treturn nil\n}\n\n// Filter renames the input field with the replacement name.\nfunc (f *RenameFilter) Filter(in zapcore.Field) zapcore.Field {\n\tin.Key = f.Name\n\treturn in\n}\n\n// Interface guards\nvar (\n\t_ LogFieldFilter = (*DeleteFilter)(nil)\n\t_ 
LogFieldFilter = (*HashFilter)(nil)\n\t_ LogFieldFilter = (*ReplaceFilter)(nil)\n\t_ LogFieldFilter = (*IPMaskFilter)(nil)\n\t_ LogFieldFilter = (*QueryFilter)(nil)\n\t_ LogFieldFilter = (*CookieFilter)(nil)\n\t_ LogFieldFilter = (*RegexpFilter)(nil)\n\t_ LogFieldFilter = (*RenameFilter)(nil)\n\t_ LogFieldFilter = (*MultiRegexpFilter)(nil)\n\n\t_ caddyfile.Unmarshaler = (*DeleteFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*HashFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*ReplaceFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*IPMaskFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*QueryFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*CookieFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*RegexpFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*RenameFilter)(nil)\n\t_ caddyfile.Unmarshaler = (*MultiRegexpFilter)(nil)\n\n\t_ caddy.Provisioner = (*IPMaskFilter)(nil)\n\t_ caddy.Provisioner = (*RegexpFilter)(nil)\n\t_ caddy.Provisioner = (*MultiRegexpFilter)(nil)\n\n\t_ caddy.Validator = (*QueryFilter)(nil)\n\t_ caddy.Validator = (*MultiRegexpFilter)(nil)\n)\n"
  },
  {
    "path": "modules/logging/filters_test.go",
    "content": "package logging\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"go.uber.org/zap/zapcore\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc TestIPMaskSingleValue(t *testing.T) {\n\tf := IPMaskFilter{IPv4MaskRaw: 16, IPv6MaskRaw: 32}\n\tf.Provision(caddy.Context{})\n\n\tout := f.Filter(zapcore.Field{String: \"255.255.255.255\"})\n\tif out.String != \"255.255.0.0\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n\n\tout = f.Filter(zapcore.Field{String: \"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff\"})\n\tif out.String != \"ffff:ffff::\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n\n\tout = f.Filter(zapcore.Field{String: \"not-an-ip\"})\n\tif out.String != \"not-an-ip\" {\n\t\tt.Fatalf(\"field has been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestIPMaskCommaValue(t *testing.T) {\n\tf := IPMaskFilter{IPv4MaskRaw: 16, IPv6MaskRaw: 32}\n\tf.Provision(caddy.Context{})\n\n\tout := f.Filter(zapcore.Field{String: \"255.255.255.255, 244.244.244.244\"})\n\tif out.String != \"255.255.0.0, 244.244.0.0\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n\n\tout = f.Filter(zapcore.Field{String: \"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff, ff00:ffff:ffff:ffff:ffff:ffff:ffff:ffff\"})\n\tif out.String != \"ffff:ffff::, ff00:ffff::\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n\n\tout = f.Filter(zapcore.Field{String: \"not-an-ip, 255.255.255.255\"})\n\tif out.String != \"not-an-ip, 255.255.0.0\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestIPMaskMultiValue(t *testing.T) {\n\tf := IPMaskFilter{IPv4MaskRaw: 16, IPv6MaskRaw: 32}\n\tf.Provision(caddy.Context{})\n\n\tout := f.Filter(zapcore.Field{Interface: caddyhttp.LoggableStringArray{\n\t\t\"255.255.255.255\",\n\t\t\"244.244.244.244\",\n\t}})\n\tarr, ok := out.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok 
{\n\t\tt.Fatalf(\"field is wrong type: %T\", out.Integer)\n\t}\n\tif arr[0] != \"255.255.0.0\" {\n\t\tt.Fatalf(\"field entry 0 has not been filtered: %s\", arr[0])\n\t}\n\tif arr[1] != \"244.244.0.0\" {\n\t\tt.Fatalf(\"field entry 1 has not been filtered: %s\", arr[1])\n\t}\n\n\tout = f.Filter(zapcore.Field{Interface: caddyhttp.LoggableStringArray{\n\t\t\"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff\",\n\t\t\"ff00:ffff:ffff:ffff:ffff:ffff:ffff:ffff\",\n\t}})\n\tarr, ok = out.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok {\n\t\tt.Fatalf(\"field is wrong type: %T\", out.Integer)\n\t}\n\tif arr[0] != \"ffff:ffff::\" {\n\t\tt.Fatalf(\"field entry 0 has not been filtered: %s\", arr[0])\n\t}\n\tif arr[1] != \"ff00:ffff::\" {\n\t\tt.Fatalf(\"field entry 1 has not been filtered: %s\", arr[1])\n\t}\n}\n\nfunc TestQueryFilterSingleValue(t *testing.T) {\n\tf := QueryFilter{[]queryFilterAction{\n\t\t{replaceAction, \"foo\", \"REDACTED\"},\n\t\t{replaceAction, \"notexist\", \"REDACTED\"},\n\t\t{deleteAction, \"bar\", \"\"},\n\t\t{deleteAction, \"notexist\", \"\"},\n\t\t{hashAction, \"hash\", \"\"},\n\t}}\n\n\tif f.Validate() != nil {\n\t\tt.Fatalf(\"the filter must be valid\")\n\t}\n\n\tout := f.Filter(zapcore.Field{String: \"/path?foo=a&foo=b&bar=c&bar=d&baz=e&hash=hashed\"})\n\tif out.String != \"/path?baz=e&foo=REDACTED&foo=REDACTED&hash=e3b0c442\" {\n\t\tt.Fatalf(\"query parameters have not been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestQueryFilterMultiValue(t *testing.T) {\n\tf := QueryFilter{\n\t\tActions: []queryFilterAction{\n\t\t\t{Type: replaceAction, Parameter: \"foo\", Value: \"REDACTED\"},\n\t\t\t{Type: replaceAction, Parameter: \"notexist\", Value: \"REDACTED\"},\n\t\t\t{Type: deleteAction, Parameter: \"bar\"},\n\t\t\t{Type: deleteAction, Parameter: \"notexist\"},\n\t\t\t{Type: hashAction, Parameter: \"hash\"},\n\t\t},\n\t}\n\n\tif f.Validate() != nil {\n\t\tt.Fatalf(\"the filter must be valid\")\n\t}\n\n\tout := f.Filter(zapcore.Field{Interface: 
caddyhttp.LoggableStringArray{\n\t\t\"/path1?foo=a&foo=b&bar=c&bar=d&baz=e&hash=hashed\",\n\t\t\"/path2?foo=c&foo=d&bar=e&bar=f&baz=g&hash=hashed\",\n\t}})\n\tarr, ok := out.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok {\n\t\tt.Fatalf(\"field is wrong type: %T\", out.Interface)\n\t}\n\n\texpected1 := \"/path1?baz=e&foo=REDACTED&foo=REDACTED&hash=e3b0c442\"\n\texpected2 := \"/path2?baz=g&foo=REDACTED&foo=REDACTED&hash=e3b0c442\"\n\tif arr[0] != expected1 {\n\t\tt.Fatalf(\"query parameters in entry 0 have not been filtered correctly: got %s, expected %s\", arr[0], expected1)\n\t}\n\tif arr[1] != expected2 {\n\t\tt.Fatalf(\"query parameters in entry 1 have not been filtered correctly: got %s, expected %s\", arr[1], expected2)\n\t}\n}\n\nfunc TestValidateQueryFilter(t *testing.T) {\n\tf := QueryFilter{[]queryFilterAction{\n\t\t{},\n\t}}\n\tif f.Validate() == nil {\n\t\tt.Fatalf(\"empty action type must be invalid\")\n\t}\n\n\tf = QueryFilter{[]queryFilterAction{\n\t\t{Type: \"foo\"},\n\t}}\n\tif f.Validate() == nil {\n\t\tt.Fatalf(\"unknown action type must be invalid\")\n\t}\n}\n\nfunc TestCookieFilter(t *testing.T) {\n\tf := CookieFilter{[]cookieFilterAction{\n\t\t{replaceAction, \"foo\", \"REDACTED\"},\n\t\t{deleteAction, \"bar\", \"\"},\n\t\t{hashAction, \"hash\", \"\"},\n\t}}\n\n\tout := f.Filter(zapcore.Field{Interface: caddyhttp.LoggableStringArray{\n\t\t\"foo=a; foo=b; bar=c; bar=d; baz=e; hash=hashed\",\n\t}})\n\toutval := out.Interface.(caddyhttp.LoggableStringArray)\n\texpected := caddyhttp.LoggableStringArray{\n\t\t\"foo=REDACTED; foo=REDACTED; baz=e; hash=1a06df82\",\n\t}\n\tif outval[0] != expected[0] {\n\t\tt.Fatalf(\"cookies have not been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestValidateCookieFilter(t *testing.T) {\n\tf := CookieFilter{[]cookieFilterAction{\n\t\t{},\n\t}}\n\tif f.Validate() == nil {\n\t\tt.Fatalf(\"empty action type must be invalid\")\n\t}\n\n\tf = CookieFilter{[]cookieFilterAction{\n\t\t{Type: \"foo\"},\n\t}}\n\tif 
f.Validate() == nil {\n\t\tt.Fatalf(\"unknown action type must be invalid\")\n\t}\n}\n\nfunc TestRegexpFilterSingleValue(t *testing.T) {\n\tf := RegexpFilter{RawRegexp: `secret`, Value: \"REDACTED\"}\n\tf.Provision(caddy.Context{})\n\n\tout := f.Filter(zapcore.Field{String: \"foo-secret-bar\"})\n\tif out.String != \"foo-REDACTED-bar\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestRegexpFilterMultiValue(t *testing.T) {\n\tf := RegexpFilter{RawRegexp: `secret`, Value: \"REDACTED\"}\n\tf.Provision(caddy.Context{})\n\n\tout := f.Filter(zapcore.Field{Interface: caddyhttp.LoggableStringArray{\"foo-secret-bar\", \"bar-secret-foo\"}})\n\tarr, ok := out.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok {\n\t\tt.Fatalf(\"field is wrong type: %T\", out.Integer)\n\t}\n\tif arr[0] != \"foo-REDACTED-bar\" {\n\t\tt.Fatalf(\"field entry 0 has not been filtered: %s\", arr[0])\n\t}\n\tif arr[1] != \"bar-REDACTED-foo\" {\n\t\tt.Fatalf(\"field entry 1 has not been filtered: %s\", arr[1])\n\t}\n}\n\nfunc TestHashFilterSingleValue(t *testing.T) {\n\tf := HashFilter{}\n\n\tout := f.Filter(zapcore.Field{String: \"foo\"})\n\tif out.String != \"2c26b46b\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestHashFilterMultiValue(t *testing.T) {\n\tf := HashFilter{}\n\n\tout := f.Filter(zapcore.Field{Interface: caddyhttp.LoggableStringArray{\"foo\", \"bar\"}})\n\tarr, ok := out.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok {\n\t\tt.Fatalf(\"field is wrong type: %T\", out.Integer)\n\t}\n\tif arr[0] != \"2c26b46b\" {\n\t\tt.Fatalf(\"field entry 0 has not been filtered: %s\", arr[0])\n\t}\n\tif arr[1] != \"fcde2b2e\" {\n\t\tt.Fatalf(\"field entry 1 has not been filtered: %s\", arr[1])\n\t}\n}\n\nfunc TestMultiRegexpFilterSingleOperation(t *testing.T) {\n\tf := MultiRegexpFilter{\n\t\tOperations: []regexpFilterOperation{\n\t\t\t{RawRegexp: `secret`, Value: \"REDACTED\"},\n\t\t},\n\t}\n\terr := 
f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\n\tout := f.Filter(zapcore.Field{String: \"foo-secret-bar\"})\n\tif out.String != \"foo-REDACTED-bar\" {\n\t\tt.Fatalf(\"field has not been filtered: %s\", out.String)\n\t}\n}\n\nfunc TestMultiRegexpFilterMultipleOperations(t *testing.T) {\n\tf := MultiRegexpFilter{\n\t\tOperations: []regexpFilterOperation{\n\t\t\t{RawRegexp: `secret`, Value: \"REDACTED\"},\n\t\t\t{RawRegexp: `password`, Value: \"HIDDEN\"},\n\t\t\t{RawRegexp: `token`, Value: \"XXX\"},\n\t\t},\n\t}\n\terr := f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\n\t// Test sequential application\n\tout := f.Filter(zapcore.Field{String: \"my-secret-password-token-data\"})\n\texpected := \"my-REDACTED-HIDDEN-XXX-data\"\n\tif out.String != expected {\n\t\tt.Fatalf(\"field has not been filtered correctly: got %s, expected %s\", out.String, expected)\n\t}\n}\n\nfunc TestMultiRegexpFilterMultiValue(t *testing.T) {\n\tf := MultiRegexpFilter{\n\t\tOperations: []regexpFilterOperation{\n\t\t\t{RawRegexp: `secret`, Value: \"REDACTED\"},\n\t\t\t{RawRegexp: `\\d+`, Value: \"NUM\"},\n\t\t},\n\t}\n\terr := f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\n\tout := f.Filter(zapcore.Field{Interface: caddyhttp.LoggableStringArray{\n\t\t\"foo-secret-123\",\n\t\t\"bar-secret-456\",\n\t}})\n\tarr, ok := out.Interface.(caddyhttp.LoggableStringArray)\n\tif !ok {\n\t\tt.Fatalf(\"field is wrong type: %T\", out.Interface)\n\t}\n\tif arr[0] != \"foo-REDACTED-NUM\" {\n\t\tt.Fatalf(\"field entry 0 has not been filtered: %s\", arr[0])\n\t}\n\tif arr[1] != \"bar-REDACTED-NUM\" {\n\t\tt.Fatalf(\"field entry 1 has not been filtered: %s\", arr[1])\n\t}\n}\n\nfunc TestMultiRegexpFilterAddOperation(t *testing.T) {\n\tf := MultiRegexpFilter{}\n\terr := f.AddOperation(\"secret\", \"REDACTED\")\n\tif err != 
nil {\n\t\tt.Fatalf(\"unexpected error adding operation: %v\", err)\n\t}\n\terr = f.AddOperation(\"password\", \"HIDDEN\")\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error adding operation: %v\", err)\n\t}\n\terr = f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\n\tif len(f.Operations) != 2 {\n\t\tt.Fatalf(\"expected 2 operations, got %d\", len(f.Operations))\n\t}\n\n\tout := f.Filter(zapcore.Field{String: \"my-secret-password\"})\n\texpected := \"my-REDACTED-HIDDEN\"\n\tif out.String != expected {\n\t\tt.Fatalf(\"field has not been filtered correctly: got %s, expected %s\", out.String, expected)\n\t}\n}\n\nfunc TestMultiRegexpFilterSecurityLimits(t *testing.T) {\n\tf := MultiRegexpFilter{}\n\n\t// Test maximum operations limit\n\tfor i := 0; i < 51; i++ {\n\t\terr := f.AddOperation(fmt.Sprintf(\"pattern%d\", i), \"replacement\")\n\t\tif i < 50 {\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"unexpected error adding operation %d: %v\", i, err)\n\t\t\t}\n\t\t} else {\n\t\t\tif err == nil {\n\t\t\t\tt.Fatalf(\"expected error when adding operation %d (exceeds limit)\", i)\n\t\t\t}\n\t\t}\n\t}\n\n\t// Test empty pattern validation\n\tf2 := MultiRegexpFilter{}\n\terr := f2.AddOperation(\"\", \"replacement\")\n\tif err == nil {\n\t\tt.Fatalf(\"expected error for empty pattern\")\n\t}\n\n\t// Test pattern length limit\n\tf3 := MultiRegexpFilter{}\n\tlongPattern := strings.Repeat(\"a\", 1001)\n\terr = f3.AddOperation(longPattern, \"replacement\")\n\tif err == nil {\n\t\tt.Fatalf(\"expected error for pattern exceeding length limit\")\n\t}\n}\n\nfunc TestMultiRegexpFilterValidation(t *testing.T) {\n\t// Test validation with empty operations\n\tf := MultiRegexpFilter{}\n\terr := f.Validate()\n\tif err == nil {\n\t\tt.Fatalf(\"expected validation error for empty operations\")\n\t}\n\n\t// Test validation with valid operations\n\terr = f.AddOperation(\"valid\", \"replacement\")\n\tif err != nil 
{\n\t\tt.Fatalf(\"unexpected error adding operation: %v\", err)\n\t}\n\terr = f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\terr = f.Validate()\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected validation error: %v\", err)\n\t}\n}\n\nfunc TestMultiRegexpFilterInputSizeLimit(t *testing.T) {\n\tf := MultiRegexpFilter{\n\t\tOperations: []regexpFilterOperation{\n\t\t\t{RawRegexp: `test`, Value: \"REPLACED\"},\n\t\t},\n\t}\n\terr := f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\n\t// Test with very large input (should be truncated)\n\tlargeInput := strings.Repeat(\"test\", 300000) // Creates ~1.2MB string\n\tout := f.Filter(zapcore.Field{String: largeInput})\n\n\t// The input should be truncated to 1MB and still processed\n\tif len(out.String) > 1000000 {\n\t\tt.Fatalf(\"output string not truncated: length %d\", len(out.String))\n\t}\n\n\t// Should still contain replacements within the truncated portion\n\tif !strings.Contains(out.String, \"REPLACED\") {\n\t\tt.Fatalf(\"replacements not applied to truncated input\")\n\t}\n}\n\nfunc TestMultiRegexpFilterOverlappingPatterns(t *testing.T) {\n\tf := MultiRegexpFilter{\n\t\tOperations: []regexpFilterOperation{\n\t\t\t{RawRegexp: `secret.*password`, Value: \"SENSITIVE\"},\n\t\t\t{RawRegexp: `password`, Value: \"HIDDEN\"},\n\t\t},\n\t}\n\terr := f.Provision(caddy.Context{})\n\tif err != nil {\n\t\tt.Fatalf(\"unexpected error provisioning: %v\", err)\n\t}\n\n\t// The first pattern should match and replace the entire \"secret...password\" portion\n\t// Then the second pattern should not find \"password\" anymore since it was already replaced\n\tout := f.Filter(zapcore.Field{String: \"my-secret-data-password-end\"})\n\texpected := \"my-SENSITIVE-end\"\n\tif out.String != expected {\n\t\tt.Fatalf(\"field has not been filtered correctly: got %s, expected %s\", out.String, expected)\n\t}\n}\n"
  },
  {
    "path": "modules/logging/netwriter.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"os\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(NetWriter{})\n}\n\n// NetWriter implements a log writer that outputs to a network socket. If\n// the socket goes down, it will dump logs to stderr while it attempts to\n// reconnect.\ntype NetWriter struct {\n\t// The address of the network socket to which to connect.\n\tAddress string `json:\"address,omitempty\"`\n\n\t// The timeout to wait while connecting to the socket.\n\tDialTimeout caddy.Duration `json:\"dial_timeout,omitempty\"`\n\n\t// If enabled, allow connections errors when first opening the\n\t// writer. 
The error and subsequent log entries will be reported\n\t// to stderr instead until a connection can be re-established.\n\tSoftStart bool `json:\"soft_start,omitempty\"`\n\n\taddr caddy.NetworkAddress\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (NetWriter) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"caddy.logging.writers.net\",\n\t\tNew: func() caddy.Module { return new(NetWriter) },\n\t}\n}\n\n// Provision sets up the module.\nfunc (nw *NetWriter) Provision(ctx caddy.Context) error {\n\trepl := caddy.NewReplacer()\n\taddress, err := repl.ReplaceOrErr(nw.Address, true, true)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"invalid host in address: %v\", err)\n\t}\n\n\tnw.addr, err = caddy.ParseNetworkAddress(address)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"parsing network address '%s': %v\", address, err)\n\t}\n\n\tif nw.addr.PortRangeSize() != 1 {\n\t\treturn fmt.Errorf(\"multiple ports not supported\")\n\t}\n\n\tif nw.DialTimeout < 0 {\n\t\treturn fmt.Errorf(\"timeout cannot be less than 0\")\n\t}\n\n\treturn nil\n}\n\nfunc (nw NetWriter) String() string {\n\treturn nw.addr.String()\n}\n\n// WriterKey returns a unique key representing this nw.\nfunc (nw NetWriter) WriterKey() string {\n\treturn nw.addr.String()\n}\n\n// OpenWriter opens a new network connection.\nfunc (nw NetWriter) OpenWriter() (io.WriteCloser, error) {\n\treconn := &redialerConn{\n\t\tnw:      nw,\n\t\ttimeout: time.Duration(nw.DialTimeout),\n\t}\n\tconn, err := reconn.dial()\n\tif err != nil {\n\t\tif !nw.SoftStart {\n\t\t\treturn nil, err\n\t\t}\n\t\t// don't block config load if remote is down or some other external problem;\n\t\t// we can dump logs to stderr for now (see issue #5520)\n\t\tfmt.Fprintf(os.Stderr, \"[ERROR] net log writer failed to connect: %v (will retry connection and print errors here in the meantime)\\n\", err)\n\t}\n\treconn.connMu.Lock()\n\treconn.Conn = conn\n\treconn.connMu.Unlock()\n\treturn reconn, nil\n}\n\n// 
UnmarshalCaddyfile sets up the handler from Caddyfile tokens. Syntax:\n//\n//\tnet <address> {\n//\t    dial_timeout <duration>\n//\t    soft_start\n//\t}\nfunc (nw *NetWriter) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume writer name\n\tif !d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tnw.Address = d.Val()\n\tif d.NextArg() {\n\t\treturn d.ArgErr()\n\t}\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"dial_timeout\":\n\t\t\tif !d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\ttimeout, err := caddy.ParseDuration(d.Val())\n\t\t\tif err != nil {\n\t\t\t\treturn d.Errf(\"invalid duration: %s\", d.Val())\n\t\t\t}\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tnw.DialTimeout = caddy.Duration(timeout)\n\n\t\tcase \"soft_start\":\n\t\t\tif d.NextArg() {\n\t\t\t\treturn d.ArgErr()\n\t\t\t}\n\t\t\tnw.SoftStart = true\n\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective '%s'\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\n// redialerConn wraps an underlying Conn so that if any\n// writes fail, the connection is redialed and the write\n// is retried.\ntype redialerConn struct {\n\tnet.Conn\n\tconnMu     sync.RWMutex\n\tnw         NetWriter\n\ttimeout    time.Duration\n\tlastRedial time.Time\n}\n\n// Write wraps the underlying Conn.Write method, but if that fails,\n// it will re-dial the connection anew and try writing again.\nfunc (reconn *redialerConn) Write(b []byte) (n int, err error) {\n\treconn.connMu.RLock()\n\tconn := reconn.Conn\n\treconn.connMu.RUnlock()\n\tif conn != nil {\n\t\tif n, err = conn.Write(b); err == nil {\n\t\t\treturn n, err\n\t\t}\n\t}\n\n\t// problem with the connection - lock it and try to fix it\n\treconn.connMu.Lock()\n\tdefer reconn.connMu.Unlock()\n\n\t// if multiple concurrent writes failed on the same broken conn, then\n\t// one of them might have already re-dialed by now; try writing again\n\tif reconn.Conn != nil {\n\t\tif n, err = reconn.Conn.Write(b); err == nil 
{\n\t\t\treturn n, err\n\t\t}\n\t}\n\n\t// there's still a problem, so try to re-attempt dialing the socket\n\t// if some time has passed in which the issue could have potentially\n\t// been resolved - we don't want to block at every single log\n\t// emission (!) - see discussion in #4111\n\tif time.Since(reconn.lastRedial) > 10*time.Second {\n\t\treconn.lastRedial = time.Now()\n\t\tconn2, err2 := reconn.dial()\n\t\tif err2 != nil {\n\t\t\t// logger socket still offline; instead of discarding the log, dump it to stderr\n\t\t\tos.Stderr.Write(b)\n\t\t\treturn n, err\n\t\t}\n\t\tif n, err = conn2.Write(b); err == nil {\n\t\t\tif reconn.Conn != nil {\n\t\t\t\treconn.Conn.Close()\n\t\t\t}\n\t\t\treconn.Conn = conn2\n\t\t}\n\t} else {\n\t\t// last redial attempt was too recent; just dump to stderr for now\n\t\tos.Stderr.Write(b)\n\t}\n\n\treturn n, err\n}\n\nfunc (reconn *redialerConn) dial() (net.Conn, error) {\n\treturn net.DialTimeout(reconn.nw.addr.Network, reconn.nw.addr.JoinHostPort(0), reconn.timeout)\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner     = (*NetWriter)(nil)\n\t_ caddy.WriterOpener    = (*NetWriter)(nil)\n\t_ caddyfile.Unmarshaler = (*NetWriter)(nil)\n)\n"
  },
  {
    "path": "modules/logging/nopencoder.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage logging\n\nimport (\n\t\"time\"\n\n\t\"go.uber.org/zap/buffer\"\n\t\"go.uber.org/zap/zapcore\"\n)\n\n// nopEncoder is a zapcore.Encoder that does nothing.\ntype nopEncoder struct{}\n\n// AddArray is part of the zapcore.ObjectEncoder interface.\n// Array elements do not get filtered.\nfunc (nopEncoder) AddArray(key string, marshaler zapcore.ArrayMarshaler) error { return nil }\n\n// AddObject is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddObject(key string, marshaler zapcore.ObjectMarshaler) error { return nil }\n\n// AddBinary is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddBinary(key string, value []byte) {}\n\n// AddByteString is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddByteString(key string, value []byte) {}\n\n// AddBool is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddBool(key string, value bool) {}\n\n// AddComplex128 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddComplex128(key string, value complex128) {}\n\n// AddComplex64 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddComplex64(key string, value complex64) {}\n\n// AddDuration is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddDuration(key string, value time.Duration) {}\n\n// AddFloat64 is part of the 
zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddFloat64(key string, value float64) {}\n\n// AddFloat32 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddFloat32(key string, value float32) {}\n\n// AddInt is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddInt(key string, value int) {}\n\n// AddInt64 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddInt64(key string, value int64) {}\n\n// AddInt32 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddInt32(key string, value int32) {}\n\n// AddInt16 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddInt16(key string, value int16) {}\n\n// AddInt8 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddInt8(key string, value int8) {}\n\n// AddString is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddString(key, value string) {}\n\n// AddTime is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddTime(key string, value time.Time) {}\n\n// AddUint is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddUint(key string, value uint) {}\n\n// AddUint64 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddUint64(key string, value uint64) {}\n\n// AddUint32 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddUint32(key string, value uint32) {}\n\n// AddUint16 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddUint16(key string, value uint16) {}\n\n// AddUint8 is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddUint8(key string, value uint8) {}\n\n// AddUintptr is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddUintptr(key string, value uintptr) {}\n\n// AddReflected is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) AddReflected(key string, value any) error { return nil }\n\n// OpenNamespace is part of the zapcore.ObjectEncoder interface.\nfunc (nopEncoder) 
OpenNamespace(key string) {}\n\n// Clone is part of the zapcore.Encoder interface.\n// We don't use it as of Oct 2019 (v2 beta 7); it's not\n// clear what it would be useful for in our case.\nfunc (ne nopEncoder) Clone() zapcore.Encoder { return ne }\n\n// EncodeEntry partially implements the zapcore.Encoder interface.\nfunc (nopEncoder) EncodeEntry(ent zapcore.Entry, fields []zapcore.Field) (*buffer.Buffer, error) {\n\treturn bufferpool.Get(), nil\n}\n\n// Interface guard\nvar _ zapcore.Encoder = (*nopEncoder)(nil)\n"
  },
  {
    "path": "modules/metrics/adminmetrics.go",
    "content": "// Copyright 2020 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage metrics\n\nimport (\n\t\"errors\"\n\t\"net/http\"\n\n\t\"github.com/prometheus/client_golang/prometheus\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(AdminMetrics{})\n}\n\n// AdminMetrics is a module that serves a metrics endpoint so that any gathered\n// metrics can be exposed for scraping. This module is not configurable, and\n// is permanently mounted to the admin API endpoint at \"/metrics\".\n// See the Metrics module for a configurable endpoint that is usable if the\n// Admin API is disabled.\ntype AdminMetrics struct {\n\tregistry *prometheus.Registry\n\n\tmetricsHandler http.Handler\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (AdminMetrics) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"admin.api.metrics\",\n\t\tNew: func() caddy.Module { return new(AdminMetrics) },\n\t}\n}\n\n// Provision -\nfunc (m *AdminMetrics) Provision(ctx caddy.Context) error {\n\tm.registry = ctx.GetMetricsRegistry()\n\tif m.registry == nil {\n\t\treturn errors.New(\"no metrics registry found\")\n\t}\n\tm.metricsHandler = createMetricsHandler(nil, false, m.registry)\n\treturn nil\n}\n\n// Routes returns a route for the /metrics endpoint.\nfunc (m *AdminMetrics) Routes() []caddy.AdminRoute {\n\treturn []caddy.AdminRoute{{Pattern: \"/metrics\", Handler: 
caddy.AdminHandlerFunc(m.serveHTTP)}}\n}\n\nfunc (m *AdminMetrics) serveHTTP(w http.ResponseWriter, r *http.Request) error {\n\tm.metricsHandler.ServeHTTP(w, r)\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner = (*AdminMetrics)(nil)\n\t_ caddy.AdminRouter = (*AdminMetrics)(nil)\n)\n"
  },
  {
    "path": "modules/metrics/metrics.go",
    "content": "// Copyright 2020 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage metrics\n\nimport (\n\t\"errors\"\n\t\"net/http\"\n\n\t\"github.com/prometheus/client_golang/prometheus\"\n\t\"github.com/prometheus/client_golang/prometheus/promhttp\"\n\t\"go.uber.org/zap\"\n\n\t\"github.com/caddyserver/caddy/v2\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile\"\n\t\"github.com/caddyserver/caddy/v2/modules/caddyhttp\"\n)\n\nfunc init() {\n\tcaddy.RegisterModule(Metrics{})\n\thttpcaddyfile.RegisterHandlerDirective(\"metrics\", parseCaddyfile)\n}\n\n// Metrics is a module that serves a /metrics endpoint so that any gathered\n// metrics can be exposed for scraping. This module is configurable by end-users\n// unlike AdminMetrics.\ntype Metrics struct {\n\tmetricsHandler http.Handler\n\n\t// Disable OpenMetrics negotiation, enabled by default. 
May be necessary if\n\t// the produced metrics cannot be parsed by the service scraping metrics.\n\tDisableOpenMetrics bool `json:\"disable_openmetrics,omitempty\"`\n}\n\n// CaddyModule returns the Caddy module information.\nfunc (Metrics) CaddyModule() caddy.ModuleInfo {\n\treturn caddy.ModuleInfo{\n\t\tID:  \"http.handlers.metrics\",\n\t\tNew: func() caddy.Module { return new(Metrics) },\n\t}\n}\n\ntype zapLogger struct {\n\tzl *zap.Logger\n}\n\nfunc (l *zapLogger) Println(v ...any) {\n\tl.zl.Sugar().Error(v...)\n}\n\n// Provision sets up m.\nfunc (m *Metrics) Provision(ctx caddy.Context) error {\n\tlog := ctx.Logger()\n\tregistry := ctx.GetMetricsRegistry()\n\tif registry == nil {\n\t\treturn errors.New(\"no metrics registry found\")\n\t}\n\tm.metricsHandler = createMetricsHandler(&zapLogger{log}, !m.DisableOpenMetrics, registry)\n\treturn nil\n}\n\nfunc parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {\n\tvar m Metrics\n\terr := m.UnmarshalCaddyfile(h.Dispenser)\n\treturn m, err\n}\n\n// UnmarshalCaddyfile sets up the handler from Caddyfile tokens. 
Syntax:\n//\n//\tmetrics [<matcher>] {\n//\t    disable_openmetrics\n//\t}\nfunc (m *Metrics) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {\n\td.Next() // consume directive name\n\targs := d.RemainingArgs()\n\tif len(args) > 0 {\n\t\treturn d.ArgErr()\n\t}\n\n\tfor d.NextBlock(0) {\n\t\tswitch d.Val() {\n\t\tcase \"disable_openmetrics\":\n\t\t\tm.DisableOpenMetrics = true\n\t\tdefault:\n\t\t\treturn d.Errf(\"unrecognized subdirective %q\", d.Val())\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (m Metrics) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {\n\tm.metricsHandler.ServeHTTP(w, r)\n\treturn nil\n}\n\n// Interface guards\nvar (\n\t_ caddy.Provisioner           = (*Metrics)(nil)\n\t_ caddyhttp.MiddlewareHandler = (*Metrics)(nil)\n\t_ caddyfile.Unmarshaler       = (*Metrics)(nil)\n)\n\nfunc createMetricsHandler(logger promhttp.Logger, enableOpenMetrics bool, registry *prometheus.Registry) http.Handler {\n\treturn promhttp.InstrumentMetricHandler(registry,\n\t\tpromhttp.HandlerFor(registry, promhttp.HandlerOpts{\n\t\t\t// will only log errors if logger is non-nil\n\t\t\tErrorLog: logger,\n\n\t\t\t// Allow OpenMetrics format to be negotiated - largely compatible,\n\t\t\t// except quantile/le label values always have a decimal.\n\t\t\tEnableOpenMetrics: enableOpenMetrics,\n\t\t}),\n\t)\n}\n"
  },
  {
    "path": "modules/metrics/metrics_test.go",
    "content": "package metrics\n\nimport (\n\t\"testing\"\n\n\t\"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n)\n\nfunc TestMetricsUnmarshalCaddyfile(t *testing.T) {\n\tm := &Metrics{}\n\td := caddyfile.NewTestDispenser(`metrics bogus`)\n\terr := m.UnmarshalCaddyfile(d)\n\tif err == nil {\n\t\tt.Errorf(\"expected error\")\n\t}\n\n\tm = &Metrics{}\n\td = caddyfile.NewTestDispenser(`metrics`)\n\terr = m.UnmarshalCaddyfile(d)\n\tif err != nil {\n\t\tt.Errorf(\"unexpected error: %v\", err)\n\t}\n\n\tif m.DisableOpenMetrics {\n\t\tt.Errorf(\"DisableOpenMetrics should've been false: %v\", m.DisableOpenMetrics)\n\t}\n\n\tm = &Metrics{}\n\td = caddyfile.NewTestDispenser(`metrics { disable_openmetrics }`)\n\terr = m.UnmarshalCaddyfile(d)\n\tif err != nil {\n\t\tt.Errorf(\"unexpected error: %v\", err)\n\t}\n\n\tif !m.DisableOpenMetrics {\n\t\tt.Errorf(\"DisableOpenMetrics should've been true: %v\", m.DisableOpenMetrics)\n\t}\n\n\tm = &Metrics{}\n\td = caddyfile.NewTestDispenser(`metrics { bogus }`)\n\terr = m.UnmarshalCaddyfile(d)\n\tif err == nil {\n\t\tt.Errorf(\"expected error: %v\", err)\n\t}\n}\n"
  },
  {
    "path": "modules/standard/imports.go",
    "content": "package standard\n\nimport (\n\t// standard Caddy modules\n\t_ \"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyevents\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyevents/eventsconfig\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyfs\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddyhttp/standard\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddypki\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddypki/acmeserver\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddytls\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddytls/distributedstek\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/caddytls/standardstek\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/filestorage\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/logging\"\n\t_ \"github.com/caddyserver/caddy/v2/modules/metrics\"\n)\n"
  },
  {
    "path": "modules.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"reflect\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n)\n\n// Module is a type that is used as a Caddy module. In\n// addition to this interface, most modules will implement\n// some interface expected by their host module in order\n// to be useful. To learn which interface(s) to implement,\n// see the documentation for the host module. At a bare\n// minimum, this interface, when implemented, only provides\n// the module's ID and constructor function.\n//\n// Modules will often implement additional interfaces\n// including Provisioner, Validator, and CleanerUpper.\n// If a module implements these interfaces, their\n// methods are called during the module's lifespan.\n//\n// When a module is loaded by a host module, the following\n// happens: 1) ModuleInfo.New() is called to get a new\n// instance of the module. 2) The module's configuration is\n// unmarshaled into that instance. 3) If the module is a\n// Provisioner, the Provision() method is called. 4) If the\n// module is a Validator, the Validate() method is called.\n// 5) The module will probably be type-asserted from\n// 'any' to some other, more useful interface expected\n// by the host module. 
For example, HTTP handler modules are\n// type-asserted as caddyhttp.MiddlewareHandler values.\n// 6) When a module's containing Context is canceled, if it is\n// a CleanerUpper, its Cleanup() method is called.\ntype Module interface {\n\t// This method indicates that the type is a Caddy\n\t// module. The returned ModuleInfo must have both\n\t// a name and a constructor function. This method\n\t// must not have any side-effects.\n\tCaddyModule() ModuleInfo\n}\n\n// ModuleInfo represents a registered Caddy module.\ntype ModuleInfo struct {\n\t// ID is the \"full name\" of the module. It\n\t// must be unique and properly namespaced.\n\tID ModuleID\n\n\t// New returns a pointer to a new, empty\n\t// instance of the module's type. This\n\t// method must not have any side-effects,\n\t// and no other initialization should\n\t// occur within it. Any initialization\n\t// of the returned value should be done\n\t// in a Provision() method (see the\n\t// Provisioner interface).\n\tNew func() Module\n}\n\n// ModuleID is a string that uniquely identifies a Caddy module. A\n// module ID is lightly structured. It consists of dot-separated\n// labels which form a simple hierarchy from left to right. The last\n// label is the module name, and the labels before that constitute\n// the namespace (or scope).\n//\n// Thus, a module ID has the form: <namespace>.<name>\n//\n// An ID with no dot has the empty namespace, which is appropriate\n// for app modules (these are \"top-level\" modules that Caddy core\n// loads and runs).\n//\n// Module IDs should be lowercase and use underscores (_) instead of\n// spaces.\n//\n// Examples of valid IDs:\n// - http\n// - http.handlers.file_server\n// - caddy.logging.encoders.json\ntype ModuleID string\n\n// Namespace returns the namespace (or scope) portion of a module ID,\n// which is all but the last label of the ID. 
If the ID has only one\n// label, then the namespace is empty.\nfunc (id ModuleID) Namespace() string {\n\tlastDot := strings.LastIndex(string(id), \".\")\n\tif lastDot < 0 {\n\t\treturn \"\"\n\t}\n\treturn string(id)[:lastDot]\n}\n\n// Name returns the Name (last element) of a module ID.\nfunc (id ModuleID) Name() string {\n\tif id == \"\" {\n\t\treturn \"\"\n\t}\n\tparts := strings.Split(string(id), \".\")\n\treturn parts[len(parts)-1]\n}\n\nfunc (mi ModuleInfo) String() string { return string(mi.ID) }\n\n// ModuleMap is a map that can contain multiple modules,\n// where the map key is the module's name. (The namespace\n// is usually read from an associated field's struct tag.)\n// Because the module's name is given as the key in a\n// module map, the name does not have to be given in the\n// json.RawMessage.\ntype ModuleMap map[string]json.RawMessage\n\n// RegisterModule registers a module by receiving a\n// plain/empty value of the module. For registration to\n// be properly recorded, this should be called in the\n// init phase of runtime. 
Typically, the module package\n// will do this as a side-effect of being imported.\n// This function panics if the module's info is\n// incomplete or invalid, or if the module is already\n// registered.\nfunc RegisterModule(instance Module) {\n\tmod := instance.CaddyModule()\n\n\tif mod.ID == \"\" {\n\t\tpanic(\"module ID missing\")\n\t}\n\tif mod.ID == \"caddy\" || mod.ID == \"admin\" {\n\t\tpanic(fmt.Sprintf(\"module ID '%s' is reserved\", mod.ID))\n\t}\n\tif mod.New == nil {\n\t\tpanic(\"missing ModuleInfo.New\")\n\t}\n\tif val := mod.New(); val == nil {\n\t\tpanic(\"ModuleInfo.New must return a non-nil module instance\")\n\t}\n\n\tmodulesMu.Lock()\n\tdefer modulesMu.Unlock()\n\n\tif _, ok := modules[string(mod.ID)]; ok {\n\t\tpanic(fmt.Sprintf(\"module already registered: %s\", mod.ID))\n\t}\n\tmodules[string(mod.ID)] = mod\n}\n\n// GetModule returns module information from its ID (full name).\nfunc GetModule(name string) (ModuleInfo, error) {\n\tmodulesMu.RLock()\n\tdefer modulesMu.RUnlock()\n\tm, ok := modules[name]\n\tif !ok {\n\t\treturn ModuleInfo{}, fmt.Errorf(\"module not registered: %s\", name)\n\t}\n\treturn m, nil\n}\n\n// GetModuleName returns a module's name (the last label of its ID)\n// from an instance of its value. If the value is not a module, an\n// empty string will be returned.\nfunc GetModuleName(instance any) string {\n\tvar name string\n\tif mod, ok := instance.(Module); ok {\n\t\tname = mod.CaddyModule().ID.Name()\n\t}\n\treturn name\n}\n\n// GetModuleID returns a module's ID from an instance of its value.\n// If the value is not a module, an empty string will be returned.\nfunc GetModuleID(instance any) string {\n\tvar id string\n\tif mod, ok := instance.(Module); ok {\n\t\tid = string(mod.CaddyModule().ID)\n\t}\n\treturn id\n}\n\n// GetModules returns all modules in the given scope/namespace.\n// For example, a scope of \"foo\" returns modules named \"foo.bar\",\n// \"foo.loo\", but not \"bar\", \"foo.bar.loo\", etc. 
An empty scope\n// returns top-level modules, for example \"foo\" or \"bar\". Partial\n// scopes are not matched (i.e. scope \"foo.ba\" does not match\n// name \"foo.bar\").\n//\n// Because modules are registered to a map under the hood, the\n// returned slice will be sorted to keep it deterministic.\nfunc GetModules(scope string) []ModuleInfo {\n\tmodulesMu.RLock()\n\tdefer modulesMu.RUnlock()\n\n\tscopeParts := strings.Split(scope, \".\")\n\n\t// handle the special case of an empty scope, which\n\t// should match only the top-level modules\n\tif scope == \"\" {\n\t\tscopeParts = []string{}\n\t}\n\n\tvar mods []ModuleInfo\niterateModules:\n\tfor id, m := range modules {\n\t\tmodParts := strings.Split(id, \".\")\n\n\t\t// match only the next level of nesting\n\t\tif len(modParts) != len(scopeParts)+1 {\n\t\t\tcontinue\n\t\t}\n\n\t\t// specified parts must be exact matches\n\t\tfor i := range scopeParts {\n\t\t\tif modParts[i] != scopeParts[i] {\n\t\t\t\tcontinue iterateModules\n\t\t\t}\n\t\t}\n\n\t\tmods = append(mods, m)\n\t}\n\n\t// make return value deterministic\n\tsort.Slice(mods, func(i, j int) bool {\n\t\treturn mods[i].ID < mods[j].ID\n\t})\n\n\treturn mods\n}\n\n// Modules returns the names of all registered modules\n// in ascending lexicographical order.\nfunc Modules() []string {\n\tmodulesMu.RLock()\n\tdefer modulesMu.RUnlock()\n\n\tnames := make([]string, 0, len(modules))\n\tfor name := range modules {\n\t\tnames = append(names, name)\n\t}\n\n\tsort.Strings(names)\n\n\treturn names\n}\n\n// getModuleNameInline loads the string value from raw of moduleNameKey,\n// where raw must be a JSON encoding of a map. 
It returns that value,\n// along with the result of removing that key from raw.\nfunc getModuleNameInline(moduleNameKey string, raw json.RawMessage) (string, json.RawMessage, error) {\n\tvar tmp map[string]any\n\terr := json.Unmarshal(raw, &tmp)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\n\tmoduleName, ok := tmp[moduleNameKey].(string)\n\tif !ok || moduleName == \"\" {\n\t\treturn \"\", nil, fmt.Errorf(\"module name not specified with key '%s' in %+v\", moduleNameKey, tmp)\n\t}\n\n\t// remove key from the object, otherwise decoding it later\n\t// will yield an error because the struct won't recognize it\n\t// (this is only needed because we strictly enforce that\n\t// all keys are recognized when loading modules)\n\tdelete(tmp, moduleNameKey)\n\tresult, err := json.Marshal(tmp)\n\tif err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"re-encoding module configuration: %v\", err)\n\t}\n\n\treturn moduleName, result, nil\n}\n\n// Provisioner is implemented by modules which may need to perform\n// some additional \"setup\" steps immediately after being loaded.\n// Provisioning should be fast (imperceptible running time). If\n// any side-effects result in the execution of this function (e.g.\n// creating global state, any other allocations which require\n// garbage collection, opening files, starting goroutines etc.),\n// be sure to clean up properly by implementing the CleanerUpper\n// interface to avoid leaking resources.\ntype Provisioner interface {\n\tProvision(Context) error\n}\n\n// Validator is implemented by modules which can verify that their\n// configurations are valid. This method will be called after\n// Provision() (if implemented). 
Validation should always be fast\n// (imperceptible running time) and an error must be returned if\n// the module's configuration is invalid.\ntype Validator interface {\n\tValidate() error\n}\n\n// CleanerUpper is implemented by modules which may have side-effects\n// such as opened files, spawned goroutines, or allocated some sort\n// of non-stack state when they were provisioned. This method should\n// deallocate/cleanup those resources to prevent memory leaks. Cleanup\n// should be fast and efficient. Cleanup should work even if Provision\n// returns an error, to allow cleaning up from partial provisionings.\ntype CleanerUpper interface {\n\tCleanup() error\n}\n\n// ParseStructTag parses a caddy struct tag into its keys and values.\n// It is very simple. The expected syntax is:\n// `caddy:\"key1=val1 key2=val2 ...\"`\nfunc ParseStructTag(tag string) (map[string]string, error) {\n\tresults := make(map[string]string)\n\tpairs := strings.Split(tag, \" \")\n\tfor i, pair := range pairs {\n\t\tif pair == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tbefore, after, isCut := strings.Cut(pair, \"=\")\n\t\tif !isCut {\n\t\t\treturn nil, fmt.Errorf(\"missing key in '%s' (pair %d)\", pair, i)\n\t\t}\n\t\tresults[before] = after\n\t}\n\treturn results, nil\n}\n\n// StrictUnmarshalJSON is like json.Unmarshal but returns an error\n// if any of the fields are unrecognized. 
Useful when decoding\n// module configurations, where you want to be more sure they're\n// correct.\nfunc StrictUnmarshalJSON(data []byte, v any) error {\n\tdec := json.NewDecoder(bytes.NewReader(data))\n\tdec.DisallowUnknownFields()\n\terr := dec.Decode(v)\n\tif jsonErr, ok := err.(*json.SyntaxError); ok {\n\t\treturn fmt.Errorf(\"%w, at offset %d\", jsonErr, jsonErr.Offset)\n\t}\n\treturn err\n}\n\nvar JSONRawMessageType = reflect.TypeFor[json.RawMessage]()\n\n// isJSONRawMessage returns true if the type is encoding/json.RawMessage.\nfunc isJSONRawMessage(typ reflect.Type) bool {\n\treturn typ == JSONRawMessageType\n}\n\n// isModuleMapType returns true if the type is map[string]json.RawMessage.\n// It assumes that the string key is the module name, but this is not\n// always the case. To know for sure, this function must return true, but\n// also the struct tag where this type appears must NOT define an inline_key\n// attribute, which would mean that the module names appear inline with the\n// values, not in the key.\nfunc isModuleMapType(typ reflect.Type) bool {\n\treturn typ.Kind() == reflect.Map &&\n\t\ttyp.Key().Kind() == reflect.String &&\n\t\tisJSONRawMessage(typ.Elem())\n}\n\n// ProxyFuncProducer is implemented by modules which produce a\n// function that returns a URL to use as network proxy. Modules\n// in the namespace `caddy.network_proxy` must implement this\n// interface.\ntype ProxyFuncProducer interface {\n\tProxyFunc() func(*http.Request) (*url.URL, error)\n}\n\nvar (\n\tmodules   = make(map[string]ModuleInfo)\n\tmodulesMu sync.RWMutex\n)\n"
  },
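StrictUnmarshalJSON above rejects unrecognized fields by pairing a json.Decoder with DisallowUnknownFields. A minimal standalone sketch of the same technique; strictUnmarshalJSON and the config type here are local stand-ins for illustration, not the caddy API:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// strictUnmarshalJSON mirrors the pattern in caddy.StrictUnmarshalJSON:
// decoding fails if the input contains fields the target does not declare.
func strictUnmarshalJSON(data []byte, v any) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	return dec.Decode(v)
}

// config is a hypothetical module configuration with one known field.
type config struct {
	Listen string `json:"listen"`
}

func main() {
	var cfg config
	// known field: decodes cleanly
	fmt.Println(strictUnmarshalJSON([]byte(`{"listen":":8080"}`), &cfg))
	// misspelled field: rejected instead of silently ignored
	fmt.Println(strictUnmarshalJSON([]byte(`{"lsiten":":8080"}`), &cfg) != nil)
}
```

Plain json.Unmarshal would silently drop the misspelled "lsiten" key; the strict decoder surfaces it as an error, which is what makes typos in module configs detectable.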
  {
    "path": "modules_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n)\n\nfunc TestGetModules(t *testing.T) {\n\tmodulesMu.Lock()\n\tmodules = map[string]ModuleInfo{\n\t\t\"a\":      {ID: \"a\"},\n\t\t\"a.b\":    {ID: \"a.b\"},\n\t\t\"a.b.c\":  {ID: \"a.b.c\"},\n\t\t\"a.b.cd\": {ID: \"a.b.cd\"},\n\t\t\"a.c\":    {ID: \"a.c\"},\n\t\t\"a.d\":    {ID: \"a.d\"},\n\t\t\"b\":      {ID: \"b\"},\n\t\t\"b.a\":    {ID: \"b.a\"},\n\t\t\"b.b\":    {ID: \"b.b\"},\n\t\t\"b.a.c\":  {ID: \"b.a.c\"},\n\t\t\"c\":      {ID: \"c\"},\n\t}\n\tmodulesMu.Unlock()\n\n\tfor i, tc := range []struct {\n\t\tinput  string\n\t\texpect []ModuleInfo\n\t}{\n\t\t{\n\t\t\tinput: \"\",\n\t\t\texpect: []ModuleInfo{\n\t\t\t\t{ID: \"a\"},\n\t\t\t\t{ID: \"b\"},\n\t\t\t\t{ID: \"c\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"a\",\n\t\t\texpect: []ModuleInfo{\n\t\t\t\t{ID: \"a.b\"},\n\t\t\t\t{ID: \"a.c\"},\n\t\t\t\t{ID: \"a.d\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"a.b\",\n\t\t\texpect: []ModuleInfo{\n\t\t\t\t{ID: \"a.b.c\"},\n\t\t\t\t{ID: \"a.b.cd\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"a.b.c\",\n\t\t},\n\t\t{\n\t\t\tinput: \"b\",\n\t\t\texpect: []ModuleInfo{\n\t\t\t\t{ID: \"b.a\"},\n\t\t\t\t{ID: \"b.b\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tinput: \"asdf\",\n\t\t},\n\t} {\n\t\tactual := GetModules(tc.input)\n\t\tif !reflect.DeepEqual(actual, tc.expect) {\n\t\t\tt.Errorf(\"Test 
%d: Expected %v but got %v\", i, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestModuleID(t *testing.T) {\n\tfor i, tc := range []struct {\n\t\tinput           ModuleID\n\t\texpectNamespace string\n\t\texpectName      string\n\t}{\n\t\t{\n\t\t\tinput:           \"foo\",\n\t\t\texpectNamespace: \"\",\n\t\t\texpectName:      \"foo\",\n\t\t},\n\t\t{\n\t\t\tinput:           \"foo.bar\",\n\t\t\texpectNamespace: \"foo\",\n\t\t\texpectName:      \"bar\",\n\t\t},\n\t\t{\n\t\t\tinput:           \"a.b.c\",\n\t\t\texpectNamespace: \"a.b\",\n\t\t\texpectName:      \"c\",\n\t\t},\n\t} {\n\t\tactualNamespace := tc.input.Namespace()\n\t\tif actualNamespace != tc.expectNamespace {\n\t\t\tt.Errorf(\"Test %d: Expected namespace '%s' but got '%s'\", i, tc.expectNamespace, actualNamespace)\n\t\t}\n\t\tactualName := tc.input.Name()\n\t\tif actualName != tc.expectName {\n\t\t\tt.Errorf(\"Test %d: Expected name '%s' but got '%s'\", i, tc.expectName, actualName)\n\t\t}\n\t}\n}\n"
  },
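TestModuleID above pins down how a module ID splits: everything before the last dot is the namespace, the rest is the name, and an ID with no dot has an empty namespace. A standalone sketch of that rule; namespaceAndName is a local helper mirroring the behavior of ModuleID.Namespace and ModuleID.Name, not the caddy API itself:

```go
package main

import (
	"fmt"
	"strings"
)

// namespaceAndName splits a module ID at its last dot, matching the
// cases checked by TestModuleID: "a.b.c" -> ("a.b", "c"),
// "foo" -> ("", "foo").
func namespaceAndName(id string) (namespace, name string) {
	lastDot := strings.LastIndex(id, ".")
	if lastDot < 0 {
		return "", id // no dot: the whole ID is the name
	}
	return id[:lastDot], id[lastDot+1:]
}

func main() {
	ns, name := namespaceAndName("http.handlers.reverse_proxy")
	fmt.Println(ns, name) // http.handlers reverse_proxy
}
```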
  {
    "path": "notify/notify_linux.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package notify provides facilities for notifying process managers\n// of state changes, mainly for when running as a system service.\npackage notify\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\t\"strings\"\n)\n\n// The documentation about this IPC protocol is available here:\n// https://www.freedesktop.org/software/systemd/man/sd_notify.html\n\nfunc sdNotify(payload string) error {\n\tif socketPath == \"\" {\n\t\treturn nil\n\t}\n\n\tsocketAddr := &net.UnixAddr{\n\t\tName: socketPath,\n\t\tNet:  \"unixgram\",\n\t}\n\n\tconn, err := net.DialUnix(socketAddr.Net, nil, socketAddr)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer conn.Close()\n\n\t_, err = conn.Write([]byte(payload))\n\treturn err\n}\n\n// Ready notifies systemd that caddy has finished its\n// initialization routines.\nfunc Ready() error {\n\treturn sdNotify(\"READY=1\")\n}\n\n// Reloading notifies systemd that caddy is reloading its config.\nfunc Reloading() error {\n\treturn sdNotify(\"RELOADING=1\")\n}\n\n// Stopping notifies systemd that caddy is stopping.\nfunc Stopping() error {\n\treturn sdNotify(\"STOPPING=1\")\n}\n\n// Status sends systemd an updated status message.\nfunc Status(msg string) error {\n\treturn sdNotify(\"STATUS=\" + msg)\n}\n\n// Error is like Status, but sends systemd an error message\n// instead, with an optional errno-style error 
number.\nfunc Error(err error, errno int) error {\n\tcollapsedErr := strings.ReplaceAll(err.Error(), \"\\n\", \" \")\n\tmsg := fmt.Sprintf(\"STATUS=%s\", collapsedErr)\n\tif errno > 0 {\n\t\tmsg += fmt.Sprintf(\"\\nERRNO=%d\", errno)\n\t}\n\treturn sdNotify(msg)\n}\n\nvar socketPath, _ = os.LookupEnv(\"NOTIFY_SOCKET\")\n"
  },
  {
    "path": "notify/notify_other.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build !linux && !windows\n\npackage notify\n\nfunc Ready() error               { return nil }\nfunc Reloading() error           { return nil }\nfunc Stopping() error            { return nil }\nfunc Status(_ string) error      { return nil }\nfunc Error(_ error, _ int) error { return nil }\n"
  },
  {
    "path": "notify/notify_windows.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage notify\n\nimport (\n\t\"log\"\n\t\"strings\"\n\n\t\"golang.org/x/sys/windows/svc\"\n)\n\n// globalStatus stores the Windows service status channel, which can be\n// used to notify the SCM of caddy's status.\nvar globalStatus chan<- svc.Status\n\n// SetGlobalStatus assigns the channel through which status updates\n// will be sent to the SCM. This is typically provided by the service\n// handler when the service starts.\nfunc SetGlobalStatus(status chan<- svc.Status) {\n\tglobalStatus = status\n}\n\n// Ready notifies the SCM that the service is fully running and ready\n// to accept stop or shutdown control requests.\nfunc Ready() error {\n\tif globalStatus != nil {\n\t\tglobalStatus <- svc.Status{\n\t\t\tState:   svc.Running,\n\t\t\tAccepts: svc.AcceptStop | svc.AcceptShutdown,\n\t\t}\n\t}\n\treturn nil\n}\n\n// Reloading notifies the SCM that the service is entering a transitional\n// state.\nfunc Reloading() error {\n\tif globalStatus != nil {\n\t\tglobalStatus <- svc.Status{State: svc.StartPending}\n\t}\n\treturn nil\n}\n\n// Stopping notifies the SCM that the service is in the process of stopping.\n// This allows Windows to track the shutdown transition properly.\nfunc Stopping() error {\n\tif globalStatus != nil {\n\t\tglobalStatus <- svc.Status{State: svc.StopPending}\n\t}\n\treturn nil\n}\n\n// Status sends an arbitrary service state to the SCM based on a string\n// identifier of [svc.State]. Unknown states are logged and ignored.\nfunc Status(name string) error {\n\tif globalStatus == nil {\n\t\treturn nil\n\t}\n\n\tvar state svc.State\n\tvar accepts svc.Accepted\n\n\tswitch strings.ToLower(name) {\n\tcase \"stopped\":\n\t\tstate = svc.Stopped\n\tcase \"start_pending\":\n\t\tstate = svc.StartPending\n\tcase \"stop_pending\":\n\t\tstate = svc.StopPending\n\tcase \"running\":\n\t\tstate = svc.Running\n\t\taccepts = svc.AcceptStop | svc.AcceptShutdown\n\tcase \"continue_pending\":\n\t\tstate = svc.ContinuePending\n\tcase \"pause_pending\":\n\t\tstate = svc.PausePending\n\tcase \"paused\":\n\t\tstate = svc.Paused\n\t\taccepts = svc.AcceptStop | svc.AcceptShutdown | svc.AcceptPauseAndContinue\n\tdefault:\n\t\tlog.Printf(\"unknown state: %s\", name)\n\t\treturn nil\n\t}\n\n\tglobalStatus <- svc.Status{State: state, Accepts: accepts}\n\treturn nil\n}\n\n// Error notifies the SCM that the service is stopping due to a failure,\n// including a service-specific exit code.\nfunc Error(err error, code int) error {\n\tif globalStatus != nil {\n\t\tglobalStatus <- svc.Status{\n\t\t\tState:                   svc.StopPending,\n\t\t\tServiceSpecificExitCode: uint32(code),\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
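The Status function above maps lowercase state names to svc.State values with a switch. The same lookup can be table-driven; a portable sketch where the local svcState constants stand in for golang.org/x/sys/windows/svc (which only builds on Windows), with values following the documented Windows service state numbering:

```go
package main

import (
	"fmt"
	"strings"
)

// svcState stands in for svc.State so this sketch runs anywhere.
type svcState int

const (
	stopped svcState = iota + 1 // SERVICE_STOPPED = 1
	startPending
	stopPending
	running
	continuePending
	pausePending
	paused // SERVICE_PAUSED = 7
)

// stateByName is a table-driven alternative to the switch in Status;
// lookups on unknown names simply report !ok.
var stateByName = map[string]svcState{
	"stopped":          stopped,
	"start_pending":    startPending,
	"stop_pending":     stopPending,
	"running":          running,
	"continue_pending": continuePending,
	"pause_pending":    pausePending,
	"paused":           paused,
}

func main() {
	state, ok := stateByName[strings.ToLower("RUNNING")]
	fmt.Println(state, ok)
}
```

A map keeps the name-to-state association in one place, though the switch in the real code also sets the Accepts mask for the running and paused states, which a plain map cannot express on its own.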
  {
    "path": "replacer.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n)\n\n// NewReplacer returns a new Replacer.\nfunc NewReplacer() *Replacer {\n\trep := &Replacer{\n\t\tstatic:   make(map[string]any),\n\t\tmapMutex: &sync.RWMutex{},\n\t}\n\trep.providers = []replacementProvider{\n\t\tglobalDefaultReplacementProvider{},\n\t\tfileReplacementProvider{},\n\t\tReplacerFunc(rep.fromStatic),\n\t}\n\treturn rep\n}\n\n// NewEmptyReplacer returns a new Replacer,\n// without the global default replacements.\nfunc NewEmptyReplacer() *Replacer {\n\trep := &Replacer{\n\t\tstatic:   make(map[string]any),\n\t\tmapMutex: &sync.RWMutex{},\n\t}\n\trep.providers = []replacementProvider{\n\t\tReplacerFunc(rep.fromStatic),\n\t}\n\treturn rep\n}\n\n// Replacer can replace values in strings.\n// A default/empty Replacer is not valid;\n// use NewReplacer to make one.\ntype Replacer struct {\n\tproviders []replacementProvider\n\tstatic    map[string]any\n\tmapMutex  *sync.RWMutex\n}\n\n// WithoutFile returns a copy of the current Replacer\n// without support for the {file.*} placeholder, which\n// may be unsafe in some contexts.\n//\n// EXPERIMENTAL: Subject to change or removal.\nfunc (r *Replacer) WithoutFile() *Replacer 
{\n\trep := &Replacer{static: r.static}\n\tfor _, v := range r.providers {\n\t\tif _, ok := v.(fileReplacementProvider); ok {\n\t\t\tcontinue\n\t\t}\n\t\trep.providers = append(rep.providers, v)\n\t}\n\treturn rep\n}\n\n// Map adds mapFunc to the list of value providers.\n// mapFunc will be executed only at replace-time.\nfunc (r *Replacer) Map(mapFunc ReplacerFunc) {\n\tr.providers = append(r.providers, mapFunc)\n}\n\n// Set sets a custom variable to a static value.\nfunc (r *Replacer) Set(variable string, value any) {\n\tr.mapMutex.Lock()\n\tr.static[variable] = value\n\tr.mapMutex.Unlock()\n}\n\n// Get gets a value from the replacer. It returns\n// the value and whether the variable was known.\nfunc (r *Replacer) Get(variable string) (any, bool) {\n\tfor _, mapFunc := range r.providers {\n\t\tif val, ok := mapFunc.replace(variable); ok {\n\t\t\treturn val, true\n\t\t}\n\t}\n\treturn nil, false\n}\n\n// GetString is the same as Get, but coerces the value to a\n// string representation as efficiently as possible.\nfunc (r *Replacer) GetString(variable string) (string, bool) {\n\ts, found := r.Get(variable)\n\treturn ToString(s), found\n}\n\n// Delete removes a variable with a static value\n// that was created using Set.\nfunc (r *Replacer) Delete(variable string) {\n\tr.mapMutex.Lock()\n\tdelete(r.static, variable)\n\tr.mapMutex.Unlock()\n}\n\n// fromStatic provides values from r.static.\nfunc (r *Replacer) fromStatic(key string) (any, bool) {\n\tr.mapMutex.RLock()\n\tdefer r.mapMutex.RUnlock()\n\tval, ok := r.static[key]\n\treturn val, ok\n}\n\n// ReplaceOrErr is like ReplaceAll, but any placeholders\n// that are empty or not recognized will cause an error to\n// be returned.\nfunc (r *Replacer) ReplaceOrErr(input string, errOnEmpty, errOnUnknown bool) (string, error) {\n\treturn r.replace(input, \"\", false, errOnEmpty, errOnUnknown, nil)\n}\n\n// ReplaceKnown is like ReplaceAll but only replaces\n// placeholders that are known (recognized). 
Unrecognized\n// placeholders will remain in the output.\nfunc (r *Replacer) ReplaceKnown(input, empty string) string {\n\tout, _ := r.replace(input, empty, false, false, false, nil)\n\treturn out\n}\n\n// ReplaceAll efficiently replaces placeholders in input with\n// their values. All placeholders are replaced in the output\n// whether they are recognized or not. Values that are empty\n// string will be substituted with empty.\nfunc (r *Replacer) ReplaceAll(input, empty string) string {\n\tout, _ := r.replace(input, empty, true, false, false, nil)\n\treturn out\n}\n\n// ReplaceFunc is the same as ReplaceAll, but calls f for every\n// replacement to be made, in case f wants to change or inspect\n// the replacement.\nfunc (r *Replacer) ReplaceFunc(input string, f ReplacementFunc) (string, error) {\n\treturn r.replace(input, \"\", true, false, false, f)\n}\n\nfunc (r *Replacer) replace(input, empty string,\n\ttreatUnknownAsEmpty, errOnEmpty, errOnUnknown bool,\n\tf ReplacementFunc,\n) (string, error) {\n\tif !strings.Contains(input, string(phOpen)) && !strings.Contains(input, string(phClose)) {\n\t\treturn input, nil\n\t}\n\n\tvar sb strings.Builder\n\n\t// it is reasonable to assume that the output\n\t// will be approximately as long as the input\n\tsb.Grow(len(input))\n\n\t// iterate the input to find each placeholder\n\tvar lastWriteCursor int\n\n\t// fail fast if too many placeholders are unclosed\n\tvar unclosedCount int\n\nscan:\n\tfor i := 0; i < len(input); i++ {\n\t\t// check for escaped braces\n\t\tif i > 0 && input[i-1] == phEscape && (input[i] == phClose || input[i] == phOpen) {\n\t\t\tsb.WriteString(input[lastWriteCursor : i-1])\n\t\t\tlastWriteCursor = i\n\t\t\tcontinue\n\t\t}\n\n\t\tif input[i] != phOpen {\n\t\t\tcontinue\n\t\t}\n\n\t\t// our iterator is now on an unescaped open brace (start of placeholder)\n\n\t\t// too many unclosed placeholders in absolutely ridiculous input can be extremely slow (issue #4170)\n\t\tif unclosedCount > 100 
{\n\t\t\treturn \"\", fmt.Errorf(\"too many unclosed placeholders\")\n\t\t}\n\n\t\t// find the end of the placeholder\n\t\tend := strings.Index(input[i:], string(phClose)) + i\n\t\tif end < i {\n\t\t\tunclosedCount++\n\t\t\tcontinue\n\t\t}\n\n\t\t// if necessary look for the first closing brace that is not escaped\n\t\tfor end > 0 && end < len(input)-1 && input[end-1] == phEscape {\n\t\t\tnextEnd := strings.Index(input[end+1:], string(phClose))\n\t\t\tif nextEnd < 0 {\n\t\t\t\tunclosedCount++\n\t\t\t\tcontinue scan\n\t\t\t}\n\t\t\tend += nextEnd + 1\n\t\t}\n\n\t\t// write the substring from the last cursor to this point\n\t\tsb.WriteString(input[lastWriteCursor:i])\n\n\t\t// trim opening bracket\n\t\tkey := input[i+1 : end]\n\n\t\t// try to get a value for this key, handle empty values accordingly\n\t\tval, found := r.Get(key)\n\t\tif !found {\n\t\t\t// placeholder is unknown (unrecognized); handle accordingly\n\t\t\tif errOnUnknown {\n\t\t\t\treturn \"\", fmt.Errorf(\"unrecognized placeholder %s%s%s\",\n\t\t\t\t\tstring(phOpen), key, string(phClose))\n\t\t\t} else if !treatUnknownAsEmpty {\n\t\t\t\t// if treatUnknownAsEmpty is true, we'll handle an empty\n\t\t\t\t// val later; so only continue otherwise\n\t\t\t\tlastWriteCursor = i\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\t// apply any transformations\n\t\tif f != nil {\n\t\t\tvar err error\n\t\t\tval, err = f(key, val)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t}\n\n\t\t// convert val to a string as efficiently as possible\n\t\tvalStr := ToString(val)\n\n\t\t// write the value; if it's empty, either return\n\t\t// an error or write a default value\n\t\tif valStr == \"\" {\n\t\t\tif errOnEmpty {\n\t\t\t\treturn \"\", fmt.Errorf(\"evaluated placeholder %s%s%s is empty\",\n\t\t\t\t\tstring(phOpen), key, string(phClose))\n\t\t\t} else if empty != \"\" {\n\t\t\t\tsb.WriteString(empty)\n\t\t\t}\n\t\t} else {\n\t\t\tsb.WriteString(valStr)\n\t\t}\n\n\t\t// advance cursor to end of placeholder\n\t\ti = 
end\n\t\tlastWriteCursor = i + 1\n\t}\n\n\t// flush any unwritten remainder\n\tsb.WriteString(input[lastWriteCursor:])\n\n\treturn sb.String(), nil\n}\n\n// ToString returns val as a string, as efficiently as possible.\n// EXPERIMENTAL: may be changed or removed later.\nfunc ToString(val any) string {\n\tswitch v := val.(type) {\n\tcase nil:\n\t\treturn \"\"\n\tcase string:\n\t\treturn v\n\tcase fmt.Stringer:\n\t\treturn v.String()\n\tcase error:\n\t\treturn v.Error()\n\tcase byte:\n\t\treturn string(v)\n\tcase []byte:\n\t\treturn string(v)\n\tcase []rune:\n\t\treturn string(v)\n\tcase int:\n\t\treturn strconv.Itoa(v)\n\tcase int32:\n\t\treturn strconv.Itoa(int(v))\n\tcase int64:\n\t\treturn strconv.Itoa(int(v))\n\tcase uint:\n\t\treturn strconv.FormatUint(uint64(v), 10)\n\tcase uint32:\n\t\treturn strconv.FormatUint(uint64(v), 10)\n\tcase uint64:\n\t\treturn strconv.FormatUint(v, 10)\n\tcase float32:\n\t\treturn strconv.FormatFloat(float64(v), 'f', -1, 32)\n\tcase float64:\n\t\treturn strconv.FormatFloat(v, 'f', -1, 64)\n\tcase bool:\n\t\tif v {\n\t\t\treturn \"true\"\n\t\t}\n\t\treturn \"false\"\n\tdefault:\n\t\treturn fmt.Sprintf(\"%+v\", v)\n\t}\n}\n\n// ReplacerFunc is a function that returns a replacement for the\n// given key along with true if the function is able to service\n// that key (even if the value is blank). If the function does\n// not recognize the key, false should be returned.\ntype ReplacerFunc func(key string) (any, bool)\n\nfunc (f ReplacerFunc) replace(key string) (any, bool) {\n\treturn f(key)\n}\n\n// replacementProvider is a type that can provide replacements\n// for placeholders. 
Allows for type assertion to determine\n// which type of provider it is.\ntype replacementProvider interface {\n\treplace(key string) (any, bool)\n}\n\n// fileReplacementProvider handles {file.*} replacements,\n// reading a file from disk and replacing with its contents.\ntype fileReplacementProvider struct{}\n\nfunc (f fileReplacementProvider) replace(key string) (any, bool) {\n\tif !strings.HasPrefix(key, filePrefix) {\n\t\treturn nil, false\n\t}\n\n\tfilename := key[len(filePrefix):]\n\tmaxSize := 1024 * 1024\n\tbody, err := readFileIntoBuffer(filename, maxSize)\n\tif err != nil {\n\t\twd, _ := os.Getwd()\n\t\tLog().Error(\"placeholder: failed to read file\",\n\t\t\tzap.String(\"file\", filename),\n\t\t\tzap.String(\"working_dir\", wd),\n\t\t\tzap.Error(err))\n\t\treturn nil, true\n\t}\n\tbody = bytes.TrimSuffix(body, []byte(\"\\n\"))\n\tbody = bytes.TrimSuffix(body, []byte(\"\\r\"))\n\treturn string(body), true\n}\n\n// globalDefaultReplacementProvider handles replacements\n// that can be used in any context, such as system variables,\n// time, or environment variables.\ntype globalDefaultReplacementProvider struct{}\n\nfunc (f globalDefaultReplacementProvider) replace(key string) (any, bool) {\n\t// check environment variable\n\tconst envPrefix = \"env.\"\n\tif strings.HasPrefix(key, envPrefix) {\n\t\treturn os.Getenv(key[len(envPrefix):]), true\n\t}\n\n\tswitch key {\n\tcase \"system.hostname\":\n\t\t// OK if there is an error; just return empty string\n\t\tname, _ := os.Hostname()\n\t\treturn name, true\n\tcase \"system.slash\":\n\t\treturn string(filepath.Separator), true\n\tcase \"system.os\":\n\t\treturn runtime.GOOS, true\n\tcase \"system.wd\":\n\t\t// OK if there is an error; just return empty string\n\t\twd, _ := os.Getwd()\n\t\treturn wd, true\n\tcase \"system.arch\":\n\t\treturn runtime.GOARCH, true\n\tcase \"time.now\":\n\t\treturn nowFunc(), true\n\tcase \"time.now.http\":\n\t\t// According to the comment for http.TimeFormat, the timezone must be 
in UTC\n\t\t// to generate the correct format.\n\t\t// https://github.com/caddyserver/caddy/issues/5773\n\t\treturn nowFunc().UTC().Format(http.TimeFormat), true\n\tcase \"time.now.common_log\":\n\t\treturn nowFunc().Format(\"02/Jan/2006:15:04:05 -0700\"), true\n\tcase \"time.now.year\":\n\t\treturn strconv.Itoa(nowFunc().Year()), true\n\tcase \"time.now.unix\":\n\t\treturn strconv.FormatInt(nowFunc().Unix(), 10), true\n\tcase \"time.now.unix_ms\":\n\t\treturn strconv.FormatInt(nowFunc().UnixNano()/int64(time.Millisecond), 10), true\n\t}\n\n\treturn nil, false\n}\n\n// readFileIntoBuffer reads the file at filePath into a size limited buffer.\nfunc readFileIntoBuffer(filename string, size int) ([]byte, error) {\n\tfile, err := os.Open(filename)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer file.Close()\n\n\tbuffer := make([]byte, size)\n\tn, err := file.Read(buffer)\n\tif err != nil && err != io.EOF {\n\t\treturn nil, err\n\t}\n\n\t// slice the buffer to the actual size\n\treturn buffer[:n], nil\n}\n\n// ReplacementFunc is a function that is called when a\n// replacement is being performed. It receives the\n// variable (i.e. placeholder name) and the value that\n// will be the replacement, and returns the value that\n// will actually be the replacement, or an error. Note\n// that errors are sometimes ignored by replacers.\ntype ReplacementFunc func(variable string, val any) (any, error)\n\n// nowFunc is a variable so tests can change it\n// in order to obtain a deterministic time.\nvar nowFunc = time.Now\n\n// ReplacerCtxKey is the context key for a replacer.\nconst ReplacerCtxKey CtxKey = \"replacer\"\n\nconst phOpen, phClose, phEscape = '{', '}', '\\\\'\n\nconst filePrefix = \"file.\"\n"
  },
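ToString in replacer.go converts values "as efficiently as possible" by type-switching on the common cases so that only the fallback pays fmt's reflection cost. A condensed standalone sketch of the same idea; toString here is a local stand-in covering a subset of the types, not the exported caddy.ToString:

```go
package main

import (
	"fmt"
	"strconv"
)

// toString converts common types directly; only unrecognized types
// fall through to the reflective fmt.Sprintf path.
func toString(val any) string {
	switch v := val.(type) {
	case nil:
		return ""
	case string:
		return v
	case []byte:
		return string(v)
	case error:
		return v.Error()
	case int:
		return strconv.Itoa(v)
	case float64:
		return strconv.FormatFloat(v, 'f', -1, 64)
	case bool:
		return strconv.FormatBool(v)
	default:
		return fmt.Sprintf("%+v", v)
	}
}

func main() {
	fmt.Println(toString(42), toString(3.14), toString(true), toString(nil) == "")
}
```

This is why ReplaceAll can accept any value from a provider: the conversion to the output string is centralized and cheap for the types placeholders usually produce.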
  {
    "path": "replacer_fuzz.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build gofuzz\n\npackage caddy\n\nfunc FuzzReplacer(data []byte) (score int) {\n\tNewReplacer().ReplaceAll(string(data), \"\")\n\tNewReplacer().ReplaceAll(NewReplacer().ReplaceAll(string(data), \"\"), \"\")\n\tNewReplacer().ReplaceAll(NewReplacer().ReplaceAll(string(data), \"\"), NewReplacer().ReplaceAll(string(data), \"\"))\n\tNewReplacer().ReplaceAll(string(data[:len(data)/2]), string(data[len(data)/2:]))\n\treturn 0\n}\n"
  },
  {
    "path": "replacer_test.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"sync\"\n\t\"testing\"\n)\n\nfunc TestReplacer(t *testing.T) {\n\ttype testCase struct {\n\t\tinput, expect, empty string\n\t}\n\n\trep := testReplacer()\n\n\t// ReplaceAll\n\tfor i, tc := range []testCase{\n\t\t{\n\t\t\tinput:  \"{\",\n\t\t\texpect: \"{\",\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{`,\n\t\t\texpect: `{`,\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{\",\n\t\t\texpect: \"foo{\",\n\t\t},\n\t\t{\n\t\t\tinput:  `foo\\{`,\n\t\t\texpect: `foo{`,\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{bar\",\n\t\t\texpect: \"foo{bar\",\n\t\t},\n\t\t{\n\t\t\tinput:  `foo\\{bar`,\n\t\t\texpect: `foo{bar`,\n\t\t},\n\t\t{\n\t\t\tinput:  \"foo{bar}\",\n\t\t\texpect: \"foo\",\n\t\t},\n\t\t{\n\t\t\tinput:  `foo\\{bar\\}`,\n\t\t\texpect: `foo{bar}`,\n\t\t},\n\t\t{\n\t\t\tinput:  \"}\",\n\t\t\texpect: \"}\",\n\t\t},\n\t\t{\n\t\t\tinput:  `\\}`,\n\t\t\texpect: `}`,\n\t\t},\n\t\t{\n\t\t\tinput:  \"{}\",\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{\\}`,\n\t\t\texpect: `{}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `{\"json\": \"object\"}`,\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{\"json\": \"object\"}`,\n\t\t\texpect: `{\"json\": \"object\"}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{\"json\": \"object\"\\}`,\n\t\t\texpect: `{\"json\": \"object\"}`,\n\t\t},\n\t\t{\n\t\t\tinput:  
`\\{\"json\": \"object{bar}\"\\}`,\n\t\t\texpect: `{\"json\": \"object\"}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{\"json\": \\{\"nested\": \"object\"\\}\\}`,\n\t\t\texpect: `{\"json\": {\"nested\": \"object\"}}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{\"json\": \\{\"nested\": \"{bar}\"\\}\\}`,\n\t\t\texpect: `{\"json\": {\"nested\": \"\"}}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `pre \\{\"json\": \\{\"nested\": \"{bar}\"\\}\\}`,\n\t\t\texpect: `pre {\"json\": {\"nested\": \"\"}}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{\"json\": \\{\"nested\": \"{bar}\"\\}\\} post`,\n\t\t\texpect: `{\"json\": {\"nested\": \"\"}} post`,\n\t\t},\n\t\t{\n\t\t\tinput:  `pre \\{\"json\": \\{\"nested\": \"{bar}\"\\}\\} post`,\n\t\t\texpect: `pre {\"json\": {\"nested\": \"\"}} post`,\n\t\t},\n\t\t{\n\t\t\tinput:  `{{`,\n\t\t\texpect: \"{{\",\n\t\t},\n\t\t{\n\t\t\tinput:  `{{}`,\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  `{\"json\": \"object\"\\}`,\n\t\t\texpect: \"\",\n\t\t},\n\t\t{\n\t\t\tinput:  `{unknown}`,\n\t\t\tempty:  \"-\",\n\t\t\texpect: \"-\",\n\t\t},\n\t\t{\n\t\t\tinput:  `back\\slashes`,\n\t\t\texpect: `back\\slashes`,\n\t\t},\n\t\t{\n\t\t\tinput:  `double back\\\\slashes`,\n\t\t\texpect: `double back\\\\slashes`,\n\t\t},\n\t\t{\n\t\t\tinput:  `placeholder {with \\{ brace} in name`,\n\t\t\texpect: `placeholder  in name`,\n\t\t},\n\t\t{\n\t\t\tinput:  `placeholder {with \\} brace} in name`,\n\t\t\texpect: `placeholder  in name`,\n\t\t},\n\t\t{\n\t\t\tinput:  `placeholder {with \\} \\} braces} in name`,\n\t\t\texpect: `placeholder  in name`,\n\t\t},\n\t\t{\n\t\t\tinput:  `\\{'group':'default','max_age':3600,'endpoints':[\\{'url':'https://some.domain.local/a/d/g'\\}],'include_subdomains':true\\}`,\n\t\t\texpect: `{'group':'default','max_age':3600,'endpoints':[{'url':'https://some.domain.local/a/d/g'}],'include_subdomains':true}`,\n\t\t},\n\t\t{\n\t\t\tinput:  `{}{}{}{\\\\\\\\}\\\\\\\\`,\n\t\t\texpect: `{\\\\\\}\\\\\\\\`,\n\t\t},\n\t\t{\n\t\t\tinput:  string([]byte{0x26, 0x00, 0x83, 0x7B, 0x84, 
0x07, 0x5C, 0x7D, 0x84}),\n\t\t\texpect: string([]byte{0x26, 0x00, 0x83, 0x7B, 0x84, 0x07, 0x7D, 0x84}),\n\t\t},\n\t\t{\n\t\t\tinput:  `\\\\}`,\n\t\t\texpect: `\\}`,\n\t\t},\n\t} {\n\t\tactual := rep.ReplaceAll(tc.input, tc.empty)\n\t\tif actual != tc.expect {\n\t\t\tt.Errorf(\"Test %d: '%s': expected '%s' but got '%s'\",\n\t\t\t\ti, tc.input, tc.expect, actual)\n\t\t}\n\t}\n}\n\nfunc TestReplacerSet(t *testing.T) {\n\trep := testReplacer()\n\n\tfor _, tc := range []struct {\n\t\tvariable string\n\t\tvalue    any\n\t}{\n\t\t{\n\t\t\tvariable: \"test1\",\n\t\t\tvalue:    \"val1\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"asdf\",\n\t\t\tvalue:    \"123\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"numbers\",\n\t\t\tvalue:    123.456,\n\t\t},\n\t\t{\n\t\t\tvariable: \"äöü\",\n\t\t\tvalue:    \"öö_äü\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"with space\",\n\t\t\tvalue:    \"space value\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"1\",\n\t\t\tvalue:    \"test-123\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"mySuper_IP\",\n\t\t\tvalue:    \"1.2.3.4\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"testEmpty\",\n\t\t\tvalue:    \"\",\n\t\t},\n\t} {\n\t\trep.Set(tc.variable, tc.value)\n\n\t\t// test if key is added\n\t\tif val, ok := rep.static[tc.variable]; ok {\n\t\t\tif val != tc.value {\n\t\t\t\tt.Errorf(\"Expected value '%s' for key '%s' got '%s'\", tc.value, tc.variable, val)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Errorf(\"Expected existing key '%s' found nothing\", tc.variable)\n\t\t}\n\t}\n\n\t// test if all keys are still there (by length)\n\tlength := len(rep.static)\n\tif len(rep.static) != 8 {\n\t\tt.Errorf(\"Expected length '%v' got '%v'\", 8, length)\n\t}\n}\n\nfunc TestReplacerReplaceKnown(t *testing.T) {\n\trep := Replacer{\n\t\tmapMutex: &sync.RWMutex{},\n\t\tproviders: []replacementProvider{\n\t\t\t// split our possible vars to two functions (to test if both functions are called)\n\t\t\tReplacerFunc(func(key string) (val any, ok bool) {\n\t\t\t\tswitch key {\n\t\t\t\tcase \"test1\":\n\t\t\t\t\treturn \"val1\", 
true\n\t\t\t\tcase \"asdf\":\n\t\t\t\t\treturn \"123\", true\n\t\t\t\tcase \"äöü\":\n\t\t\t\t\treturn \"öö_äü\", true\n\t\t\t\tcase \"with space\":\n\t\t\t\t\treturn \"space value\", true\n\t\t\t\tdefault:\n\t\t\t\t\treturn \"NOOO\", false\n\t\t\t\t}\n\t\t\t}),\n\t\t\tReplacerFunc(func(key string) (val any, ok bool) {\n\t\t\t\tswitch key {\n\t\t\t\tcase \"1\":\n\t\t\t\t\treturn \"test-123\", true\n\t\t\t\tcase \"mySuper_IP\":\n\t\t\t\t\treturn \"1.2.3.4\", true\n\t\t\t\tcase \"testEmpty\":\n\t\t\t\t\treturn \"\", true\n\t\t\t\tdefault:\n\t\t\t\t\treturn \"NOOO\", false\n\t\t\t\t}\n\t\t\t}),\n\t\t},\n\t}\n\n\tfor _, tc := range []struct {\n\t\ttestInput string\n\t\texpected  string\n\t}{\n\t\t{\n\t\t\t// test vars without space\n\t\t\ttestInput: \"{test1}{asdf}{äöü}{1}{with space}{mySuper_IP}\",\n\t\t\texpected:  \"val1123öö_äütest-123space value1.2.3.4\",\n\t\t},\n\t\t{\n\t\t\t// test vars with space\n\t\t\ttestInput: \"{test1} {asdf} {äöü} {1} {with space} {mySuper_IP} \",\n\t\t\texpected:  \"val1 123 öö_äü test-123 space value 1.2.3.4 \",\n\t\t},\n\t\t{\n\t\t\t// test with empty val\n\t\t\ttestInput: \"{test1} {testEmpty} {asdf} {1} \",\n\t\t\texpected:  \"val1 EMPTY 123 test-123 \",\n\t\t},\n\t\t{\n\t\t\t// test vars with not finished placeholders\n\t\t\ttestInput: \"{te{test1}{as{{df{1}\",\n\t\t\texpected:  \"{teval1{as{{dftest-123\",\n\t\t},\n\t\t{\n\t\t\t// test with non existing vars\n\t\t\ttestInput: \"{test1} {nope} {1} \",\n\t\t\texpected:  \"val1 {nope} test-123 \",\n\t\t},\n\t} {\n\t\tactual := rep.ReplaceKnown(tc.testInput, \"EMPTY\")\n\n\t\t// test if all are replaced as expected\n\t\tif actual != tc.expected {\n\t\t\tt.Errorf(\"Expected '%s' got '%s' for '%s'\", tc.expected, actual, tc.testInput)\n\t\t}\n\t}\n}\n\nfunc TestReplacerDelete(t *testing.T) {\n\trep := Replacer{\n\t\tmapMutex: &sync.RWMutex{},\n\t\tstatic: map[string]any{\n\t\t\t\"key1\": \"val1\",\n\t\t\t\"key2\": \"val2\",\n\t\t\t\"key3\": \"val3\",\n\t\t\t\"key4\": 
\"val4\",\n\t\t},\n\t}\n\n\tstartLen := len(rep.static)\n\n\ttoDel := []string{\n\t\t\"key2\", \"key4\",\n\t}\n\n\tfor _, key := range toDel {\n\t\trep.Delete(key)\n\n\t\t// test if key is removed from static map\n\t\tif _, ok := rep.static[key]; ok {\n\t\t\tt.Errorf(\"Expected '%s' to be removed. It is still in static map.\", key)\n\t\t}\n\t}\n\n\t// check if static slice is smaller\n\texpected := startLen - len(toDel)\n\tactual := len(rep.static)\n\tif len(rep.static) != expected {\n\t\tt.Errorf(\"Expected length '%v' got length '%v'\", expected, actual)\n\t}\n}\n\nfunc TestReplacerMap(t *testing.T) {\n\trep := testReplacer()\n\n\tfor i, tc := range []ReplacerFunc{\n\t\tfunc(key string) (val any, ok bool) {\n\t\t\treturn \"\", false\n\t\t},\n\t\tfunc(key string) (val any, ok bool) {\n\t\t\treturn \"\", false\n\t\t},\n\t} {\n\t\trep.Map(tc)\n\n\t\t// test if function (which listens on specific key) is added by checking length\n\t\tif len(rep.providers) == i+1 {\n\t\t\t// check if the last function is the one we just added\n\t\t\tpTc := fmt.Sprintf(\"%p\", tc)\n\t\t\tpRep := fmt.Sprintf(\"%p\", rep.providers[i])\n\t\t\tif pRep != pTc {\n\t\t\t\tt.Errorf(\"Expected func pointer '%s' got '%s'\", pTc, pRep)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Errorf(\"Expected providers length '%v' got length '%v'\", i+1, len(rep.providers))\n\t\t}\n\t}\n}\n\nfunc TestReplacerNew(t *testing.T) {\n\trepl := NewReplacer()\n\n\tif len(repl.providers) != 3 {\n\t\tt.Errorf(\"Expected providers length '%v' got length '%v'\", 3, len(repl.providers))\n\t}\n\n\t// test if default global replacements are added as the first provider\n\thostname, _ := os.Hostname()\n\twd, _ := os.Getwd()\n\tos.Setenv(\"CADDY_REPLACER_TEST\", \"envtest\")\n\tdefer os.Setenv(\"CADDY_REPLACER_TEST\", \"\")\n\n\tfor _, tc := range []struct {\n\t\tvariable string\n\t\tvalue    string\n\t}{\n\t\t{\n\t\t\tvariable: \"system.hostname\",\n\t\t\tvalue:    hostname,\n\t\t},\n\t\t{\n\t\t\tvariable: 
\"system.slash\",\n\t\t\tvalue:    string(filepath.Separator),\n\t\t},\n\t\t{\n\t\t\tvariable: \"system.os\",\n\t\t\tvalue:    runtime.GOOS,\n\t\t},\n\t\t{\n\t\t\tvariable: \"system.arch\",\n\t\t\tvalue:    runtime.GOARCH,\n\t\t},\n\t\t{\n\t\t\tvariable: \"system.wd\",\n\t\t\tvalue:    wd,\n\t\t},\n\t\t{\n\t\t\tvariable: \"env.CADDY_REPLACER_TEST\",\n\t\t\tvalue:    \"envtest\",\n\t\t},\n\t} {\n\t\tif val, ok := repl.providers[0].replace(tc.variable); ok {\n\t\t\tif val != tc.value {\n\t\t\t\tt.Errorf(\"Expected value '%s' for key '%s' got '%s'\", tc.value, tc.variable, val)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Errorf(\"Expected key '%s' to be recognized by first provider\", tc.variable)\n\t\t}\n\t}\n\n\t// test if file provider is added as the second provider\n\tfor _, tc := range []struct {\n\t\tvariable string\n\t\tvalue    string\n\t}{\n\t\t{\n\t\t\tvariable: \"file.caddytest/integration/testdata/foo.txt\",\n\t\t\tvalue:    \"foo\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"file.caddytest/integration/testdata/foo_with_trailing_newline.txt\",\n\t\t\tvalue:    \"foo\",\n\t\t},\n\t\t{\n\t\t\tvariable: \"file.caddytest/integration/testdata/foo_with_multiple_trailing_newlines.txt\",\n\t\t\tvalue:    \"foo\" + getEOL(),\n\t\t},\n\t} {\n\t\tif val, ok := repl.providers[1].replace(tc.variable); ok {\n\t\t\tif val != tc.value {\n\t\t\t\tt.Errorf(\"Expected value '%s' for key '%s' got '%s'\", tc.value, tc.variable, val)\n\t\t\t}\n\t\t} else {\n\t\t\tt.Errorf(\"Expected key '%s' to be recognized by second provider\", tc.variable)\n\t\t}\n\t}\n}\n\nfunc getEOL() string {\n\tif os.PathSeparator == '\\\\' {\n\t\treturn \"\\r\\n\" // Windows EOL\n\t}\n\treturn \"\\n\" // Unix and modern macOS EOL\n}\n\nfunc TestReplacerNewWithoutFile(t *testing.T) {\n\trepl := NewReplacer().WithoutFile()\n\n\tfor _, tc := range []struct {\n\t\tvariable string\n\t\tvalue    string\n\t\tnotFound bool\n\t}{\n\t\t{\n\t\t\tvariable: \"file.caddytest/integration/testdata/foo.txt\",\n\t\t\tnotFound: 
true,\n\t\t},\n\t\t{\n\t\t\tvariable: \"system.os\",\n\t\t\tvalue:    runtime.GOOS,\n\t\t},\n\t} {\n\t\tif val, ok := repl.Get(tc.variable); ok {\n\t\t\tif tc.notFound {\n\t\t\t\tt.Errorf(\"Expected key '%s' to be unrecognized, got value '%s'\", tc.variable, val)\n\t\t\t} else if val != tc.value {\n\t\t\t\tt.Errorf(\"Expected value '%s' for key '%s' got '%s'\", tc.value, tc.variable, val)\n\t\t\t}\n\t\t} else if !tc.notFound {\n\t\t\tt.Errorf(\"Expected key '%s' to be recognized\", tc.variable)\n\t\t}\n\t}\n}\n\nfunc BenchmarkReplacer(b *testing.B) {\n\ttype testCase struct {\n\t\tname, input, empty string\n\t}\n\n\trep := testReplacer()\n\trep.Set(\"str\", \"a string\")\n\trep.Set(\"int\", 123.456)\n\n\tfor _, bm := range []testCase{\n\t\t{\n\t\t\tname:  \"no placeholder\",\n\t\t\tinput: `simple string`,\n\t\t},\n\t\t{\n\t\t\tname:  \"string replacement\",\n\t\t\tinput: `str={str}`,\n\t\t},\n\t\t{\n\t\t\tname:  \"int replacement\",\n\t\t\tinput: `int={int}`,\n\t\t},\n\t\t{\n\t\t\tname:  \"placeholder\",\n\t\t\tinput: `{\"json\": \"object\"}`,\n\t\t},\n\t\t{\n\t\t\tname:  \"escaped placeholder\",\n\t\t\tinput: `\\{\"json\": \\{\"nested\": \"{bar}\"\\}\\}`,\n\t\t},\n\t} {\n\t\tb.Run(bm.name, func(b *testing.B) {\n\t\t\tfor b.Loop() {\n\t\t\t\trep.ReplaceAll(bm.input, bm.empty)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc testReplacer() Replacer {\n\treturn Replacer{\n\t\tproviders: make([]replacementProvider, 0),\n\t\tstatic:    make(map[string]any),\n\t\tmapMutex:  &sync.RWMutex{},\n\t}\n}\n"
  },
  {
    "path": "service_windows.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"golang.org/x/sys/windows/svc\"\n\n\t\"github.com/caddyserver/caddy/v2/notify\"\n)\n\nfunc init() {\n\tisService, err := svc.IsWindowsService()\n\tif err != nil || !isService {\n\t\treturn\n\t}\n\n\t// Windows services always start in the system32 directory; try to\n\t// switch into the directory where the caddy executable is.\n\texecPath, err := os.Executable()\n\tif err == nil {\n\t\t_ = os.Chdir(filepath.Dir(execPath))\n\t}\n\n\tgo func() {\n\t\t_ = svc.Run(\"\", runner{})\n\t}()\n}\n\ntype runner struct{}\n\nfunc (runner) Execute(args []string, request <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {\n\tnotify.SetGlobalStatus(status)\n\tstatus <- svc.Status{State: svc.StartPending}\n\n\tfor {\n\t\treq := <-request\n\t\tswitch req.Cmd {\n\t\tcase svc.Interrogate:\n\t\t\tstatus <- req.CurrentStatus\n\t\tcase svc.Stop, svc.Shutdown:\n\t\t\tstatus <- svc.Status{State: svc.StopPending}\n\t\t\texitProcessFromSignal(\"SIGINT\")\n\t\t\treturn false, 0\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "sigtrap.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"os\"\n\t\"os/signal\"\n\n\t\"go.uber.org/zap\"\n)\n\n// TrapSignals creates signal/interrupt handlers as best it can for the\n// current OS. This is a rather invasive function to call in a Go program\n// that captures signals already, so in that case it would be better to\n// implement these handlers yourself.\nfunc TrapSignals() {\n\ttrapSignalsCrossPlatform()\n\ttrapSignalsPosix()\n}\n\n// trapSignalsCrossPlatform captures SIGINT or interrupt (depending\n// on the OS), which initiates a graceful shutdown. A second SIGINT\n// or interrupt will forcefully exit the process immediately.\nfunc trapSignalsCrossPlatform() {\n\tgo func() {\n\t\tshutdown := make(chan os.Signal, 1)\n\t\tsignal.Notify(shutdown, os.Interrupt)\n\n\t\tfor i := 0; true; i++ {\n\t\t\t<-shutdown\n\n\t\t\tif i > 0 {\n\t\t\t\tLog().Warn(\"force quit\", zap.String(\"signal\", \"SIGINT\"))\n\t\t\t\tos.Exit(ExitCodeForceQuit)\n\t\t\t}\n\n\t\t\tLog().Info(\"shutting down\", zap.String(\"signal\", \"SIGINT\"))\n\t\t\tgo exitProcessFromSignal(\"SIGINT\")\n\t\t}\n\t}()\n}\n\n// exitProcessFromSignal exits the process from a system signal.\nfunc exitProcessFromSignal(sigName string) {\n\tlogger := Log().With(zap.String(\"signal\", sigName))\n\texitProcess(context.TODO(), logger)\n}\n\n// Exit codes. 
Generally, you should NOT\n// automatically restart the process if the\n// exit code is ExitCodeFailedStartup (1).\nconst (\n\tExitCodeSuccess = iota\n\tExitCodeFailedStartup\n\tExitCodeForceQuit\n\tExitCodeFailedQuit\n)\n"
  },
  {
    "path": "sigtrap_nonposix.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build windows || plan9 || nacl || js\n\npackage caddy\n\nfunc trapSignalsPosix() {}\n"
  },
  {
    "path": "sigtrap_posix.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//go:build !windows && !plan9 && !nacl && !js\n\npackage caddy\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"os\"\n\t\"os/signal\"\n\t\"syscall\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.uber.org/zap\"\n)\n\n// trapSignalsPosix captures POSIX-only signals.\nfunc trapSignalsPosix() {\n\t// Ignore all SIGPIPE signals to prevent weird issues with systemd: https://github.com/dunglas/frankenphp/issues/1020\n\t// Docker/Moby has a similar hack: https://github.com/moby/moby/blob/d828b032a87606ae34267e349bf7f7ccb1f6495a/cmd/dockerd/docker.go#L87-L90\n\tsignal.Ignore(syscall.SIGPIPE)\n\n\tgo func() {\n\t\tsigchan := make(chan os.Signal, 1)\n\t\tsignal.Notify(sigchan, syscall.SIGTERM, syscall.SIGHUP, syscall.SIGQUIT, syscall.SIGUSR1, syscall.SIGUSR2)\n\n\t\tfor sig := range sigchan {\n\t\t\tswitch sig {\n\t\t\tcase syscall.SIGQUIT:\n\t\t\t\tLog().Info(\"quitting process immediately\", zap.String(\"signal\", \"SIGQUIT\"))\n\t\t\t\tcertmagic.CleanUpOwnLocks(context.TODO(), Log()) // try to clean up locks anyway, it's important\n\t\t\t\tos.Exit(ExitCodeForceQuit)\n\n\t\t\tcase syscall.SIGTERM:\n\t\t\t\tLog().Info(\"shutting down apps, then terminating\", zap.String(\"signal\", \"SIGTERM\"))\n\t\t\t\texitProcessFromSignal(\"SIGTERM\")\n\n\t\t\tcase syscall.SIGUSR1:\n\t\t\t\tlogger := Log().With(zap.String(\"signal\", 
\"SIGUSR1\"))\n\t\t\t\t// If we know the last source config file/adapter (set when starting\n\t\t\t\t// via `caddy run --config <file> --adapter <adapter>`), attempt\n\t\t\t\t// to reload from that source. Otherwise, ignore the signal.\n\t\t\t\tfile, adapter, reloadCallback := getLastConfig()\n\t\t\t\tif file == \"\" {\n\t\t\t\t\tlogger.Info(\"last config unknown, ignored SIGUSR1\")\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tlogger = logger.With(\n\t\t\t\t\tzap.String(\"file\", file),\n\t\t\t\t\tzap.String(\"adapter\", adapter))\n\t\t\t\tif reloadCallback == nil {\n\t\t\t\t\tlogger.Warn(\"no reload helper available, ignored SIGUSR1\")\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tlogger.Info(\"reloading config from last-known source\")\n\t\t\t\tif err := reloadCallback(file, adapter); errors.Is(err, errReloadFromSourceUnavailable) {\n\t\t\t\t\t// No reload helper available (likely not started via caddy run).\n\t\t\t\t\tlogger.Warn(\"reload from source unavailable in this process; ignored SIGUSR1\")\n\t\t\t\t} else if err != nil {\n\t\t\t\t\tlogger.Error(\"failed to reload config from file\", zap.Error(err))\n\t\t\t\t} else {\n\t\t\t\t\tlogger.Info(\"successfully reloaded config from file\")\n\t\t\t\t}\n\n\t\t\tcase syscall.SIGUSR2:\n\t\t\t\tLog().Info(\"not implemented\", zap.String(\"signal\", \"SIGUSR2\"))\n\n\t\t\tcase syscall.SIGHUP:\n\t\t\t\t// ignore; this signal is sometimes sent outside of the user's control\n\t\t\t\tLog().Info(\"not implemented\", zap.String(\"signal\", \"SIGHUP\"))\n\t\t\t}\n\t\t}\n\t}()\n}\n"
  },
  {
    "path": "storage.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\n\t\"github.com/caddyserver/certmagic\"\n\t\"go.uber.org/zap\"\n)\n\n// StorageConverter is a type that can convert itself\n// to a valid, usable certmagic.Storage value. (The\n// value might be short-lived.) This interface allows\n// us to adapt any CertMagic storage implementation\n// into a consistent API for Caddy configuration.\ntype StorageConverter interface {\n\tCertMagicStorage() (certmagic.Storage, error)\n}\n\n// HomeDir returns the best guess of the current user's home\n// directory from environment variables. If unknown, \".\" (the\n// current directory) is returned instead, except GOOS=android,\n// which returns \"/sdcard\".\nfunc HomeDir() string {\n\thome := homeDirUnsafe()\n\tif home == \"\" && runtime.GOOS == \"android\" {\n\t\thome = \"/sdcard\"\n\t}\n\tif home == \"\" {\n\t\thome = \".\"\n\t}\n\treturn home\n}\n\n// homeDirUnsafe is a low-level function that returns\n// the user's home directory from environment\n// variables. Careful: if it cannot be determined, an\n// empty string is returned. 
If not accounting for\n// that case, use HomeDir() instead; otherwise you\n// may end up using the root of the file system.\nfunc homeDirUnsafe() string {\n\thome := os.Getenv(\"HOME\")\n\tif home == \"\" && runtime.GOOS == \"windows\" {\n\t\tdrive := os.Getenv(\"HOMEDRIVE\")\n\t\tpath := os.Getenv(\"HOMEPATH\")\n\t\thome = drive + path\n\t\tif drive == \"\" || path == \"\" {\n\t\t\thome = os.Getenv(\"USERPROFILE\")\n\t\t}\n\t}\n\tif home == \"\" && runtime.GOOS == \"plan9\" {\n\t\thome = os.Getenv(\"home\")\n\t}\n\treturn home\n}\n\n// AppConfigDir returns the directory where to store user's config.\n//\n// If XDG_CONFIG_HOME is set, it returns: $XDG_CONFIG_HOME/caddy.\n// Otherwise, os.UserConfigDir() is used; if successful, it appends\n// \"Caddy\" (Windows & Mac) or \"caddy\" (every other OS) to the path.\n// If it returns an error, the fallback path \"./caddy\" is returned.\n//\n// The config directory is not guaranteed to be different from\n// AppDataDir().\n//\n// Unlike os.UserConfigDir(), this function prefers the\n// XDG_CONFIG_HOME env var on all platforms, not just Unix.\n//\n// Ref: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html\nfunc AppConfigDir() string {\n\tif basedir := os.Getenv(\"XDG_CONFIG_HOME\"); basedir != \"\" {\n\t\treturn filepath.Join(basedir, \"caddy\")\n\t}\n\tbasedir, err := os.UserConfigDir()\n\tif err != nil {\n\t\tLog().Warn(\"unable to determine directory for user configuration; falling back to current directory\", zap.Error(err))\n\t\treturn \"./caddy\"\n\t}\n\tsubdir := \"caddy\"\n\tswitch runtime.GOOS {\n\tcase \"windows\", \"darwin\":\n\t\tsubdir = \"Caddy\"\n\t}\n\treturn filepath.Join(basedir, subdir)\n}\n\n// AppDataDir returns a directory path that is suitable for storing\n// application data on disk. 
It uses the environment for finding the\n// best place to store data, and appends a \"caddy\" or \"Caddy\" (depending\n// on OS and environment) subdirectory.\n//\n// For a base directory path:\n// If XDG_DATA_HOME is set, it returns: $XDG_DATA_HOME/caddy; otherwise,\n// on Windows it returns: %AppData%/Caddy,\n// on Mac: $HOME/Library/Application Support/Caddy,\n// on Plan9: $home/lib/caddy,\n// on Android: $HOME/caddy,\n// and on everything else: $HOME/.local/share/caddy.\n//\n// If a data directory cannot be determined, it returns \"./caddy\"\n// (this is not ideal, and the environment should be fixed).\n//\n// The data directory is not guaranteed to be different from AppConfigDir().\n//\n// Ref: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html\nfunc AppDataDir() string {\n\tif basedir := os.Getenv(\"XDG_DATA_HOME\"); basedir != \"\" {\n\t\treturn filepath.Join(basedir, \"caddy\")\n\t}\n\tswitch runtime.GOOS {\n\tcase \"windows\":\n\t\tappData := os.Getenv(\"AppData\")\n\t\tif appData != \"\" {\n\t\t\treturn filepath.Join(appData, \"Caddy\")\n\t\t}\n\tcase \"darwin\":\n\t\thome := homeDirUnsafe()\n\t\tif home != \"\" {\n\t\t\treturn filepath.Join(home, \"Library\", \"Application Support\", \"Caddy\")\n\t\t}\n\tcase \"plan9\":\n\t\thome := homeDirUnsafe()\n\t\tif home != \"\" {\n\t\t\treturn filepath.Join(home, \"lib\", \"caddy\")\n\t\t}\n\tcase \"android\":\n\t\thome := homeDirUnsafe()\n\t\tif home != \"\" {\n\t\t\treturn filepath.Join(home, \"caddy\")\n\t\t}\n\tdefault:\n\t\thome := homeDirUnsafe()\n\t\tif home != \"\" {\n\t\t\treturn filepath.Join(home, \".local\", \"share\", \"caddy\")\n\t\t}\n\t}\n\treturn \"./caddy\"\n}\n\n// ConfigAutosavePath is the default path to which the last config will be persisted.\nvar ConfigAutosavePath = filepath.Join(AppConfigDir(), \"autosave.json\")\n\n// DefaultStorage is Caddy's default storage module.\nvar DefaultStorage = &certmagic.FileStorage{Path: AppDataDir()}\n"
  },
  {
    "path": "usagepool.go",
    "content": "// Copyright 2015 Matthew Holt and The Caddy Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage caddy\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\t\"sync/atomic\"\n)\n\n// UsagePool is a thread-safe map that pools values\n// based on usage (reference counting). Values are\n// only inserted if they do not already exist. There\n// are two ways to add values to the pool:\n//\n//  1. LoadOrStore will increment usage and store the\n//     value immediately if it does not already exist.\n//  2. LoadOrNew will atomically check for existence\n//     and construct the value immediately if it does\n//     not already exist, or increment the usage\n//     otherwise, then store that value in the pool.\n//     When the constructed value is finally deleted\n//     from the pool (when its usage reaches 0), it\n//     will be cleaned up by calling Destruct().\n//\n// The use of LoadOrNew allows values to be created\n// and reused and finally cleaned up only once, even\n// though they may have many references throughout\n// their lifespan. This is helpful, for example, when\n// sharing thread-safe io.Writers that you only want\n// to open and close once.\n//\n// There is no way to overwrite existing keys in the\n// pool without first deleting them as many times as\n// they were stored. 
Deleting too many times will panic.\n//\n// The implementation does not use a sync.Pool because\n// UsagePool needs additional atomicity to run the\n// constructor functions when creating a new value when\n// LoadOrNew is used. (We could probably use sync.Pool\n// but we'd still have to layer our own additional locks\n// on top.)\n//\n// An empty UsagePool is NOT safe to use; always call\n// NewUsagePool() to make a new one.\ntype UsagePool struct {\n\tsync.RWMutex\n\tpool map[any]*usagePoolVal\n}\n\n// NewUsagePool returns a new usage pool that is ready to use.\nfunc NewUsagePool() *UsagePool {\n\treturn &UsagePool{\n\t\tpool: make(map[any]*usagePoolVal),\n\t}\n}\n\n// LoadOrNew loads the value associated with key from the pool if it\n// already exists. If the key doesn't exist, it will call construct\n// to create a new value and then stores that in the pool. An error\n// is only returned if the constructor returns an error. The loaded\n// or constructed value is returned. The loaded return value is true\n// if the value already existed and was loaded, or false if it was\n// newly constructed.\nfunc (up *UsagePool) LoadOrNew(key any, construct Constructor) (value any, loaded bool, err error) {\n\tvar upv *usagePoolVal\n\tup.Lock()\n\tupv, loaded = up.pool[key]\n\tif loaded {\n\t\tatomic.AddInt32(&upv.refs, 1)\n\t\tup.Unlock()\n\t\tupv.RLock()\n\t\tvalue = upv.value\n\t\terr = upv.err\n\t\tupv.RUnlock()\n\t} else {\n\t\tupv = &usagePoolVal{refs: 1}\n\t\tupv.Lock()\n\t\tup.pool[key] = upv\n\t\tup.Unlock()\n\t\tvalue, err = construct()\n\t\tif err == nil {\n\t\t\tupv.value = value\n\t\t} else {\n\t\t\tupv.err = err\n\t\t\tup.Lock()\n\t\t\t// this *should* be safe, I think, because we have a\n\t\t\t// write lock on upv, but we might also need to ensure\n\t\t\t// that upv.err is nil before doing this, since we\n\t\t\t// released the write lock on up during construct...\n\t\t\t// but then again it's also after midnight...\n\t\t\tdelete(up.pool, 
key)\n\t\t\tup.Unlock()\n\t\t}\n\t\tupv.Unlock()\n\t}\n\treturn value, loaded, err\n}\n\n// LoadOrStore loads the value associated with key from the pool if it\n// already exists, or stores it if it does not exist. It returns the\n// value that was either loaded or stored, and true if the value already\n// existed and was loaded, false if the value didn't exist and was stored.\nfunc (up *UsagePool) LoadOrStore(key, val any) (value any, loaded bool) {\n\tvar upv *usagePoolVal\n\tup.Lock()\n\tupv, loaded = up.pool[key]\n\tif loaded {\n\t\tatomic.AddInt32(&upv.refs, 1)\n\t\tup.Unlock()\n\t\tupv.Lock()\n\t\tif upv.err == nil {\n\t\t\tvalue = upv.value\n\t\t} else {\n\t\t\tupv.value = val\n\t\t\tupv.err = nil\n\t\t}\n\t\tupv.Unlock()\n\t} else {\n\t\tupv = &usagePoolVal{refs: 1, value: val}\n\t\tup.pool[key] = upv\n\t\tup.Unlock()\n\t\tvalue = val\n\t}\n\treturn value, loaded\n}\n\n// Range iterates the pool similarly to how sync.Map.Range() does:\n// it calls f for every key in the pool, and if f returns false,\n// iteration is stopped. Ranging does not affect usage counts.\n//\n// This method is somewhat naive and acquires a read lock on the\n// entire pool during iteration, so do your best to make f() really\n// fast, m'kay?\nfunc (up *UsagePool) Range(f func(key, value any) bool) {\n\tup.RLock()\n\tdefer up.RUnlock()\n\tfor key, upv := range up.pool {\n\t\tupv.RLock()\n\t\tif upv.err != nil {\n\t\t\tupv.RUnlock()\n\t\t\tcontinue\n\t\t}\n\t\tval := upv.value\n\t\tupv.RUnlock()\n\t\tif !f(key, val) {\n\t\t\tbreak\n\t\t}\n\t}\n}\n\n// Delete decrements the usage count for key and removes the\n// value from the underlying map if the usage is 0. 
It returns\n// true if the usage count reached 0 and the value was deleted.\n// It panics if the usage count drops below 0; always call\n// Delete precisely as many times as LoadOrStore.\nfunc (up *UsagePool) Delete(key any) (deleted bool, err error) {\n\tup.Lock()\n\tupv, ok := up.pool[key]\n\tif !ok {\n\t\tup.Unlock()\n\t\treturn false, nil\n\t}\n\trefs := atomic.AddInt32(&upv.refs, -1)\n\tif refs == 0 {\n\t\tdelete(up.pool, key)\n\t\tup.Unlock()\n\t\tupv.RLock()\n\t\tval := upv.value\n\t\tupv.RUnlock()\n\t\tif destructor, ok := val.(Destructor); ok {\n\t\t\terr = destructor.Destruct()\n\t\t}\n\t\tdeleted = true\n\t} else {\n\t\tup.Unlock()\n\t\tif refs < 0 {\n\t\t\tpanic(fmt.Sprintf(\"deleted more than stored: %#v (usage: %d)\",\n\t\t\t\tupv.value, upv.refs))\n\t\t}\n\t}\n\treturn deleted, err\n}\n\n// References returns the number of references (count of usages) to a\n// key in the pool, and true if the key exists, or false otherwise.\nfunc (up *UsagePool) References(key any) (int, bool) {\n\tup.RLock()\n\tupv, loaded := up.pool[key]\n\tup.RUnlock()\n\tif loaded {\n\t\t// I wonder if it'd be safer to read this value during\n\t\t// our lock on the UsagePool... guess we'll see...\n\t\trefs := atomic.LoadInt32(&upv.refs)\n\t\treturn int(refs), true\n\t}\n\treturn 0, false\n}\n\n// Constructor is a function that returns a new value\n// that can destruct itself when it is no longer needed.\ntype Constructor func() (Destructor, error)\n\n// Destructor is a value that can clean itself up when\n// it is deallocated.\ntype Destructor interface {\n\tDestruct() error\n}\n\ntype usagePoolVal struct {\n\trefs  int32 // accessed atomically; must be 64-bit aligned for 32-bit systems\n\tvalue any\n\terr   error\n\tsync.RWMutex\n}\n"
  }
]